How to Combat the Dangers of Superintelligent AI

The prospect of limitless expansion granted by the development of AI is certainly an inviting one. While investment and interest in the field only grows by every passing year, we can only imagine what we might have to come.

Dreams of technological utopias granted by super-intelligent computers are contrasted with those of an AI lead dystopia, and with many top researchers believing we will see the arrival of AGI within the century, it is down to the actions we take now to influence which future we see. While some believe that only Luddites worry about the power AI could one-day hold over humanity, the reality is that most top AI academics carry a similar concern for its more grim potential.

 

image credit: InfoWorld

 

We can't get a second attempt at Powerful AI. Unlike other groundbreaking developments for humanity, if we go wrong there is no opportunity to try again and learn from our mistakes. So what can we do to ensure we get it right the first time?

The trick to securing our ideal AI utopia is ensuring that their goals do not become misaligned with our own; AI would not become "evil" in the same sense that many fear, the real issue is it making sure it could understand our intentions and goals. AI is remarkably good at doing what we tell it, but when given free rein, it will often achieve the goal we set in a way we never expected. Without proper preparation, a well-intended instruction could lead to catastrophic events, perhaps due to an unforeseen side effect, or in a more extreme example, the AI could even see humans as a threat to fully completing the task set.

The potential benefits of super-intelligent AI are so limitless that there is no question in the continued development towards it. However, to prevent AGI from being a threat to humanity, we need to invest in AI safety research. In this race, we must learn how to effectively control a powerful AI before we learn how to make it.

Thumbnail Source: IEEE Spectrum