The State of Superintelligent AI

The Matrix, Terminator, Iron Man – the list of films in which some type of AI suddenly goes rogue and tries to take over the planet is long. The idea has been around since the 1950s, but it seems we’re getting closer to a world where computers are as smart as humans, or even smarter. The narrative of a robot revolution or ‘AI taking over the world’ is so overused that it has become a cliché. The question is: what will that actually look like? Will the machines be able to rebel against us? Will we put AI into humanoid robots? Will the internet start to think? Professor Nick Bostrom, a faculty member at the University of Oxford and director of the Future of Humanity Institute, calls this phenomenon superintelligence. Superintelligence is just what it sounds like: a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. But is it beyond our control? Artificial intelligence is getting smarter by leaps and bounds – within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values – or will they have values of their own?

Robin Williams as Andrew Martin in the film Bicentennial Man (1999). Based on a truly great story by Isaac Asimov, the plot revolves around an NDR android servant of the Martin family that seeks to become human. Copyright: © 2015 Disney Enterprises, Inc.

Before jumping to any conclusions, we should look at the current state of AI. Three lessons about the state of artificial intelligence show that it’s up to all of us to make our future a good one: (1) AI used to be limited by hardware, but now it’s mostly a question of knowledge. (2) There are two different ways to design superintelligent computers. (3) Superintelligence must be the result of global collaboration, not some secret government program, or we’re screwed.

Ever since the legendary scientist Alan Turing described the “Turing machine”, a theoretical model of a device that systematically follows and executes instructions in an automated way, computer scientists have wondered how to get machines to truly think like us. The Dartmouth Summer Research Project on Artificial Intelligence, held in 1956, was one of the first proper workshops in this area, and although the following years showed some results, like machines solving math problems or writing music, AI soon hit its limits – the hardware simply didn’t suffice to process all the information needed for really complex tasks. It took until the ’80s for the hardware to finally catch up, and the ’90s to gather the knowledge to build an AI. Now modelled much more closely on the brain’s neural systems and on human genetics, AI has made its way deep into our daily lives – think smartphones and Google. You may know that AIs have beaten humans at chess (Garry Kasparov defeated by Deep Blue) and at quizzes like Jeopardy! (IBM’s Watson beat champions Ken Jennings and Brad Rutter) – but there’s a catch: those AIs were custom-built, designed specifically to ace one particular task.

What we’re currently doing with AI is essentially teaching computers to imitate human thinking. Computers use logic to navigate a wealth of information, calculate probabilities, and then take shortcuts humans couldn’t come up with to imitate our behaviour – just faster. As described above, this requires access to tons of data in real time, and that’s a drag. An alternative would be to get computers to simulate the human brain, not just imitate it. This is called WBE – whole brain emulation – and would lead to a computer that’s sort of like a child: equipped with basic information about the world and the ability to learn the rest on its own. To achieve this, we wouldn’t even need to decode the whole human brain; we would just need to be able to copy it. However, this might require us to take actual human brains, get the information out of them, and upload it somehow. Sounds like Minority Report? Well, that’s also about how far away it is.

Just as there are two technological paths to superintelligence, there are also two social ways it might be developed. One is again very similar to what you see in a lot of movies: some secret government unit or program toils away behind closed doors for years until it emerges with a brand-new piece of highly superior technology – you know, something like the A-bomb. In this scenario, a small group would produce one single super-intelligent machine. That might give its country a strategic advantage over all others – but it would also be a problem. Because if only one such machine exists, it only takes one set of evil hands to wipe out our entire species. And if something goes wrong, there aren’t enough people with the skills to fix it either. The only way this can work is the second scenario: a worldwide collaboration to gradually develop superintelligence, supported by humankind working together in unison. Such a team effort would ensure that every step taken is the safest one, because many parties and the public oversee the project, developing safety regulations along the way. It would not be as fast, but it’s sure as hell safer.

The most sophisticated machines created so far are intelligent in only a limited sense. Only if research is guided and managed within a strict ethical framework can we create a superintelligent AI safely. Our ethics will help us balance the risks and engineer a superintelligence with preferences that make it friendly to humans – or at least able to be controlled.

References

  1. Superintelligence: Paths, Dangers, Strategies by Nick Bostrom; Oxford University Press, 2014

  2. TED Talk by Nick Bostrom, TED2015 (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)

Thumbnail credit: Valeriy Kachaev / Alamy Stock Vector