AI in Producing Fake Images
Machine learning algorithms have come a long way. Over the course of several years, AI-powered voice assistants such as Siri, Alexa, and Google Assistant have improved significantly. It is hard to appreciate how far research and knowledge in artificial intelligence have come. We’re so surrounded by artificial intelligence that we often fail to recognise the many leaps made in recent years. Recently, however, two images showed the huge advances machine learning has made, and why we’re in for a new age of mischief and online fakery.
The first image was presented by Ian Goodfellow, the director of machine learning at Apple’s Special Projects Group. Each of the faces below was generated by an AI. Starting with the faces on the left, from 2014, you can see just how much artificial intelligence has advanced over the years and how much more capable it has become.
Tech giants have come to recognise the potential and the benefits of integrating artificial intelligence into their systems, and this has led to a surge in money invested into research. Not only that, but we’ve seen the rise of many AI start-ups offering applications of artificial intelligence in virtually every field.
In 2014, machine learning algorithms could only generate faces that looked grainy, like something you might see on a low-quality surveillance camera. And they looked generic, like an average of many human faces, not like a real person.
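The technique behind these faces is the generative adversarial network (GAN), introduced by Goodfellow and colleagues in 2014: one network (the generator) turns random noise into fake samples, while a second network (the discriminator) tries to tell fakes from real data, and the two improve by competing. The sketch below is only a conceptual illustration, with tiny made-up layer sizes and untrained random weights, nothing like a real face model.

```python
import numpy as np

# Conceptual sketch of the GAN structure (generator vs. discriminator).
# All dimensions and weights are illustrative assumptions, not a real model.

rng = np.random.default_rng(0)

NOISE_DIM = 8     # size of the random noise vector fed to the generator (assumed)
SAMPLE_DIM = 16   # size of a flattened "image" (assumed; real faces are far larger)

# Untrained random weights for one linear layer in each network.
G_W = rng.normal(scale=0.1, size=(NOISE_DIM, SAMPLE_DIM))
D_W = rng.normal(scale=0.1, size=(SAMPLE_DIM, 1))

def generator(z):
    """Map a batch of noise vectors to fake samples (one linear layer + tanh)."""
    return np.tanh(z @ G_W)

def discriminator(x):
    """Score each sample with a probability of being 'real' (sigmoid output)."""
    return 1.0 / (1.0 + np.exp(-(x @ D_W)))

# One adversarial round, conceptually: draw noise, generate fakes, score them.
noise = rng.normal(size=(4, NOISE_DIM))
fakes = generator(noise)        # shape (4, SAMPLE_DIM), values in (-1, 1)
scores = discriminator(fakes)   # shape (4, 1), values in (0, 1)
```

In training, the discriminator’s scores on real and fake batches drive gradient updates that push the generator toward ever more convincing output, which is why the 2014 faces look grainy and the recent ones look photographic.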
Artificial intelligence has come a long way, but with advances in generating fake images and videos, it can be used to easily manipulate stories and spread false news. This is bad news, especially in an era when we are already combating the spread of false and misleading information.