The War on Deepfakes: AI and Neural Networks Can Now Detect Fake Images

Deepfakes have taken the world by storm, from Samsung bringing the Mona Lisa to life to apps such as FaceApp that show you how you will look when you are older. If there is one thing to take away from these advances, it is that deepfakes are becoming more and more realistic, especially as huge corporations like Microsoft pour more investment into AI. The technology can be used for fun, but its more dangerous applications worry many people around the world. In June, deepfake videos of Mark Zuckerberg and Kim Kardashian were released to showcase how realistic these fake videos have become, raising the prospect that powerful figures, such as politicians, could be shown saying things they would never say. The problem does not end with notable people, however: 'normal' people like you and I can be deepfaked too. The key problem with deepfakes is that humans find it hard to distinguish a real video from a fake one, so we might not be able to tell a deepfake is a deepfake. This is where Prof. Amit K. Roy-Chowdhury comes in.

It’s a challenging problem; this is kind of a cat-and-mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.
— Prof. Amit K. Roy-Chowdhury, Professor at University of California, Riverside

The professor led a team of researchers from the University of California, Riverside, to develop a deep neural network, made up of long short-term memory (LSTM) cells and an encoder-decoder network, that can detect deepfake photos. The training data fed to the neural network consisted of real and manipulated images. After the training phase, the neural network was given new images, and the researchers say it was able to spot deepfakes "most of the time". Another feature of this ingenious neural network is that it is able to "segment out manipulated regions from non-manipulated ones", which means humans can see what part of the image was altered.
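To give a sense of what an encoder-decoder that "segments out" manipulated regions does, here is a minimal, illustrative sketch, not the researchers' actual model: a toy single-channel encoder-decoder in plain NumPy whose output is a per-pixel probability that the pixel was manipulated. All layer shapes, kernels, and weights below are invented for illustration.

```python
import numpy as np

def conv2d(x, kernel):
    """Naive 'same'-padded 2D convolution over a single-channel image."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def toy_encoder_decoder(image, enc_kernel, dec_kernel):
    """Encoder: convolution + ReLU feature map.
    Decoder: convolution + sigmoid, mapping features back to a
    per-pixel 'manipulated' probability mask of the same size."""
    features = np.maximum(conv2d(image, enc_kernel), 0.0)  # encoder
    mask = sigmoid(conv2d(features, dec_kernel))           # decoder
    return mask

# Hypothetical example: random image and random (untrained) kernels.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
enc_k = rng.standard_normal((3, 3)) * 0.1
dec_k = rng.standard_normal((3, 3)) * 0.1
mask = toy_encoder_decoder(img, enc_k, dec_k)
# mask holds one probability in (0, 1) per pixel
```

In a real system the kernels would be learned from the labeled real/manipulated training images the article mentions, and the mask would highlight the altered region.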

This neural network detects fake images by looking at the boundaries between different objects in the image. When someone Photoshops an image, they might add or remove objects, and to make them look like they were originally part of the image, they smooth the boundaries between those objects and the rest of the image. This makes the photo look normal to the naked eye, but a computer can detect the abnormalities in the pixels at the parts of the image that were artificially added.
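As a rough illustration of that boundary idea: a pasted-in region often has pixel statistics (noise level, sharpness) that differ from its surroundings even after the seam is smoothed. The sketch below is a simplified stand-in for the actual method; it flags pixels whose local high-frequency energy (Laplacian response) is an outlier relative to the rest of the image. The threshold, window, and test image are all arbitrary assumptions.

```python
import numpy as np

def laplacian(x):
    """High-frequency response via a 3x3 Laplacian filter."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    xp = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

def flag_anomalies(image, z_thresh=2.0):
    """Mark pixels whose high-frequency energy is a statistical
    outlier for this image -- a crude cue for spliced content."""
    hf = np.abs(laplacian(image))
    z = (hf - hf.mean()) / (hf.std() + 1e-9)
    return z > z_thresh

# Hypothetical example: a smooth background with a noisy "pasted" patch.
rng = np.random.default_rng(1)
img = rng.random((16, 16)) * 0.1           # low-contrast background
img[4:8, 4:8] += rng.random((4, 4))        # inserted high-contrast patch
suspect = flag_anomalies(img)               # boolean mask of flagged pixels
```

Real detectors use far richer learned features, but the principle is the same: pixels around an inserted object carry statistical fingerprints that the eye misses.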

Real and fake photos of Meghan Trainor. It is very difficult to tell which one is real and fake if we don’t look at the labels. (Source: brooke candyxox/YouTube)


Even though the results of the research were satisfactory and suggested that neural networks might be the answer to harmful deepfakes, there is still more work to be done. As of right now, only still images can be tested, but in the future, the researchers wish to detect deepfake videos as well. Prof. Roy-Chowdhury believes that the same concept can be applied to videos.

If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another.
— Prof. Roy-Chowdhury

As promising as this research is, Prof. Roy-Chowdhury said that automated deepfake detection may not be possible in the near future and said “if you want to look at everything that’s on the internet, a human can’t do it on the one hand, and an automated system probably can’t do it reliably. So it has to be a mix of the two.” Nevertheless, neural networks are definitely going to be worthy contenders in the war to end deepfakes.

Thumbnail Source: Truth Syrup/YouTube; ScreenSlam/YouTube