The Danger of 'Deepfakes' - A New Wave of AI-Enabled Cybercrime
The growth of the AI field walks hand in hand with its integration into modern society. From the virtual assistants sitting on our shelves and resting in our pockets, to the hidden AI that tells us the fastest route to the cinema and verifies our identity when we walk through ePassport gates, it is clear that AI is becoming ingrained in almost every element of our online and personal interactions. However, AI isn't always used for good.
A recent study published in Crime Science and funded by the Dawes Centre for Future Crime at UCL looked into the future of AI-enabled crime, the threats it poses, and how they might be stopped. When the researchers ranked these crimes by the harm and profit they could generate, how achievable they would be, and how hard they would be to prevent, the threat of video and audio impersonation rose above all others.
A ranking of different AI-enabled crimes from the study.
What are deepfakes?
Photographic evidence is no longer as infallible as it once was. It's difficult for us to distrust something we can see with our very own eyes, but now, thanks to the AI-based technology behind deepfakes, convincing images, videos and even audio can be created of anyone doing practically anything. Deepfakes are built using GANs (Generative Adversarial Networks), in which one machine learning model, the generator, creates the forgeries, while a second, the discriminator, tries to detect them. The two models are trained against each other until the discriminator can no longer tell that the footage is fake, producing a final result that is practically indistinguishable from the real thing.
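To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It is not code from the study or from any deepfake tool: the tiny networks, the one-dimensional "real" data and the hyperparameters are placeholder assumptions chosen for brevity, but the structure, a generator trying to fool a discriminator while the discriminator tries to catch it, is the same idea scaled down.

```python
# Minimal GAN training loop (illustrative only).
# Real deepfake systems use large image/video networks trained in the same way.
import torch
import torch.nn as nn

latent_dim = 16

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Toy "real" data: samples drawn from a normal distribution centred at 4.
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into scoring its fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Each pass through the loop sharpens both sides: a better discriminator forces the generator to produce more convincing fakes, which is exactly why the finished forgeries are so hard to spot.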
As an example of deepfakes, here you can see the technology used to replace the face of an actress in the film Man of Steel with that of actor Nicolas Cage.
While many people use deepfakes to paste Nicolas Cage's face into as many films as they can, or to make their favourite celebrity say something funny, this kind of AI forgery has worrying potential for malicious use. Deepfakes currently have to follow a set script, but they are expected to eventually become interactive. They could then be used to trick parents into handing over login or personal details to (what they believe to be) their ever-forgetful child, to fabricate blackmail material, or to release videos of a political candidate making campaign-wrecking statements, to name just a few possibilities. The mere existence of deepfakes also calls into question the legitimacy of any visual evidence presented in court, as we can no longer be certain that what we see is genuine, undermining what is today considered reliable and often case-winning evidence.
But how can deepfakes be stopped? Unfortunately, part of the reason deepfakes are considered such a serious threat is that they are very difficult to detect. Because they are produced with GANs, deepfakes are engineered from the outset to be as hard to spot as possible. Even as deepfake detection improves, so does the sophistication of the deepfakes being produced, turning the task of spotting fake footage into a moving goalpost: every gain in our ability to catch fakes is matched by an increase in their quality. Ultimately, the study suggests that a drastic change in the way we perceive and trust visual evidence might be the only effective defence we have against this new threat.
For further reading on this study: https://crimesciencejournal.biomedcentral.com/articles/10.1186/s40163-020-00123-8#Fig2
Thumbnail Credit: PLUME CREATIVE/GETTY IMAGES