Fooling an AI-Powered Surveillance Camera with a Simple Pattern 'Trick'

As artificial intelligence is increasingly integrated into surveillance systems, the power that governments and companies hold over individual anonymity is coming under scrutiny. AI surveillance cameras can distinguish between people and objects, making it easier to identify individuals. Although this sort of technology could be put to good use, many fear that advances in such capable software erode anonymity.

However, researchers across the world keep proving that no matter how advanced our artificial intelligence-based systems may be, they can still be tricked. Such tricks include the ability to essentially go ‘off the radar’ of a surveillance camera fitted with object-recognition technology.

Researchers at KU Leuven in Belgium have shared their latest work on a trick they used to fool an AI-powered surveillance camera. The ‘trick’ was rather simple: it did not involve any complex algorithm or a team of individuals working tirelessly to break the system. All it took was placing a specially designed printed pattern on a person, which the camera’s detection algorithm was unable to pick up. The person was simply not detected by the camera, almost as if they were invisible.

By printing off the specially designed pattern and hanging it around the neck area, a person can prevent the AI-powered surveillance camera from recognising that anyone is in view. The researchers wrote: “we believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras.”

AI-camera fooled by simple pattern ‘trick’

Although this may sound unusual, it is in fact a well-known phenomenon in the world of artificial intelligence, where such crafted inputs are known as adversarial examples. These patterns exploit the inflexible way computer vision systems interpret images, fooling a detector into concluding that no one is there even when a live person is walking right in front of the camera.
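To illustrate the underlying idea, below is a minimal sketch of how such an adversarial patch can be optimised. Everything here is an illustrative assumption rather than the KU Leuven authors’ actual code: the tiny stand-in detector, the fixed patch location, the random placeholder images and all the hyperparameters are invented for the example, whereas the researchers attacked a real pretrained object detector.

```python
import torch
import torch.nn as nn

# Stand-in "person detector": a tiny conv net that emits one confidence
# score per image. The real attack targets a pretrained object detector;
# this placeholder only makes the example self-contained.
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
for p in detector.parameters():
    p.requires_grad_(False)  # freeze the model; only the patch is optimised

def apply_patch(images, patch, box=(80, 80, 144, 144)):
    """Paste the patch onto a fixed region of each image. A real attack
    would place it on the detected person and add random scaling and
    rotation so the printed pattern keeps working in the real world."""
    x1, y1, x2, y2 = box
    patched = images.clone()
    patched[:, :, y1:y2, x1:x2] = patch
    return patched

patch = torch.rand(3, 64, 64, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([patch], lr=0.03)
images = torch.rand(4, 3, 224, 224)                # placeholder photos of people

for step in range(200):
    scores = detector(apply_patch(images, patch.clamp(0, 1)))
    loss = scores.mean()  # drive the "person" confidence towards zero
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The published attack adds further terms to this objective, penalising colours that cannot be printed and textures that are not smooth, so that the optimised patch still works once it has been printed on paper and filmed by a camera.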

Despite this, many researchers have warned against such tricks, because fooling a surveillance camera is not all fun and games when the same approach could be used to jeopardise people's lives. This sort of research could be used to fool a self-driving car into mistakenly recognising a person as a green light, for example. Other examples of fooling an AI system include tricking the medical AI vision systems that are designed to identify diseases.

Still, this research is somewhat reassuring in that it shows public anonymity still exists: people do not have to worry that artificially intelligent surveillance cameras are advanced enough to be completely foolproof (yet). As mentioned above, though, this sort of trickery could lead to more serious incidents in which computer systems are fooled in order to deliberately cause harm to someone, whether directly or indirectly.

This sort of research could promote dangerous activity like fooling self-driving cars

Zacharia Sharif