The Weakness of AI Surveillance

It has been known for a long time that AI and surveillance systems go well together; in fact, China is implementing this very idea as we speak. The Chinese social credit system links a person's face to their records: if a camera detects their face while they commit even a minor offence, the system automatically lowers a score that reflects how beneficial they are to the government, *cough*, I mean society.

The whole system depends entirely on whether it can accurately detect people's faces, without confusing two similar-looking people or missing a face altogether. Over time, however, it has become clear that neural networks are very easy to confuse if you know what you're doing.

Each input pixel feeds into the network through a series of weights that contribute to the detection, and this is exactly what an attacker can exploit. By designing an image whose pixels line up with those weights in just the right way, you can push the network's output towards whatever answer you choose. For example, suppose an AI system detects people walking past a camera: you could craft a picture that, when it appears in the camera's view, produces a very strong signal saying "no one is here". You could then walk through without being picked up, because the network is convinced the scene is empty.
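
To make this concrete, here is a minimal sketch of one well-known recipe for crafting such an image, the Fast Gradient Sign Method, written in PyTorch. The `model`, `image`, and `true_label` here are hypothetical placeholders, not taken from any real surveillance system:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the loss,
    so the image looks unchanged to a human but misleads the model."""
    image = image.clone().detach().requires_grad_(True)
    output = model(image)                       # forward pass: the model's prediction
    loss = F.cross_entropy(output, true_label)  # how "wrong" the model currently is
    loss.backward()                             # gradient of the loss w.r.t. each pixel
    # Step each pixel by +/- epsilon along the sign of its own gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()       # keep pixel values in a valid range
```

The key point is that the gradient tells the attacker exactly how each pixel's weighting contributes to the output, so a tiny, carefully aligned change to every pixel adds up to one very strong, very wrong signal.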

These fooling images can be built in two broad ways. Direct encoding roughly means breaking the system pixel by pixel: the attack image is specified as a raw grid of pixel values, each tweaked individually. Indirect encoding means breaking the system with regular, generated patterns that the network finds familiar, described by a compact set of parameters rather than by every pixel.
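
As a rough illustration of the difference, here is a small NumPy sketch; it is not a real attack tool, it just shows how a candidate fooling image might be represented under each encoding:

```python
import numpy as np

def mutate_direct(image, rate=0.01):
    """Direct encoding: the candidate IS the raw pixel grid, and the search
    tweaks individual pixels independently."""
    mask = np.random.rand(*image.shape) < rate          # which pixels to touch
    noise = np.random.uniform(-0.1, 0.1, image.shape)   # small random changes
    return np.clip(image + mask * noise, 0, 1)

def render_indirect(params, size=64):
    """Indirect encoding: the candidate is a handful of parameters that
    generate a regular, pattern-like image (here, simple sine waves)."""
    fx, fy, phase = params
    xs, ys = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
    return 0.5 + 0.5 * np.sin(2 * np.pi * (fx * xs + fy * ys) + phase)
```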

In fact, this has been shown time and time again with systems that try to detect particular objects. If the AI isn't constantly adapting, pictures could emerge on the dark net that throw off cameras, and a criminal could use one to confuse the system and break into your house. This is especially easy when the detection system is accessible to the public: an attacker can simply feed images into the network, observe how it reacts, and use those reactions to search for images that mislead it, as the sketch below shows.
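
If the detector really is publicly accessible, even a crude trial-and-error probe can work. Here is a minimal sketch of that idea; `detector_score` is a hypothetical stand-in for whatever public API returns the system's confidence that a person is present:

```python
import numpy as np

def random_search_attack(detector_score, image, steps=1000, strength=0.05):
    """Hill-climb on random pixel noise, keeping any change that lowers
    the detector's confidence. No access to the model's weights is needed."""
    best = image.copy()
    best_score = detector_score(best)
    for _ in range(steps):
        noise = np.random.uniform(-strength, strength, image.shape)
        candidate = np.clip(best + noise, 0, 1)
        score = detector_score(candidate)
        if score < best_score:   # the detector is now less confident
            best, best_score = candidate, score
    return best
```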

But is it all harm? In China, the technology is being used to track people of a certain ethnicity; if someone from that group fools the system in order to walk around freely, are they really doing something bad? In the future, more governments could adopt this strategy for controlling their populations, at which point this technology could actually be the saviour of those who wish to be free.