UK Passport Scanner Deemed 'Racist'
The Home Office has been briefed multiple times about AI-integrated facial recognition technology in passport scanners. The briefings follow a report suggesting that the lighter your skin, the better facial recognition systems at airports work for you. Artificial intelligence appears to be racially biased, and the government does not seem to be paying as much attention to the problem as it should.
The government has placed heavy emphasis, research, and investment into improving the efficiency of border control and immigration at airports across the United Kingdom by implementing these smart AI face-scanning systems. Using advanced three-dimensional facial scanning, the systems can identify an individual in fine detail. This makes life easier for frequent travellers and improves the overall airport experience: it is as simple as placing your passport in the designated area, looking at a camera, and walking straight through the gates.
However, as efficient and beneficial as this may be to both airports and travellers, research suggests that the lighter your skin, the better AI-based facial recognition systems work for you. The United Kingdom launched a passport programme in 2016 in which users submit a passport photo and complete the application process online.
Since its inception, many people of colour have reported various issues when submitting their passport photos, issues which do not seem to be apparent among white applicants. Problems include, but are not limited to, the system's inability to recognise open eyes or a shut mouth.
It turns out that artificial intelligence is surprisingly good at reproducing systemic racism: a system trained on too little diverse data struggles to recognise the faces it has rarely seen. This facial recognition system is not the only technology under fire for being 'racist'. Tech giants such as Google and Amazon have faced controversy and criticism too, with Google accused of exploiting black homeless people to collect facial data and Amazon selling its racially biased recognition software to law enforcement. If these two tech giants haven't solved the underlying causes of racial bias in AI, what confidence can travellers have that the UK government has the problem under control?
With artificial intelligence on the rise and being implemented heavily in almost every aspect of our lives, corporations need to investigate more thoroughly the risks associated with AI-based facial recognition systems, including systemic racism.