Did Google Exploit Homeless Black People For Their Facial Recognition AI?
Data collection is a critical stage in building artificial intelligence algorithms. Unfortunately, it is also where the biggest ethical issues arise, since people's data is stored and used to train the AI. Google recently came under fire after The New York Daily News reported that contractors working on behalf of the tech giant were using unethical methods to collect data to improve Google's facial recognition AI.
The contractors asked black people to play a "selfie game" in exchange for $5. The game was played on the contractors' mobile phones and required users to follow a dot on the screen, among other tasks. Little did they know, the phone was actually capturing images of their faces while they played, which would later be used to train the AI.
The people approached by the contractors were homeless people or students, and many of them said they were unaware of the data collection when they signed the legal disclaimer. According to the report, Google told the contractors to target homeless people because they "didn't know what was going on" and were "the least likely to say anything to the media."
Google responded to these claims by saying that it collected data on this demographic because darker-skinned faces are underrepresented in image datasets, which leads to bias. The data was being used to refine the face unlock feature on Google's upcoming Pixel 4 smartphone.
While the premise of the project is rather virtuous, Google failed to make it clear to the users who played its 'game' that photos of them were being taken. This lack of transparency drew significant backlash, and Google has since suspended the project and released a statement: