Is Facial Recognition Technology a Step Too Far?
On March 22, a hearing was held in the House Committee on Oversight and Government Reform to review the Federal Bureau of Investigation’s use of facial recognition technology and other programmes.
Facial recognition is a technology that uses an algorithm to analyse a subject’s facial features. Law enforcement agencies across the country have used it to verify a person’s identity and to discern potential suspects in their investigations.
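At its core, such a system converts each face into a numerical feature vector (an “embedding”) and compares vectors for similarity. The sketch below is a minimal, hypothetical illustration of that matching step, with tiny toy vectors standing in for the high-dimensional embeddings a real face-recognition model would produce; the names, threshold and data are all invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, gallery, threshold=0.8):
    """Return IDs of gallery entries whose embedding matches the probe."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 4-dimensional "embeddings" for two enrolled subjects.
gallery = {
    "subject_a": [0.9, 0.1, 0.2, 0.3],
    "subject_b": [0.1, 0.8, 0.7, 0.1],
}
probe = [0.88, 0.12, 0.22, 0.28]  # embedding of the face being searched
print(find_matches(probe, gallery))  # → ['subject_a']
```

The choice of threshold is the crux: set it low and the system returns many false matches; set it high and genuine matches are missed, a trade-off that recurs later in this article.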
The FBI has two facial recognition programmes: the Next Generation Identification Interstate Photo System (NGI-IPS) and Facial Analysis, Comparison, and Evaluation Services (FACE Services).

The left image depicts the NGI-IPS programme, which is stated to host ‘a database of 24.9 million mugshots’. The right image depicts FACE Services, which can access over 411.9 million photos drawn from driver’s licences, passports and visas. Image Credit: Google.
A finding by the Government Accountability Office was also brought up during the hearing: the Department of Justice published a privacy impact assessment for the NGI-IPS programme in 2008, but, according to the report, the FBI failed to update that assessment after the programme underwent “significant changes.” A privacy impact assessment is an audit that helps organisations identify and manage the privacy risks that arise from their systems.
House Oversight Committee Chairman Jason Chaffetz (R-Utah) questioned Kimberly Del Greco, Deputy Assistant Director of the Criminal Justice Information Services Division, on why the FBI failed for years to publicly publish the privacy impact assessment for the FACE Services programme, yet still deployed facial recognition technology in real-world applications.
You’re required by law to put out a privacy statement and you didn’t. And now we’re supposed to trust you with hundreds of millions of people’s faces in a system...
The privacy assessment for the NGI-IPS and FACE Services can be viewed here and here respectively.
The latest trend in facial recognition, FaceApp, took the internet by storm. FaceApp, a mobile application (iOS/Android) created by the Russian company Wireless Lab, used AI to create frighteningly realistic transformations of photographs of faces.

Want to look older? Younger? Change gender? This application (FaceApp) can do that, and while it may be fun to see what you could look like in 20 to 30 years, the fact is that this kind of information can be decidedly detrimental to one’s privacy. Image Credit: Google.
A recent study published by Georgetown Law’s Center on Privacy &amp; Technology, titled “The Perpetual Line-Up,” states that 117 million American adults, or roughly one in two, have their photo in a facial recognition database.
The real issue is the privacy and protection of users’ data. Reluctance to use this kind of service is to be expected after the Cambridge Analytica scandal, in which millions of people had their personal data misused after answering a fun personality quiz online. Individuals became alert to how their data was being accessed, used or sold on to someone else.
Examples include Ever, a picture storage app that used its users’ photos to train facial recognition software later sold to law enforcement; IBM, which used Flickr photos to train facial recognition systems without permission; and PopSugar’s application, which left users’ pictures publicly accessible at the unsecured web address where the photos were stored.

FaceApp’s privacy statement: although it says, “we will not rent or sell your information to third parties outside FaceApp,” the information is still shared with “third-party organisations.” Image Credit: Google
Facebook is also facing trouble over its facial recognition technology. A class-action suit has been filed in the federal courts alleging that Facebook’s app violates users’ privacy: the U.S. Court of Appeals affirmed a lower court’s ruling certifying a class action alleging that Facebook’s “Tag Suggestions,” a facial recognition feature, violated the Illinois Biometric Information Privacy Act (BIPA). BIPA, passed in 2008, protects against the unlawful collection and storage of biometric information, and is the only such law that allows private individuals to sue for damages under it.
In addition, Facebook has faced growing pressure over other alleged privacy violations, including a staggering $5 billion fine levied by the Federal Trade Commission (FTC) in relation to consumer privacy violations.
Statements Facebook has made over the past several years about its facial recognition tools and use of data have allegedly been misleading, and both the US Securities and Exchange Commission (SEC) and the FTC have fined the company for exploitative practices aimed at mining data that is not its to share.
Privacy may be in jeopardy when facial recognition technology is deployed in real-world applications, even though the technology has clear benefits. It could be built into CCTV systems to monitor for potential criminal activity, and at government level it can help identify terrorists. Unlike a password, there is no stored credential to steal or change, so it can also serve as a security tool for locking personal devices. However, it may deter people from protesting or marching. Police officers use body cameras for transparency, but equipping those cameras with real-time facial recognition could instil fear in people through the prospect of constant surveillance.
The FBI has tested and verified that the NGI FR Solution returns the correct candidate within the top 50 candidates a minimum of 85 percent of the time.
That leaves up to a 15 percent chance of failing to return the correct match, and opens the door to someone being wrongfully identified, or even convicted.
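The 85-percent figure is a “top-50 hit rate”: the fraction of searches in which the true identity appears anywhere in the 50 candidates returned. As a hedged illustration of what that metric measures, the sketch below computes such a rate over a set of searches; the function names and data are hypothetical, not the FBI’s actual evaluation method.

```python
def top_k_hit(ranked_candidates, true_identity, k=50):
    """True if the correct identity appears among the top-k candidates."""
    return true_identity in ranked_candidates[:k]

def top_k_accuracy(searches, k=50):
    """Fraction of searches whose true identity was returned in the top k.

    `searches` is a list of (ranked_candidates, true_identity) pairs,
    where ranked_candidates is ordered best match first.
    """
    hits = sum(top_k_hit(ranked, truth, k) for ranked, truth in searches)
    return hits / len(searches)

# Two hypothetical searches: one hit, one miss.
searches = [
    (["id_7", "id_3", "id_9"], "id_3"),  # hit: true identity at rank 2
    (["id_1", "id_2", "id_4"], "id_8"),  # miss: true identity never returned
]
print(top_k_accuracy(searches, k=50))  # → 0.5
```

Note what the metric does not say: even on a “hit,” the correct person may sit at rank 50 behind 49 innocent look-alikes, which is why a high top-50 rate still leaves room for misidentification.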
The concern in this case, and in others of a similar nature, is that facial recognition technology and the data behind it are advancing largely unregulated, and the information is sensitive enough that the public welfare needs to be protected by a statute such as BIPA, and that such a statute needs to be rigorously enforced.