Increasing Liabilities of AI
With the proliferation of machine learning and predictive analytics, the FTC should make use of its unfairness authority to tackle discriminatory algorithms and practices in the economy.
That statement came from FTC Commissioner Rohit Chopra towards the end of May. The fact that these words were followed by a more formal blog post on AI from the regulator - in the middle of a global pandemic - underscores what is happening today: liabilities arising from the use of algorithmic decision-making are increasing, and that holds true regardless of any new federal regulations on AI.
Anyone watching the rapid adoption of artificial intelligence will also have noticed that the growing liability attached to algorithmic decision-making systems, which typically incorporate AI and machine learning, stems in part from a newer development: the longer regulators wait to discuss new rules for AI, the more its use grows. And as its use grows, the detrimental effects the technology can have are becoming startlingly clear.
Automated tenant screening systems, for instance, which the publication The Markup recently investigated, have been plagued by inaccuracies that have generated millions of dollars in lawsuits and fines.
With about half of the nation’s 43 million rentals turning over every year, even an error rate of 1 percent could upend the lives of hundreds of thousands of people.
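To see where "hundreds of thousands" comes from, here is a quick back-of-the-envelope calculation using the figures above; the 1 percent error rate is illustrative, just as it is in the sentence it follows.

```python
# Back-of-the-envelope check of the scale described above.
total_rentals = 43_000_000           # "the nation's 43 million rentals"
annual_turnover = total_rentals / 2  # "about half ... turning over every year"
error_rate = 0.01                    # an illustrative 1 percent error rate

affected = annual_turnover * error_rate
print(f"{affected:,.0f} renters potentially affected per year")  # 215,000
```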
Among the unfortunate, for example, is Hector Hernandez-Garcia, who, alongside his wife and newborn son, became temporarily homeless after being mistakenly profiled by such an algorithm. Hernandez-Garcia sued; the company settled.
Another precedent is the Michigan Integrated Data Automated System, used by the state to monitor filings for unemployment benefits, which was also recently alleged to have falsely accused thousands of citizens of fraud. Class action lawsuits have been filed against the state, citing a myriad of problems with the system and demonstrating how automated systems can cause harms that are hard to detect.
Then there is the recent lawsuit against Clearview AI, filed in Illinois near the end of May by the American Civil Liberties Union (ACLU) and a leading privacy class action law firm, alleging that the company’s algorithms breached the state’s Biometric Information Privacy Act. The Act, which other states have sought to imitate in recent years, limits the ways data such as fingerprints and facial images can be used, with fines of up to $5,000 per infringement.
The U.K. and Australian information commissioners have announced a joint probe into the controversial “data scraping” practices of facial recognition company Clearview AI, whose system scrapes social media sites like Facebook and Twitter for images of people’s faces. Image Credit: BuzzfeedNews
In other words, the list of lawsuits, fines and other liabilities created by artificial intelligence is just getting longer and longer. The non-profit Partnership on AI even recently released an AI incident database to track how models can be misused or go awry.
All of which means that organisations adopting AI are creating concrete liabilities in the process. Indeed, these harms are becoming more frequent and more apparent to regulators and customers alike every day. And the global pandemic, by pressuring organisations to embrace modern technology, is likely to accelerate the use of AI even further.
So what can companies do?
The first answer is to always have methods in place for when AI causes harm. There is a growing field of AI incident response - similar to traditional cybersecurity incident response - focused on creating plans for how to react and minimise the damage when an algorithm misbehaves. This type of algorithmic misbehaviour might have internal causes (when the data the AI was trained on differs too widely from the data it encounters in real-life applications) or external causes (when a hacker attempts to manipulate the algorithm).
The figure above shows, on the left, the internal consumers who would be affected by this kind of algorithmic misbehaviour and, on the right, the external consumers who would be affected. Image Credit: bmcblogs
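As a rough illustration of the internal case - training data drifting away from what the model sees in production - a monitoring job might compare feature distributions between the two. The Python sketch below is a minimal, hypothetical example: the feature names, the alert threshold and the choice of a two-sample Kolmogorov-Smirnov test are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flag features whose production distribution has drifted
# away from the training distribution (a common internal cause of incidents).
# Feature names and the alert threshold below are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

ALERT_P_VALUE = 0.01  # illustrative threshold for raising an alert


def check_feature_drift(training_data: dict, production_data: dict) -> list:
    """Return the names of features whose distributions differ significantly."""
    drifted = []
    for feature, train_values in training_data.items():
        prod_values = production_data.get(feature)
        if prod_values is None:
            continue  # feature not logged in production; nothing to compare
        # Two-sample Kolmogorov-Smirnov test compares the two distributions.
        result = ks_2samp(train_values, prod_values)
        if result.pvalue < ALERT_P_VALUE:
            drifted.append(feature)
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = {"reported_income": rng.normal(50_000, 10_000, 5_000)}
    production = {"reported_income": rng.normal(65_000, 10_000, 5_000)}  # shifted
    print(check_feature_drift(training, production))  # ['reported_income']
```

A check like this would typically run on a schedule against logged production inputs, with any alert feeding into the incident response plan rather than replacing it.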
Whatever the cause, there is a range of materials lawyers can use to help their organisations prepare for these cases should they occur, like this series of articles focused on legal planning for the adoption of AI.
The second suggestion is to ask the right questions in order to mitigate major risks before they emerge. To aid lawyers in this role, bnh.ai, a boutique law firm focused on AI and analytics, teamed up with the non-profit Future of Privacy Forum last month to release a set of questions called “10 Questions on AI Risk”. These questions help guide lawyers as they seek to understand key areas of liability generated by AI.
The last, and certainly most important, thing companies should do is not wait until an incident occurs to address the risks. When incidents do occur, it is not simply the incident itself that regulators or plaintiffs scrutinise, but the entire system in which the incident took place. This means that reasonable practices for security, privacy, auditing, documentation, testing and more all have a key role in reducing the dangers of AI. Once the harm materialises, it is usually too late to avoid the most serious damages.
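To make the testing point concrete, here is a minimal, hypothetical sketch of the kind of pre-deployment check a team might run and document: comparing a model’s false positive rate across groups and failing the check if the gap exceeds a tolerance. The group labels, the tolerance and the pass/fail rule are illustrative assumptions rather than any regulator-endorsed standard.

```python
# Minimal sketch of a documented pre-deployment test: compare false positive
# rates across groups and fail the check if the gap exceeds a tolerance.
# Group labels and the tolerance are hypothetical.
import numpy as np

MAX_FPR_GAP = 0.02  # illustrative tolerance


def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true negatives that the model incorrectly flags as positive."""
    negatives = y_true == 0
    if negatives.sum() == 0:
        return 0.0
    return float((y_pred[negatives] == 1).mean())


def audit_fpr_gap(y_true, y_pred, groups) -> dict:
    """Return per-group false positive rates and whether the check passes."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {
        g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "passed": gap <= MAX_FPR_GAP}


if __name__ == "__main__":
    y_true = [0, 0, 1, 0, 0, 1, 0, 0]
    y_pred = [0, 1, 1, 0, 1, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(audit_fpr_gap(y_true, y_pred, groups))
```

Recording the result of such a check alongside the model’s documentation is exactly the kind of evidence that becomes valuable once regulators or plaintiffs start scrutinising the wider system.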
An ounce of prevention, to quote the old proverb, is worth a pound of cure. And that’s true now more than ever for organisations adopting AI.
Article Thumbnail Credit: https://www.law.com/legaltechnews/2018/03/29/the-role-of-artificial-intelligence-in-legal-operations/