The Complexity Of Bias

Bias in algorithms is not a new topic of interest; researchers have been tackling it for years. As technology and artificial intelligence develop rapidly and become increasingly integrated into our daily lives, a minor bias present when an algorithm is first run can have profound effects once the algorithm is used regularly by the wider community: it could lead to fewer girls applying for STEM jobs, fewer black prisoners being granted parole, or fewer minority candidates advancing through job interview rounds. But the complexity of bias proves to be greater than what the statistics alone represent.

In the US, the COMPAS algorithm is used to judge whether a criminal is likely to reoffend, using data such as age at arrest, sentence length, education level and an interview, and then assigning a score between 1 and 10 indicating the likelihood of reoffending. A few years ago, a study concluded that clear bias existed within the algorithm: the false positive rate (the rate at which offenders who would not in fact have reoffended are classified as likely to reoffend) was much higher for black offenders (46.9%) than for white offenders (23.5%). However, when the COMPAS creators responded, they argued that the algorithm should be judged on the quality of its predictions for both races. Although more black offenders were predicted to be high risk, 63.0% of the black offenders and 59.1% of the white offenders labelled high risk went on to reoffend, and the similarity of these figures implies that the algorithm was properly calibrated for both white and black defendants.
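
To make the two competing measures concrete, here is a minimal sketch in Python. The counts are invented for illustration (they are not the real COMPAS data); it simply shows how the false positive rate highlighted by the critics and the predictive quality cited by the creators are computed from the same set of predictions.

```python
# Hypothetical confusion-matrix counts for one group (not real COMPAS data).
true_positives = 300   # predicted high risk, did reoffend
false_positives = 200  # predicted high risk, did not reoffend
true_negatives = 400   # predicted low risk, did not reoffend
false_negatives = 100  # predicted low risk, did reoffend

# The measure the critics focus on: of those who did NOT reoffend,
# what fraction were wrongly labelled high risk?
false_positive_rate = false_positives / (false_positives + true_negatives)

# The measure the creators focus on: of those labelled high risk,
# what fraction actually went on to reoffend?
positive_predictive_value = true_positives / (true_positives + false_positives)

print(f"False positive rate:           {false_positive_rate:.1%}")
print(f"Predictive value of high risk: {positive_predictive_value:.1%}")
```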

So which of these arguments is true? Another study found that the COMPAS algorithm gave equally good predictions independent of race. The researchers showed that if an algorithm is equally reliable for two groups, and one group is more likely to reoffend than the other, then it is impossible for the false positive rates of the two groups to be the same. If black defendants reoffend more frequently, then they have a larger probability of being incorrectly placed in a higher risk category. Any other result would mean that the algorithm was not calibrated equally for both races and would have to evaluate white and black defendants differently. So while, to the untrained eye, the gap in false positive rates may look like bias, equalising the false positive rates would require treating each race differently, directly contradicting the principle of equal treatment.
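
The arithmetic behind this tension can be sketched with made-up numbers (these are illustrative assumptions, not the real COMPAS figures). In the Python sketch below, both groups receive predictions of identical quality, yet because one group has a higher underlying reoffending rate, its false positive rate comes out considerably higher.

```python
# A minimal sketch of the calibration-versus-false-positives tension,
# using invented numbers (illustrative assumptions, not real COMPAS figures).
#
# Both groups get predictions of identical "quality":
#   - 60% of people flagged high risk go on to reoffend
#   - 20% of people flagged low risk go on to reoffend
# The only difference is how many people in each group are flagged.

P_REOFFEND_GIVEN_HIGH = 0.6   # same calibration for both groups
P_REOFFEND_GIVEN_LOW = 0.2

def false_positive_rate(share_flagged_high: float) -> float:
    """FPR = P(flagged high risk | did not reoffend)."""
    share_flagged_low = 1.0 - share_flagged_high
    # People who do NOT reoffend, split by how they were classified.
    non_reoffenders_high = share_flagged_high * (1 - P_REOFFEND_GIVEN_HIGH)
    non_reoffenders_low = share_flagged_low * (1 - P_REOFFEND_GIVEN_LOW)
    return non_reoffenders_high / (non_reoffenders_high + non_reoffenders_low)

# Group A has a higher underlying reoffending rate, so more of its members
# are flagged high risk; group B has a lower rate.
for name, share_high in [("group A (70% flagged)", 0.7),
                         ("group B (30% flagged)", 0.3)]:
    base_rate = (share_high * P_REOFFEND_GIVEN_HIGH
                 + (1 - share_high) * P_REOFFEND_GIVEN_LOW)
    fpr = false_positive_rate(share_high)
    print(f"{name}: base rate {base_rate:.0%}, false positive rate {fpr:.0%}")
```

With these assumed numbers, group A ends up with a reoffending base rate of 48% and a false positive rate of about 54%, while group B ends up with a base rate of 32% and a false positive rate of about 18%, even though the predictions are equally reliable for both groups.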

However, if we consider each argument in its social context, we see that the argument about bias in the false positive rates is powerful: the higher error rate affects millions of offenders seeking parole who could have done good for society had it been granted. Yet altering the algorithm to evaluate people differently because of their racial background is itself a root of injustice. Algorithms that are unbiased in this statistical sense are nevertheless what lead to fewer women applying for STEM jobs, fewer minority candidates being accepted for high-profile interviews, and many more outcomes like these.

While artificial intelligence has given us capabilities we would not have imagined a century ago, when we tackle issues of social and moral justice the age-old adage holds true: fairness does not consist of logic alone. Until artificial intelligence can bring empathy and an awareness of the inherent inequalities in society to bear, it is best to leave some of these life-changing decisions to humans.

REFERENCES

Outnumbered by David Sumpter

Cover photo: https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html