How much do you trust AI?

Machine learning is simply the study of statistical techniques used to convert a large amount of confusing input into useful output. Despite being nothing more than a bit of mathematics, machine learning enables us to build systems with complex decision-making abilities. In a process called training (or learning), the numerical values of many parameters of the statistical model are adjusted to maximise the "correctness" of the system's output. However, it is during this period of learning that the system can develop inaccurate and dangerous biases.
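To make that adjustment process concrete, here is a minimal sketch (a toy model with a single adjustable parameter, not any real system): each pass over the example data nudges the parameter to reduce the model's error.

```python
# A toy "training" loop: repeatedly nudge a model's numerical
# parameter so its predictions better match the examples.
# (Illustrative only; real systems tune millions of parameters.)

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output)

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.05  # how far each correction moves the parameter

for step in range(200):
    for x, target in examples:
        prediction = weight * x              # the model's current guess
        error = prediction - target         # how wrong the guess is
        weight -= learning_rate * error * x  # adjust to shrink the error

print(f"learned weight: {weight:.3f}")  # settles near 2.0
```

Real models do exactly this with millions of parameters, which is part of why the relationships they learn are so hard to inspect.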

A great deal of distrust in artificial intelligence has arisen from the general public discovering how easy it is for machine learning models to develop these biases. Combined with AI being placed in ever more important roles, where bad decisions carry serious ramifications, this has led to a growing sense of anxiety.

A key example of this is the recent case of Amazon using an artificial intelligence model in hiring: the model made sweeping generalisations which led to women being unfairly less likely to be hired. Biases like these generally occur because of shortcomings in the system's learning process, and because the relationships the model creates are not immediately clear to the developer. This lack of transparency leads to such issues never being raised and ultimately never being fixed by developers. These concerns aren't limited to recruiting, however; they stretch out to encompass nearly everything, from Tesla's self-driving cars to Google's increasingly complex search algorithm. There have been recent legal movements in both the EU and the US that try to make machine learning models more transparent, but there has been no progress so far.

For artificial intelligence to grow, it must gain the trust of the public. This means that these movements to make artificial intelligence more transparent are critical if it is to be adopted by firms.

Trust requires two parties knowing each other's intentions. Do you know the intentions of the company whose products you use?

Due to this, in April, Facebook made moves towards demystifying the feeds it generates for its users (however, this was likely a publicity stunt, as Facebook soon afterwards blocked tools that give users transparency over the adverts they see).

But ML model transparency is a tough problem and an active area of research. The first part of the problem is the data used to learn the relationships that make up the model: the availability, selection and coverage of that data all introduce the possibility of bias.
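As a hypothetical illustration of the selection problem, consider a toy dataset where both groups behave identically in reality, but the records collected under-represent one group's successes; a naive frequency model then learns a bias that exists only in the data:

```python
# Hypothetical illustration of selection bias: the world is identical
# for both groups, but our *sample* under-represents group B's successes.
# A naive frequency model then "learns" that B candidates rarely succeed.

from collections import Counter

# Unseen ground truth: both groups succeed 50% of the time. Our biased
# data collection kept all of A's records but dropped most of B's
# successful ones.
training_data = (
    [("A", "success")] * 50 + [("A", "failure")] * 50 +
    [("B", "success")] * 10 + [("B", "failure")] * 50
)

counts = Counter(training_data)
for group in ("A", "B"):
    total = counts[(group, "success")] + counts[(group, "failure")]
    rate = counts[(group, "success")] / total
    print(f"group {group}: learned success rate = {rate:.0%}")

# Prints 50% for A but only 17% for B -- a bias created purely by
# which records made it into the dataset, not by reality.
```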

However, transparency of the models themselves is tougher than you might think. In a complex model with thousands of connections, there is no easy way to present that information, which is why many capable computer scientists are researching it. Once a good technique is discovered, there may be a revitalisation of trust in AI.

Parth Mahendra