What are the Risks of AI?

Nowadays, many companies use artificial intelligence to make their businesses more productive and profitable, but AI also carries risks and limitations that we should keep in mind.

Hackers. One risk of AI is its ubiquity: because it is everywhere, many people can use it to their advantage. Hackers can exploit AI through machine learning poisoning, in which they study how a machine learning system works, spot a vulnerability and confuse the underlying model by corrupting its training data, whether through small malicious inputs chosen by the hacker or by intercepting data flowing through a service that trains a convolutional neural network (CNN), such as a chatbot.

Chatbots are bots that learn, analyse and mimic people's behaviour through an algorithm, building an understanding of a person by receiving answers and analysing them. Chatbots can also be used to sway people into revealing personal account or financial information. In 2016, a chatbot presenting itself as a friend tricked 10,000 Facebook users into installing malware; once a machine was compromised, the threat actor hijacked the victim's Facebook account. Most people do not realise that AI-driven conversational assistants such as Google Assistant and Amazon's Alexa may be listening to your conversations, as they are always in "listen mode". Combined with Internet of Things (IoT) technology, your conversations might not be as private as you think.
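To make the poisoning idea concrete, here is a minimal, self-contained sketch of a label-flipping attack on training data. The one-dimensional threshold classifier and the synthetic data are invented purely for illustration, and are not taken from any real attack described above; the point is only that an attacker who can corrupt labels near the decision boundary can drag the learned model away from the correct one.

```python
# Illustrative sketch of training-data poisoning via label flipping.
# The classifier, data and threshold rule are hypothetical examples.
import random

def train_threshold(samples):
    """Fit a 1-D threshold classifier (predict 1 if x >= t) by
    picking the threshold that maximises training accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(x for x, _ in samples):
        acc = sum((x >= t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(threshold, samples):
    return sum((x >= threshold) == bool(y) for x, y in samples) / len(samples)

random.seed(0)
# Clean data: class 0 clusters near 0.0, class 1 near 1.0.
clean = [(random.gauss(0.0, 0.2), 0) for _ in range(100)] + \
        [(random.gauss(1.0, 0.2), 1) for _ in range(100)]
test = [(random.gauss(0.0, 0.2), 0) for _ in range(100)] + \
       [(random.gauss(1.0, 0.2), 1) for _ in range(100)]

# The attacker flips labels of class-0 points near the boundary,
# teaching the model that everything above 0.2 belongs to class 1.
poisoned = [(x, 1) if (y == 0 and x > 0.2) else (x, y) for x, y in clean]

clean_acc = accuracy(train_threshold(clean), test)
bad_acc = accuracy(train_threshold(poisoned), test)
print(f"accuracy trained on clean data:    {clean_acc:.2f}")
print(f"accuracy trained on poisoned data: {bad_acc:.2f}")
```

Running the sketch shows the poisoned model misclassifying a noticeable slice of legitimate class-0 inputs, which is the same mechanism, at toy scale, that makes poisoning attacks on production models dangerous.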

Hackers are just as sophisticated as the communities that develop the capability to defend themselves against hackers. They are using the same techniques, such as intelligent phishing, analysing the behaviour of potential targets to determine what type of attacks to use, and 'smart malware' that knows when it is being watched so it can hide.
— Mark Testoni, President and CEO of enterprise security company SAP NS2

Increased use of artificial intelligence tools to prevent crime can also let external risks cascade in a variety of ways, for example through false alerts that incorrectly identify individuals as criminals. How people wield the power and knowledge built into AI systems determines whether those systems help or do harm. In May 2016, a report claimed that a computer program used by US courts for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was much more prone to mistakenly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white defendants (45% versus 24%). Northpointe, the private company that supplies the software, disputed the report's conclusions but did not disclose any of the inner workings of the program. The worst-case scenario is one in which people depend on such AI tools to catch criminals on their behalf without questioning the results.
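The disparity reported for COMPAS is a difference in false positive rates between groups: the share of people who did not reoffend but were flagged as high risk. A short sketch shows how that metric is computed; the records below are made up for illustration and are not the COMPAS data.

```python
# Hypothetical illustration of measuring a false-positive-rate
# disparity between two groups. The records are invented data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("A", True, True),  ("A", False, True), ("A", True, False),
    ("B", False, False), ("B", False, False), ("B", True, False),
    ("B", True, True),  ("B", False, True), ("B", False, False),
]

def false_positive_rate(rows):
    """FPR = (flagged high risk but did not reoffend) / (all non-reoffenders)."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

In this toy data, group A's false positive rate is three times group B's even though both groups contain reoffenders and non-reoffenders, which is the kind of gap the report highlighted.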

If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate.
— Kristian Lum, lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG)

AI has limitations, but clear communication with regulators and customers will allow companies to identify and efficiently solve the problems that occur. Managed well, AI will eventually have a hugely positive impact on reducing crime and on many other fields.

Ethics, Research
Amna Zaman