DeepMind Co-Founder Under Fire for Mishandling Data

Mustafa Suleyman, co-founder of DeepMind, the renowned artificial intelligence research firm and Alphabet subsidiary, has been placed on leave following controversy over the projects he was leading.

Suleyman heads DeepMind's applied division, which aims to turn the firm's research into real-world solutions, mainly in the fields of health and energy. As a co-founder, Suleyman also represents the ethical and moral foundation of the firm. This is an important role, since AI can be a dangerous tool if its moral ramifications are not considered before it is used.

Mustafa Suleyman, co-founder of DeepMind

The company was founded in 2010 by Suleyman and Demis Hassabis, who serves as its CEO. The company's very existence seems to have accelerated the race for better AI and for applying artificial intelligence to an ever wider range of problems.

Over time, DeepMind has focused heavily on health-care research; a recent example is its announcement of a product that can scan patients' retinas for potential issues. Suleyman leads the Health research wing and has grown it rapidly into one of DeepMind's main divisions.

However, DeepMind, and Suleyman in particular, have come under criticism for mishandling data. According to the UK's privacy watchdog, London's Royal Free Hospital illegally shared patient records with DeepMind. The records were used to build an AI solution for an app that lets doctors scan for kidney injuries. Although Suleyman apologised for the mishandling, Google has announced that it is creating a new unit called Google Health, the DeepMind Health team is being wound down, and Suleyman has been removed from running it.

This is an important and well-timed reminder from Google that at the heart of every AI solution lies an ethical and moral balance that must not be broken at any point of development, whether in the application of a solution or in the data a company gathers to train its model.

Parth Mahendra