Google Announces New AI Ethics Panel

Google has launched a global advisory council to offer guidance on ethical issues relating to artificial intelligence, automation and related technologies.

The panel is made up of eight people, including the former US deputy secretary of state and a University of Bath associate professor.

The group will "consider some of Google's most complex challenges", the company said.

The panel was revealed at MIT Technology Review's EmTech Digital, a conference organised by the Massachusetts Institute of Technology.

Google has recently come under harsh criticism - internally and externally - over its plans on how to use emerging technologies.

For instance, in June 2018 the company said it would not renew a contract with the Pentagon to develop AI technology for controlling drones. The project, known as Project Maven, was unpopular among Google's staff and prompted some resignations.

In response to the criticism, Google published a set of AI "principles" it said it would abide by. They include pledges to be "socially beneficial" and "accountable to people".

The Advanced Technology External Advisory Council (ATEAC) will meet for the first time in April. Google's head of global affairs, Kent Walker, said in a blog post that there would be three further meetings in 2019.

Google has published a full list of the panel’s members. They include leading mathematician Bubacarr Bah, former US deputy secretary of state William Joseph Burns, and Joanna Bryson, who teaches computer sciences at the University of Bath, UK.

The council will consider recommendations on how to use technologies such as facial recognition. Last year, Google's then-head of cloud computing, Diane Greene, described facial recognition tech as having "inherent bias" due to a lack of diverse data.

In a widely cited paper entitled Robots Should Be Slaves, Ms Bryson argued against the trend of treating robots like people.

In humanising them, we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility.
— Joanna Bryson

In 2018 she argued that complexity should not be used as an excuse to not properly inform the public of how AI systems operate.

When a system using AI causes damage, we need to know we can hold the human beings behind that system to account.
— Joanna Bryson