Europe's latest AI policy
There is no doubt that Europe houses some of the world’s leading AI research (more on this in my articles on AI in France and the UK). However, recent events, such as the military use of AI, biased algorithms, and self-driving car fatalities, have sparked a discussion on how the young industry should be regulated. The EU’s latest attempt at establishing guidelines came 3 months ago, when the “Ethics Guidelines for Trustworthy AI” were published. The guidelines aim to ensure that AI developers consider the ethics of potential AI products.
The guidelines are divided into 7 “requirements”:
Human agency and oversight
Including fundamental rights, human agency and human oversight
Technical robustness and safety
Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility
Privacy and data governance
Including respect for privacy, quality and integrity of data, and access to data
Transparency
Including traceability, explainability and communication
Diversity, non-discrimination and fairness
Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
Societal and environmental wellbeing
Including sustainability and environmental friendliness, social impact, society and democracy
Accountability
Including auditability, minimisation and reporting of negative impact, trade-offs and redress
The guidelines go into detail on each requirement and how it relates to the ethical and sustainable use of AI as the industry evolves. One requirement that may be met with backlash, however, is transparency. Due to the secretive nature of competing firms, some intellectual property related to AI systems may not always be openly available. The guidelines partly address this tension by suggesting that some types of AI systems should be open to audit.
Europe has also taken a different stance from the US and Asian countries on the issue of data and its use. Instead of a blanket copyright exception for mining data protected by copyright, the EU provides an opt-out mechanism for rights holders. This matters for AI because the industry’s growth depends on the availability of data: developers will have to be aware of the intellectual property rights that may protect the data they collect to train AI.
Although the industry is relatively young, the future of the regulations around it remains uncertain. As governments begin to see the power of AI, for both good and ill, we can expect to see new government bodies being set up, new laws being passed, and amendments made to old ones.
In the short term, however, all stakeholders are invited to test the Guidelines’ assessment list for Trustworthy AI and provide feedback on potential improvements. This testing will finish on the 1st of December 2019 and, after considering all feedback, the Commission will decide on the next steps in the first months of 2020.