On April 8, 2019, the European Commission (Commission) announced the launch of a pilot project to test draft ethical rules for developing and applying artificial intelligence (AI) technologies.
To ensure that the AI ethical rules can be successfully implemented in practice, the Commission is taking a three-step approach: 1) setting out the key requirements for trustworthy AI; 2) launching a large-scale pilot phase for feedback from stakeholders; and 3) developing international consensus for human-centric AI.
The draft ethical rules set out the following seven key requirements for achieving trustworthy AI:
- Human agency and oversight: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.”
- Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”
- Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
- Transparency: “The traceability of AI systems should be ensured.”
- Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
- Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”
- Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”
Following the guidelines' publication, the Commission will test how they operate in practice through a large-scale pilot program involving a wide range of stakeholders, including international organizations and companies outside of the EU.
Summary By: Jae Morris