It's aiming to gather feedback and encourage international consensus-building around what it dubs "human-centric AI" -- targeting, among other talking shops, the forthcoming G7 and G20 meetings as venues for advancing discussion on the topic.
The Commission's High-Level Group on AI -- a body comprising 52 experts from across industry, academia and civil society, announced last summer -- published its draft ethics guidelines for trustworthy AI in December.
A revised version of the document was submitted to the Commission in March. It boils the expert consultation down to a set of seven "key requirements" for trustworthy AI -- over and above the baseline need for machine learning technologies to respect existing laws and regulations:
- Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."
- Robustness and safety: "Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems."
- Privacy and data governance: "Citizens should have full control over their data, while data concerning them will not be used to harm or discriminate against them."
- Transparency: "The traceability of AI systems should be ensured."
- Diversity, non-discrimination and fairness: "AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility."
- Societal and environmental well-being: "AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility."
- Accountability: "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes."