From principles to ethical standards under the EU Artificial Intelligence Act by Brent Mittelstadt
Short abstract
Under the proposal for an Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards for AI governance, development, and deployment that address abstract normative concepts such as transparency, fairness, and accountability. Applying such concepts inevitably requires answering “hard normative questions”, i.e., endorsing specific interpretations or theoretical approaches, or specifying acceptable or preferred trade-offs between competing interests.
This talk discusses these normative challenges and proposes three possible pathways for future standardisation under the AIA. It situates current standardisation efforts within the broader history of AI ethics and regulation. Using fairness as an example, the talk concludes by reflecting on the real-world harms that can follow from enforcing certain approaches to key normative concepts over others.
Bio
Brent Mittelstadt is an Associate Professor and Director of Research at the Oxford Internet Institute, University of Oxford. He leads the Governance of Emerging Technologies (GET) research programme, which works across ethics, law, and emerging information technologies.
He is a data ethicist and philosopher specialising in AI ethics, algorithmic fairness and explainability, and technology law and policy. He is the author of foundational works addressing the ethics of algorithms, AI, and Big Data; fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; ethical auditing of automated systems; and digital epidemiology and public health ethics.