General Principles vs. Procedural Values: Two Strategies of Socio-ethical AI Regulation by Rune Nyrup
Short abstract
Within many recent regulatory initiatives for AI, principles take centre stage. Countless policy reports and strategy documents have been published that endorse various lists of ethical AI principles, and much of the research literature has focused on how such principles can be “put into practice”.
This strategy borrows heavily from clinical medical ethics, where the “Principlist” approach has long been dominant. In this talk, I highlight an alternative “Proceduralist” approach, inspired instead by public health ethics. This approach focuses on designing deliberative procedures through which solutions to ethically contentious issues can be reached, solutions that relevant stakeholders can accept as legitimate compromises despite their disagreements.
I will outline how this approach can be used as a strategy of socio-ethical regulation for AI and highlight some of its benefits.
Bio
Rune Nyrup is Associate Professor at the Centre for Science Studies, Aarhus University. His research seeks to explicate the philosophical underpinnings of contemporary debates surrounding the ethics and epistemology of science and technology, focusing in particular on artificial intelligence. Before joining Aarhus University, he worked for several years as a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge.