Must Health AI be Explainable if it is Reliable?
Sune Hannibal Holm, Associate Professor of Bioethics and Governance, University of Copenhagen
Impressively accurate machine learning algorithms are being developed for clinical decision support. A widespread concern is that the outputs of these algorithms, e.g. diagnostic classifications, treatment suggestions, and risk scores, cannot be explained to the relevant users, e.g. doctors and patients.
In this talk I discuss whether explanations should be required when an algorithm has been shown to be reliable. I relate the question to norms of shared decision-making in medical practice and to the use of drugs that are approved despite a lack of understanding of the mechanisms by which they work.
Career History
Sune Holm is an associate professor in philosophy and holds a Ph.D. from the University of St Andrews. His current research focuses on questions concerning the ethics of AI, philosophy of biology, and bioethics.
He participates in several national and international research projects on the use of AI in healthcare. He is co-director of the Trustworthy AI Lab hosted by the Department of Data Science, and a member of the scientific committee of the European Workshop on Algorithmic Fairness (EWAF). From 2016 to 2020 Sune was PI of the DFF2 project Living Machines?, which examined philosophical issues relating to the machine-organism analogy.
This is from the conference "Artificial Intelligence in Healthcare".
Join the discussion about the future of healthcare at our conference, where AI's potential to transform patient care through fast, accurate data processing and task automation takes center stage.