Bridging Justice and Meaningful Human Control in Medical AI Symposium, IACAP/AISB-25
Symposium Date & Location
Date: July 1-3, 2025
Location: University of Twente, Enschede, NL
Invited Speakers
- Lily Frank (TU Eindhoven)
- Patrik Hummel (TU Eindhoven)
Important Dates
Abstract Submission Deadline: 28.02.2025
Notification of Acceptance: 28.03.2025
Camera-ready Copy for Proceedings: 28.04.2025
Submission link: https://easychair.org/my/conference?conf=jmhc2025
Overview
The deployment of AI systems in healthcare (in diagnosis, treatment recommendations, risk assessment, surgery, and other domains) raises questions about their responsible use. Due to their self-learning properties and epistemic limitations (e.g., opacity), AI systems may create so-called responsibility gaps (Matthias, 2004). Meaningful human control, developed as a way to counter the emergence of these gaps (Mecacci et al., 2024), requires that AI systems be responsive to the reasons of relevant agents and that agents remain ultimately responsible for the system's behavior. But what if unjust power dynamics prevent potentially relevant agents from making their reasons accessible to the AI system? How can vulnerable social groups, but also medical professionals, maintain their epistemic and moral agency under conditions of (epistemic) injustice (see, e.g., Pozzi, 2023)? In other words, to what extent does meaningful human control require (epistemic) justice? This also connects to the debate on patients' empowerment, that is, the question of the extent to which AI can give patients more or less power over their health and well-being.

This symposium aims to bring together scholars working on justice, responsibility, meaningful human control, and empowerment in AI to explore fruitful points of intersection between these domains, with a specific focus on medical technologies. The topic calls for contributions from scholars working on the ethics of AI, the epistemology of AI, law, medical ethics, the design-engineering perspective, and their crossovers.
Questions of Interest
- Which epistemically unjust (structural) mechanisms can prevent relevant agents from offering their reasons to an AI-mediated healthcare system?
- Are participatory approaches to technological development enough to counter these injustices and the lack of control ensuing from them? Which other political concepts or measures are needed to make sense of and address issues of power, justice, or control in medical AI?
- To what extent is it desirable to give patients more control and responsibility for their health? And to what extent would this responsibility empower them as opposed to unjustly burden (some of) them? How are empowerment and (epistemic) justice related in AI-mediated healthcare?
- What can we learn from the analysis of specific case studies or applications? To what extent can different kinds of AI applications improve or worsen human control, justice, or patients' empowerment in different healthcare domains?
- Are there any novel challenges or opportunities raised by GenAI/LLMs for human control, responsibility, justice, and empowerment in healthcare?
- Which lessons about control, justice, and empowerment in healthcare can be learnt from the introduction of AI in other high-stakes domains?
Submission Guidelines and Format
Submissions must be abstracts and should be sent via https://easychair.org/my/conference?conf=jmhc2025. We request that abstracts range between 500 and 1000 words. Each abstract will receive at least two reviews. Selected abstracts will be published in the general proceedings of the IACAP/AISB Conference, with the proviso that at least ONE author attends the symposium in person to present the paper and participate in general symposium activities.
Symposium Organisers
Giorgia Pozzi, TU Delft, The Netherlands, g.pozzi@tudelft.nl
Filippo Santoni de Sio, TU Eindhoven, The Netherlands, f.santoni.de.sio@tue.nl