The Proposed EU Directives for AI Liability Leave Worrying Gaps Likely to Impact Medical AI

Mindy Duffourc
Sara Gerke, Penn State Dickinson Law


Two newly proposed Directives affect liability for artificial intelligence in the EU: a revised Product Liability Directive (PLD) and an AI Liability Directive (AILD). While these proposals establish some uniform liability rules for AI-caused harm, they fall short of the EU's goal of providing clear, uniform rules on liability for injuries caused by AI-driven goods and services. Instead, the Directives leave potential liability gaps for injuries caused by some black-box medical AI systems, which rely on opaque and complex reasoning to produce medical decisions or recommendations. Patients may be unable to successfully sue manufacturers or healthcare providers for some injuries caused by these black-box medical AI systems under either the strict or the fault-based liability laws of EU Member States. Because the proposed Directives fail to address these potential liability gaps, manufacturers and healthcare providers may have difficulty predicting the liability risks associated with creating or using some potentially beneficial black-box medical AI systems.