Author ORCID iD

https://orcid.org/0000-0002-5718-3982

Document Type

Article

Publication Date

2021

Abstract

Artificial intelligence (AI), especially its subset machine learning, has tremendous potential to improve health care. However, health AI also raises new regulatory challenges. In this Article, I argue that there is a need for a new regulatory framework for AI-based medical devices in the U.S. that ensures that such devices are reasonably safe and effective when placed on the market and will remain so throughout their life cycle. I advocate for U.S. Food and Drug Administration (FDA) and congressional actions. I focus on how the FDA could, with additional statutory authority, regulate AI-based medical devices. I show that the FDA incompletely regulates health AI-based products, which may jeopardize patient safety and undermine public trust. For example, the medical device definition is too narrow, and several risky health AI-based products are not subject to FDA regulation. Moreover, I show that most AI-based medical devices available on the U.S. market are 510(k)-cleared. However, the 510(k) pathway raises significant safety and effectiveness concerns. I thus propose a future regulatory framework for premarket review of medical devices, including AI-based ones. Further, I discuss two problems that are related to specific AI-based medical devices, namely opaque ("black-box") algorithms and adaptive algorithms that can continuously learn, and I make suggestions on how to address them. Finally, I encourage the FDA to broaden its view and consider AI-based medical devices as systems, not just devices, and to focus more on the environment in which they are deployed.

Publication Title

Yale Journal of Health Policy, Law, and Ethics
