Algorithms on Regulatory Lockdown in Medicine

Author ORCID iD

https://orcid.org/0000-0002-5718-3982

Document Type

Article

Publication Date

December 6, 2019

Abstract

As use of artificial intelligence and machine learning (AI/ML) in medicine continues to grow, regulators face a fundamental problem: after evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its marketing authorization to the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions? For drugs and ordinary medical devices, this problem typically does not arise, yet it is precisely this capability to evolve continuously that underlies much of the potential benefit of AI/ML. We address this “update problem” and the treatment of “locked” versus “adaptive” algorithms by building on two proposals put forward in 2019 by one prominent regulatory body, the U.S. Food and Drug Administration (FDA) (1, 2), which may play an influential role in how other countries shape their regulatory architecture. Regulators should emphasize whether an AI/ML system remains reliable overall when applied to new data and whether it treats similar patients similarly. We describe several features that are specific to and ubiquitous in AI/ML systems and that are closely tied to their reliability. To manage the risks associated with these features, regulators should focus particularly on continuous monitoring and risk assessment, and less on articulating ex ante plans for future algorithm changes.

Publication Title

Science
