Algorithms on regulatory lockdown in medicine

Boris Babic, INSEAD
Sara Gerke, Penn State Dickinson Law
Theodoros Evgeniou, INSEAD
I. Glenn Cohen, Harvard Law School


As the use of artificial intelligence and machine learning (AI/ML) in medicine continues to grow, regulators face a fundamental problem: After evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its authorization to market only the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions? For drugs and ordinary medical devices, this problem typically does not arise. But it is this capability to continuously evolve that underlies much of the potential benefit of AI/ML. We address this “update problem” and the treatment of “locked” versus “adaptive” algorithms by building on two proposals suggested earlier this year by one prominent regulatory body, the U.S. Food and Drug Administration (FDA) (1, 2), which may play an influential role in how other countries shape their associated regulatory architecture. Regulators need to focus on whether an AI/ML system is reliable overall as applied to new data and on whether it treats similar patients similarly. We describe several features that are specific to and ubiquitous in AI/ML systems and are closely tied to their reliability. To manage the risks associated with these features, regulators should focus particularly on continuous monitoring and risk assessment, and less on articulating ex ante plans for future algorithm changes.