Prevention of Bias and Discrimination in Clinical Practice Algorithms

The Department of Health and Human Services (DHHS) recently announced its intention to combat the use of biased algorithms in health care decision-making and telehealth services.1 Many clinical algorithms are indeed flawed, either because they incorporate bias by design or because they are trained on biased data sets, and combating discrimination by algorithm is therefore important. Examples of discrimination by algorithm include a machine learning tool for diagnosing Alzheimer disease that misdiagnoses patients who speak with certain accents; an algorithm designed to distinguish malignant from benign moles that was trained mostly on light-skinned patients; and a scheduling tool whose algorithm encouraged double booking of lower-income patients because they are more likely to be a “no-show.”
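The mole-classification example above illustrates how a model trained mostly on one group can fail another. A minimal, purely illustrative subgroup audit is sketched below: it compares a classifier's false-negative rate (missed malignancies) across two patient groups. All function names and data here are hypothetical and synthetic, not drawn from any real tool.

```python
def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives (label 1) the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for _, p in positives if p == 0) / len(positives)

# Synthetic labels (1 = malignant) and model predictions for two groups.
light = {"y_true": [1, 1, 1, 0, 0, 1], "y_pred": [1, 1, 1, 0, 0, 1]}
dark  = {"y_true": [1, 1, 1, 0, 0, 1], "y_pred": [0, 1, 0, 0, 0, 1]}

fnr_light = false_negative_rate(light["y_true"], light["y_pred"])
fnr_dark = false_negative_rate(dark["y_true"], dark["y_pred"])

print(f"FNR, light-skinned group: {fnr_light:.2f}")  # 0.00
print(f"FNR, dark-skinned group:  {fnr_dark:.2f}")   # 0.50
```

A large gap between the two rates is the statistical signature of the training-data skew described above; an audit of this kind would flag the model for retraining on a representative data set before clinical deployment.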
