Ethical Considerations Related to Using Machine Learning-Based Prediction of Mortality in the Pediatric Intensive Care Unit

Machine learning shows promise for developing prediction models that could improve care in the pediatric intensive care unit (PICU). Advocates claim these systems enhance prognostic accuracy and can adapt to changing clinical practices as new large-scale child health data are incorporated. Accurate machine learning-based predictive models could benefit decision-making and care delivery and, in turn, outcomes for patients and families. Despite their potential, some of these models may replicate the biases of their training datasets or may be biased in other ways (eg, label bias or contextual bias), and many are built without the capacity to explain how they reach their outputs (so-called black boxes). Moreover, implicit trust or mistrust in technology may lead patients, families, and clinicians to view software-generated predictions as more objective and valid than they really are. This essay provides an overview of the ethical concerns posed by the advent of machine learning-based models for mortality prediction in the PICU. We discuss the benefits and risks of this emerging technology, including considerations related to technical questions, care delivery, family experience and decision-making, and clinician-family relationships, as well as legal and organizational issues.