Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities

Abstract: The machine learning community has become alert to the ways that predictive algorithms can inadvertently introduce unfairness in decision-making. Herein, we discuss how concepts of algorithmic fairness might apply in healthcare, where predictive algorithms are being increasingly used to support decision-making. Central to our discussion is the distinction between algorithmic fairness and algorithmic bias. Fairness concerns apply specifically when algorithms are used to support polar decisions (i.e., where one pole of prediction leads to decisions that are generally more desired than the other), such as when predictions are used to allocate scarce health care resources to a group of patients that could all benefit. We review different fairness criteria and demonstrate their mutual incompatibility. Even when models are used to balance benefits and harms to make optimal decisions for individuals (i.e., for non-polar decisions), where fairness concerns are not germane, model, data, or sampling issues can lead to biased predictions that support decisions that are differentially harmful or beneficial across groups. We review these potential sources of bias and discuss ways to diagnose and remedy algorithmic bias. We note that remedies for algorithmic fairness may be more problematic, since we lack agreed-upon definitions of fairness. Finally, we propose a provisional framework for the evaluation of clinical prediction models, offered for further elaboration and refinement. Given the proliferation of prediction models used to guide clinical decisions, developing consensus for how these concerns can be addressed should be prioritized.
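The abstract's central technical claim, that common fairness criteria are mutually incompatible, can be made concrete with a short numeric sketch. This example is ours, not the authors'; the group prevalences, PPV, and FNR values below are hypothetical. It uses the standard identity relating false positive rate, outcome prevalence, positive predictive value, and false negative rate to show that when two groups differ in outcome prevalence, equalizing PPV and FNR across groups forces their FPRs apart:

```python
# Illustrative sketch (not from the paper): when outcome prevalence differs
# between groups, a classifier cannot simultaneously equalize positive
# predictive value (PPV), false negative rate (FNR), and false positive
# rate (FPR) across those groups.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """FPR forced by the identity
    FPR = prevalence/(1 - prevalence) * (1 - PPV)/PPV * (1 - FNR),
    which follows from the definition PPV = p*TPR / (p*TPR + (1-p)*FPR)."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical groups with different base rates of the outcome.
group_a_prev, group_b_prev = 0.10, 0.30

# Suppose we require PPV and FNR to be equal across groups...
ppv, fnr = 0.70, 0.20

fpr_a = implied_fpr(group_a_prev, ppv, fnr)
fpr_b = implied_fpr(group_b_prev, ppv, fnr)

print(f"Group A FPR: {fpr_a:.3f}")  # ~0.038
print(f"Group B FPR: {fpr_b:.3f}")  # ~0.147
# ...then the false positive rates necessarily diverge: the criteria are
# mutually incompatible whenever prevalences differ between groups.
```

Because patient groups in practice almost always differ in outcome prevalence, such criteria cannot all hold at once, which motivates the authors' point that remedies for algorithmic fairness are difficult absent an agreed-upon definition of fairness.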

Bibliographic Details
Main Authors: Jessica K. Paulus, David M. Kent
Format: Article
Language: English
Published: Nature Portfolio, 2020
Published in: npj Digital Medicine, Vol 3, Iss 1, Pp 1-8 (2020)
DOI: 10.1038/s41746-020-0304-9
ISSN: 2398-6352
Subjects: Computer applications to medicine. Medical informatics (R858-859.7)
Source: Directory of Open Access Journals (DOAJ)
Online Access: https://doaj.org/article/e7bd8bf7145c42bc85f9bf7ab9b5b39e
https://doi.org/10.1038/s41746-020-0304-9