Second opinion needed: communicating uncertainty in medical machine learning
Abstract There is great excitement that medical artificial intelligence (AI) based on machine learning (ML) can be used to improve decision making at the patient level in a variety of healthcare settings. However, the quantification and communication of uncertainty for individual predictions is often neglected even though uncertainty estimates could lead to more principled decision-making and enable machine learning models to automatically or semi-automatically abstain on samples for which there is high uncertainty. In this article, we provide an overview of different approaches to uncertainty quantification and abstention for machine learning and highlight how these techniques could improve the safety and reliability of current ML systems being used in healthcare settings. Effective quantification and communication of uncertainty could help to engender trust with healthcare workers, while providing safeguards against known failure modes of current machine learning approaches. As machine learning becomes further integrated into healthcare environments, the ability to say “I’m not sure” or “I don’t know” when uncertain is a necessary capability to enable safe clinical deployment.
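The abstention idea described in the abstract can be sketched concretely. A minimal illustration, assuming an ensemble-based approach (the article surveys several uncertainty-quantification techniques; the ensemble, the threshold value, and the function names below are illustrative assumptions, not the authors' method): each ensemble member outputs class probabilities, the predictive entropy of the averaged distribution serves as the uncertainty score, and the model abstains when that score exceeds a threshold.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean class distribution across ensemble members.

    probs: array of shape (n_members, n_samples, n_classes), each member's
    predicted class probabilities for every sample.
    """
    mean_p = probs.mean(axis=0)                          # (n_samples, n_classes)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)

def predict_or_abstain(probs, threshold):
    """Return the predicted class per sample, or -1 (abstain) when the
    ensemble's predictive entropy exceeds the threshold."""
    entropy = predictive_entropy(probs)
    preds = probs.mean(axis=0).argmax(axis=1)
    return np.where(entropy <= threshold, preds, -1)

# Three hypothetical ensemble members scoring two patients on a binary task.
# Members agree on patient 1 but disagree sharply on patient 2.
probs = np.array([
    [[0.95, 0.05], [0.30, 0.70]],   # member 1
    [[0.90, 0.10], [0.80, 0.20]],   # member 2
    [[0.97, 0.03], [0.45, 0.55]],   # member 3
])
print(predict_or_abstain(probs, threshold=0.4))
```

With these numbers the ensemble confidently predicts class 0 for the first patient, while the disagreement on the second pushes predictive entropy near its binary maximum (ln 2 ≈ 0.69), triggering abstention, i.e., the model's way of saying "I'm not sure." In a semi-automatic workflow, abstained cases would be routed to a clinician for review.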
Main authors: | Benjamin Kompa, Jasper Snoek, Andrew L. Beam |
---|---|
Format: | article |
Language: | EN |
Published: | Nature Portfolio, 2021 |
Subjects: | Computer applications to medicine. Medical informatics (R858-859.7) |
Online access: | https://doaj.org/article/d4f85d7dd17c413e9a36056423ac1ba9 |
id: oai:doaj.org-article:d4f85d7dd17c413e9a36056423ac1ba9
record_format: dspace
DOI: https://doi.org/10.1038/s41746-020-00367-3
ISSN: 2398-6352
Source: npj Digital Medicine, Vol 4, Iss 1, Pp 1-6 (2021)
Date: 2021-01-01
institution: DOAJ
collection: DOAJ
language: EN
topic: Computer applications to medicine. Medical informatics (R858-859.7)
format: article
author: Benjamin Kompa, Jasper Snoek, Andrew L. Beam
publisher: Nature Portfolio
publishDate: 2021
url: https://doaj.org/article/d4f85d7dd17c413e9a36056423ac1ba9