Identifying unreliable predictions in clinical risk models
Abstract: The ability to identify patients who are likely to have an adverse outcome is an essential component of good clinical care. Therefore, predictive risk stratification models play an important role in clinical decision making. Determining whether a given predictive model is suitable for clinical use usually involves evaluating the model’s performance on large patient datasets using standard statistical measures of success (e.g., accuracy, discriminatory ability). However, as these metrics correspond to averages over patients who have a range of different characteristics, it is difficult to discern whether an individual prediction on a given patient should be trusted using these measures alone. In this paper, we introduce a new method for identifying patient subgroups where a predictive model is expected to be poor, thereby highlighting when a given prediction is misleading and should not be trusted. The resulting “unreliability score” can be computed for any clinical risk model and is suitable in the setting of large class imbalance, a situation often encountered in healthcare settings. Using data from more than 40,000 patients in the Global Registry of Acute Coronary Events (GRACE), we demonstrate that patients with high unreliability scores form a subgroup in which the predictive model has both decreased accuracy and decreased discriminatory ability.
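The abstract describes an evaluation pattern rather than the score itself: stratify patients by a per-prediction unreliability value and check whether accuracy and discriminatory ability (AUC) drop in the high-unreliability subgroup. The sketch below illustrates only that pattern under stated assumptions; the placeholder score (distance of the predicted risk from a confident 0 or 1) and the synthetic, class-imbalanced dataset are hypothetical stand-ins, not the method or the GRACE data from the paper.

```python
# Illustrative sketch only: the paper's actual unreliability score is not defined
# in this record, so a hypothetical placeholder score is used to show how subgroup
# accuracy and AUC could be compared under class imbalance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, class-imbalanced data standing in for a patient registry (not GRACE).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Any clinical risk model could sit here; logistic regression is just an example.
risk_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = risk_model.predict_proba(X_te)[:, 1]

# Placeholder "unreliability" proxy (NOT the paper's score): predictions near 0.5
# are treated as ambiguous, predictions near 0 or 1 as confident.
unreliability = 1.0 - 2.0 * np.abs(risk - 0.5)

# Compare performance inside the high- vs. low-unreliability subgroups.
cutoff = np.quantile(unreliability, 0.8)
for label, mask in [("high unreliability", unreliability >= cutoff),
                    ("low unreliability", unreliability < cutoff)]:
    acc = accuracy_score(y_te[mask], risk[mask] >= 0.5)
    # AUC is only defined when both outcome classes appear in the subgroup.
    auc = (roc_auc_score(y_te[mask], risk[mask])
           if len(np.unique(y_te[mask])) > 1 else float("nan"))
    print(f"{label}: n={mask.sum()}, accuracy={acc:.3f}, AUC={auc:.3f}")
```

Whatever score is actually used, the paper's claim is that the high-score subgroup shows lower accuracy and lower AUC than the rest; the placeholder above only marks where those subgroup metrics would be read off.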
Main Authors: Paul D. Myers, Kenney Ng, Kristen Severson, Uri Kartoun, Wangzhi Dai, Wei Huang, Frederick A. Anderson, Collin M. Stultz
Format: article
Language: EN
Published: Nature Portfolio, 2020
Published in: npj Digital Medicine, Vol 3, Iss 1, Pp 1-8 (2020)
DOI: https://doi.org/10.1038/s41746-019-0209-7
ISSN: 2398-6352
Subjects: Computer applications to medicine. Medical informatics (R858-859.7)
Online Access: https://doaj.org/article/2fc1775645bb49caa1fa9a615f60562f
Record ID: oai:doaj.org-article:2fc1775645bb49caa1fa9a615f60562f
Institution: DOAJ
Collection: DOAJ