Impact of train/test sample regimen on performance estimate stability of machine learning in cardiovascular imaging

Abstract: As machine learning research in the field of cardiovascular imaging continues to grow, obtaining reliable model performance estimates is critical to develop reliable baselines and compare different algorithms. While the machine learning community has generally accepted methods such as k-fold stratified cross-validation (CV) to be more rigorous than single split validation, the standard research practice in medical fields is the use of single split validation techniques. This is especially concerning given the relatively small sample sizes of datasets used for cardiovascular imaging. We aim to examine how train-test split variation impacts the stability of machine learning (ML) model performance estimates in several validation techniques on two real-world cardiovascular imaging datasets: stratified split-sample validation (70/30 and 50/50 train-test splits), tenfold stratified CV, 10 × repeated tenfold stratified CV, bootstrapping (500 × repeated), and leave-one-out (LOO) validation. We demonstrate that split validation methods lead to the highest range in AUC and statistically significant differences in ROC curves, unlike the other aforementioned approaches. When building predictive models on relatively small datasets, as is often the case in medical imaging, split-sample validation techniques can produce instability in performance estimates, with variations in range over 0.15 in the AUC values; thus any of the alternate validation methods are recommended.
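The instability the abstract describes can be illustrated in a few lines. The sketch below is a hypothetical example on synthetic data (not the authors' code or datasets, and all scikit-learn parameter choices here are assumptions): it repeats a 70/30 stratified split with different random seeds and compares the spread of the resulting AUC estimates against a single tenfold stratified CV.

```python
# Hypothetical sketch: AUC spread from repeated 70/30 split-sample validation
# vs. tenfold stratified CV on a small synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

# Small sample size, loosely mimicking typical medical imaging datasets.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# AUC from 20 different random 70/30 stratified splits.
split_aucs = []
for seed in range(20):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    split_aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Tenfold stratified CV: every sample is used for testing exactly once.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
cv_aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=cv, scoring="roc_auc")

print(f"split-sample AUC range across seeds: {max(split_aucs) - min(split_aucs):.3f}")
print(f"tenfold stratified CV mean AUC:      {cv_aucs.mean():.3f}")
```

On small datasets the single-split AUC depends visibly on which patients happen to land in the test set, which is exactly why the paper recommends CV, repeated CV, bootstrapping, or LOO over a single split.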


Bibliographic Details
Main Authors: Vikash Singh, Michael Pencina, Andrew J. Einstein, Joanna X. Liang, Daniel S. Berman, Piotr Slomka
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/3fbe52b623e04aa399b5b0f5b1e1ff34
Published in: Scientific Reports, Vol 11, Iss 1, pp 1-8 (2021)
DOI: https://doi.org/10.1038/s41598-021-93651-5
ISSN: 2045-2322