Measuring the bias of incorrect application of feature selection when using cross-validation in radiomics


Full description

Bibliographic details
Main author: Aydin Demircioğlu
Format: article
Language: EN
Published: SpringerOpen 2021
Online access: https://doaj.org/article/e05fc95b049c4fdc8f28b17d1566ac18
Description
Summary: Background: Many studies in radiomics use feature selection methods to identify the most predictive features. At the same time, they employ cross-validation to estimate the performance of the developed models. However, if feature selection is performed before cross-validation, data leakage can occur and bias the results. To measure the extent of this bias, we collected ten publicly available radiomics datasets and conducted two experiments. First, models were developed by incorrectly applying feature selection prior to cross-validation. Then, the same experiment was repeated with feature selection applied correctly within cross-validation, on each fold. The resulting models were compared in terms of AUC-ROC, AUC-F1, and accuracy. Results: Applying feature selection incorrectly, prior to cross-validation, showed a bias of up to 0.15 in AUC-ROC, 0.29 in AUC-F1, and 0.17 in accuracy. Conclusions: Incorrect application of feature selection and cross-validation can lead to highly biased results for radiomics datasets.
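The leakage described in the abstract is easy to reproduce. The sketch below (not the paper's exact setup; it uses scikit-learn on synthetic high-dimensional data) contrasts the incorrect approach, where features are selected on the full dataset before cross-validation, with the correct approach, where the selector is refit inside each fold via a pipeline:

```python
# Sketch: incorrect (feature selection before CV) vs. correct
# (feature selection inside each CV fold) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Many noisy features, few informative ones -- typical of radiomics.
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=5, random_state=0)

# Incorrect: the selector sees ALL samples, including future test
# folds, before cross-validation -> data leakage, optimistic scores.
X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
auc_leaky = cross_val_score(LogisticRegression(max_iter=1000),
                            X_leaky, y, cv=5, scoring="roc_auc").mean()

# Correct: the selector is part of the pipeline, so it is refit on
# the training portion of each fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     LogisticRegression(max_iter=1000))
auc_correct = cross_val_score(pipe, X, y, cv=5,
                              scoring="roc_auc").mean()

print(f"leaky CV AUC:   {auc_leaky:.2f}")
print(f"correct CV AUC: {auc_correct:.2f}")
```

On data like this, the leaky estimate is typically inflated relative to the correct one, mirroring the bias the study quantifies on real radiomics datasets.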