Generalizability of deep learning models for dental image analysis
Abstract: We assessed the generalizability of deep learning models and how to improve it. Our exemplary use case was the detection of apical lesions on panoramic radiographs. We employed two datasets of panoramic radiographs from two centers, one in Germany (Charité, Berlin, n = 650) and one in India (KGMU, Lucknow, n = 650). First, U-Net-type models were trained on images from Charité (n = 500) and assessed on test sets from Charité and KGMU (each n = 150). Second, the relevance of image characteristics was explored using pixel-value transformations that aligned the image characteristics of the two datasets. Third, the effect of cross-center training on generalizability was evaluated by stepwise replacing Charité with KGMU images in the training set. Last, we assessed the impact of the dental status (presence of root-canal fillings or restorations). Models trained only on Charité images showed a (mean ± SD) F1-score of 54.1 ± 0.8% on Charité and 32.7 ± 0.8% on KGMU data (p < 0.001, t-test). Aligning the image data characteristics between the centers did not improve generalizability. However, gradually increasing the fraction of KGMU images in the training set (from 0 to 100%) improved the F1-score on KGMU images (to 46.1 ± 0.9%) at the cost of a moderate decrease on Charité images (to 50.9 ± 0.9%, p < 0.01). Model performance was good on KGMU images showing root-canal fillings and/or restorations, but much lower on KGMU images without them. Our deep learning models were not generalizable across centers; cross-center training improved generalizability. Notably, the dental status, but not the image characteristics, was relevant. Understanding the reasons behind such limits in generalizability helps to mitigate them.
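The abstract condenses three technical steps: training U-Net-type segmentation models, aligning pixel-value characteristics between the two centers, and stepwise replacing Charité with KGMU training images. The Python sketch below illustrates the latter two under stated assumptions: the paper does not publish its code, so histogram matching (via scikit-image) merely stands in for the unspecified pixel-value transformation, and the helpers `align_pixel_distribution` and `build_training_set` are hypothetical names for illustration, not the authors' implementation.

```python
"""Minimal sketch of two experiments described in the abstract.

Assumptions (not from the paper): histogram matching stands in for the
unspecified pixel-value transformation; images are plain 2-D NumPy arrays;
the helper names are illustrative only. Set sizes follow the abstract
(500 training images per center).
"""
import random

import numpy as np
from skimage.exposure import match_histograms


def align_pixel_distribution(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the gray-value histogram of `image` onto that of `reference`,
    one plausible way to align image characteristics between centers."""
    return match_histograms(image, reference)


def build_training_set(charite, kgmu, kgmu_fraction, n_train=500, seed=42):
    """Cross-center mixing: replace a fraction of Charité training images
    with KGMU images while keeping the training-set size constant."""
    rng = random.Random(seed)
    n_kgmu = round(kgmu_fraction * n_train)
    mixed = rng.sample(kgmu, n_kgmu) + rng.sample(charite, n_train - n_kgmu)
    rng.shuffle(mixed)
    return mixed


def f1_score(tp, fp, fn):
    """F1-score (harmonic mean of precision and recall), the metric reported."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0


if __name__ == "__main__":
    # Toy arrays standing in for the 500 Charité and 500 KGMU training images.
    charite_imgs = [np.random.rand(64, 64) for _ in range(500)]
    kgmu_imgs = [np.random.rand(64, 64) for _ in range(500)]

    # Align one KGMU image to the pixel-value statistics of a Charité image.
    aligned = align_pixel_distribution(kgmu_imgs[0], charite_imgs[0])

    # Sweep the KGMU fraction from 0% to 100%, as in the third experiment.
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        train = build_training_set(charite_imgs, kgmu_imgs, frac)
        print(f"KGMU fraction {frac:.0%}: {len(train)} training images")
```

Keeping the training-set size fixed while varying the KGMU fraction isolates the effect of data origin from that of data volume, which is the design the abstract describes for the cross-center training experiment.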
Saved in: DOAJ
Main Authors: Joachim Krois, Anselmo Garcia Cantu, Akhilanand Chaurasia, Ranjitkumar Patil, Prabhat Kumar Chaudhari, Robert Gaudin, Sascha Gehrung, Falk Schwendicke
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine; Science
Online Access: https://doaj.org/article/ef0ba152d5814f2e9b3dca9c1db648df
Record ID: oai:doaj.org-article:ef0ba152d5814f2e9b3dca9c1db648df
DOI: https://doi.org/10.1038/s41598-021-85454-5
ISSN: 2045-2322
Published in: Scientific Reports, Vol 11, Iss 1, Pp 1-7 (2021)