Efficacy of a comprehensive binary classification model using a deep convolutional neural network for wireless capsule endoscopy

Abstract: The manual reading of capsule endoscopy (CE) videos for small bowel disease diagnosis is time-intensive. Algorithms introduced to automate this process remain premature for real clinical applications, and multi-diagnosis using these methods has not been sufficiently validated. We therefore developed a practical binary classification model that selectively identifies clinically meaningful images, including inflamed mucosa, atypical vascularity, and bleeding, and tested it on unseen cases. Four hundred thousand CE images were randomly selected from 84 cases; 240,000 of these were used to train the algorithm to categorize images binarily, and the remaining images were used for validation and internal testing. The algorithm was externally tested with 256,591 unseen images. The diagnostic accuracy of the trained model on the validation set was 98.067%. In contrast, its accuracy on a dataset provided by an independent hospital that did not participate in training was 85.470%, with an area under the curve (AUC) of 0.922. Our model showed excellent internal test results; misreadings increased slightly when the model was tested on unseen external cases, as the images classified as 'insignificant' contained ambiguous substances. Once this limitation is resolved, the proposed CNN-based binary classification will be a promising candidate for developing clinically ready computer-aided reading methods.
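The abstract reports two standard binary-classification metrics: diagnostic accuracy (98.067% on validation, 85.470% on the external set) and AUC (0.922). As context for readers of the record, a minimal pure-Python sketch of how these two metrics are computed for a binary classifier; the labels and scores below are hypothetical illustrations, not the study's data or code.

```python
# Illustrative sketch (not the study's data or code): computing diagnostic
# accuracy and AUC for a binary classifier of CE images.
# Labels: 1 = clinically significant image, 0 = insignificant.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # hypothetical ground truth
y_prob = [0.9, 0.2, 0.8, 0.4, 0.6, 0.1, 0.7, 0.3]  # hypothetical model scores
y_pred = [int(p >= 0.5) for p in y_prob]            # binarize at a 0.5 threshold

def accuracy(truth, pred):
    """Fraction of images classified correctly."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def auc(truth, score):
    """AUC = probability a random positive outscores a random negative."""
    pos = [s for t, s in zip(truth, score) if t == 1]
    neg = [s for t, s in zip(truth, score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(f"accuracy={accuracy(y_true, y_pred):.3f}, AUC={auc(y_true, y_prob):.4f}")
# → accuracy=0.750, AUC=0.9375
```

The rank-based `auc` above is equivalent to integrating the ROC curve; it makes explicit why AUC is threshold-independent while accuracy depends on the chosen cutoff.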

Bibliographic Details
Main Authors: Sang Hoon Kim, Youngbae Hwang, Dong Jun Oh, Ji Hyung Nam, Ki Bae Kim, Junseok Park, Hyun Joo Song, Yun Jeong Lim
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/c7ec7dda26254e068eb086a14b675a25
DOI: 10.1038/s41598-021-96748-z
ISSN: 2045-2322
Publication Date: 2021-09-01
Journal: Scientific Reports, Vol 11, Iss 1, Pp 1-11 (2021)