Automated recognition of objects and types of forceps in surgical images using deep learning

Abstract Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
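The per-class recall and precision figures quoted in the abstract follow the usual object-detection definitions (recall = TP / (TP + FN), precision = TP / (TP + FP)). The following is a minimal Python sketch of those definitions for orientation only; the class name and counts in the example are hypothetical, and the code is not the authors' pipeline or any part of IBM Visual Insights.

# Minimal sketch (not the authors' code) of the standard detection metrics.
# All counts below are hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class DetectionCounts:
    true_positives: int    # instances of this forceps class detected and labelled correctly
    false_positives: int   # detections assigned to this class that are wrong
    false_negatives: int   # annotated instances of this class that the model missed

def recall(c: DetectionCounts) -> float:
    """Fraction of annotated instances the model found: TP / (TP + FN)."""
    return c.true_positives / (c.true_positives + c.false_negatives)

def precision(c: DetectionCounts) -> float:
    """Fraction of the model's detections that are correct: TP / (TP + FP)."""
    return c.true_positives / (c.true_positives + c.false_positives)

# Hypothetical counts for one class (not taken from the study):
grasping = DetectionCounts(true_positives=98, false_positives=2, false_negatives=2)
print(f"recall = {recall(grasping):.1%}, precision = {precision(grasping):.1%}")

Under these definitions, the overall figure of 1039/1062 (97.8%) correctly identified forceps reads as a pooled detection rate over all five annotated classes.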


Bibliographic Details
Main Authors: Yoshiko Bamba, Shimpei Ogawa, Michio Itabashi, Shingo Kameoka, Takahiro Okamoto, Masakazu Yamamoto
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/990396cdcb9f4a97abaae6b80d4de589
id oai:doaj.org-article:990396cdcb9f4a97abaae6b80d4de589
record_format dspace
spelling oai:doaj.org-article:990396cdcb9f4a97abaae6b80d4de589 (last updated 2021-11-21T12:17:18Z)
title Automated recognition of objects and types of forceps in surgical images using deep learning
doi 10.1038/s41598-021-01911-1
issn 2045-2322
publication_date 2021-11-01T00:00:00Z
fulltext_url https://doi.org/10.1038/s41598-021-01911-1
journal_toc https://doaj.org/toc/2045-2322
source Scientific Reports, Vol 11, Iss 1, Pp 1-8 (2021)
institution DOAJ
collection DOAJ
language EN
topic Medicine
R
Science
Q
description Abstract Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
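The description notes that 1818 training images and 500 validation images were extracted from surgical videos. As a rough illustration of that preprocessing step only, here is a minimal Python sketch assuming OpenCV is available; the sampling interval, paths, and file names are hypothetical, since the record does not describe how the authors extracted frames.

# Minimal sketch (assuming OpenCV) of extracting still images from surgical
# video for annotation and model training. Interval and paths are hypothetical;
# this is not the authors' extraction procedure.

import os
import cv2  # pip install opencv-python

def extract_frames(video_path: str, out_dir: str, every_n_frames: int = 30) -> int:
    """Save every Nth frame of video_path as a JPEG in out_dir; return the number saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if frame_idx % every_n_frames == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{frame_idx:06d}.jpg"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Example with hypothetical paths:
# n = extract_frames("surgery_case01.mp4", "frames/case01", every_n_frames=30)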
format article
author Yoshiko Bamba
Shimpei Ogawa
Michio Itabashi
Shingo Kameoka
Takahiro Okamoto
Masakazu Yamamoto
author_sort Yoshiko Bamba
title Automated recognition of objects and types of forceps in surgical images using deep learning
publisher Nature Portfolio
publishDate 2021
url https://doaj.org/article/990396cdcb9f4a97abaae6b80d4de589