Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
Madeleine Selvander,1,2 Peter Åsman1
1Department of Clinical Sciences, Malmö: Ophthalmology, Lund University, Malmö, Sweden; 2Practicum Clinical Skills Centre, Skåne University Hospital, Malmö, Sweden
Main Authors: Madeleine Selvander, Peter Åsman
Format: article
Language: English
Published: Dove Medical Press, 2013
Online Access: https://doaj.org/article/5e6efae05b424ea89ad2231c52dee398
Summary:

Purpose: To compare the internal computer-based scoring with human-based video scoring of cataract modules in the Eyesi virtual reality intraocular surgical simulator, a comparative case series was conducted at the Department of Clinical Sciences – Ophthalmology, Lund University, Skåne University Hospital, Malmö, Sweden.

Methods: Seven cataract surgeons and 17 medical students performed one video-recorded trial with each of the capsulorhexis, hydromaneuvers, and phacoemulsification divide-and-conquer modules. For each module, the simulator calculated an overall score for the performance ranging from 0 to 100. Two experienced masked cataract surgeons analyzed each video using the Objective Structured Assessment of Cataract Surgical Skill (OSACSS) for the individual modules and a modified Objective Structured Assessment of Technical Skills (OSATS) for all three modules together. The average of the two assessors' scores for each tool was used as the video-based performance score. The ability to discriminate surgeons from naive individuals using the simulator score and the video score, respectively, was compared using receiver operating characteristic (ROC) curves.

Results: The ROC areas for the simulator score did not differ from 0.5 (random) for the hydromaneuvers and phacoemulsification modules, yielding unacceptably poor discrimination. The OSACSS video scores all showed good ROC areas significantly different from 0.5. The OSACSS video score was also superior to the simulator score for the phacoemulsification procedure: ROC area 0.945 vs 0.664 (P = 0.010). Corresponding values were 0.887 vs 0.761 (P = 0.056) for capsulorhexis and 0.817 vs 0.571 (P = 0.052) for hydromaneuvers, for the video and simulator scores, respectively. The ROC area for the combined procedure was 0.938 for the OSATS video score and 0.799 for the simulator score (P = 0.072).

Conclusion: Video-based scoring of the phacoemulsification procedure was superior to the innate simulator scoring system in distinguishing cataract surgical skills. Simulator scoring rendered unacceptably poor discrimination for both the hydromaneuvers and the phacoemulsification divide-and-conquer modules. Our results indicate a potential for improvement in the Eyesi internal computer-based scoring.

Keywords: simulator, training, cataract surgery, ROC, virtual reality
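The study's central comparison is between ROC areas: how well each score separates experienced surgeons from novices. The Python sketch below illustrates that kind of analysis in general terms only. The scores it generates are invented placeholders, and scikit-learn's roc_auc_score together with a paired bootstrap are stand-ins for whatever software and paired-ROC test the authors actually used; nothing here reproduces the study's data or results.

```python
# Minimal sketch of an ROC-area comparison between two scoring systems,
# assuming 7 experienced surgeons and 17 novices as in the abstract.
# All score values below are simulated placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 1 = experienced cataract surgeon, 0 = naive individual (medical student)
group = np.array([1] * 7 + [0] * 17)

# Hypothetical overall scores (0-100) from the simulator and from
# masked video raters for the same 24 performances.
simulator_score = rng.normal(loc=np.where(group == 1, 60, 55), scale=10)
video_score = rng.normal(loc=np.where(group == 1, 75, 50), scale=10)

auc_sim = roc_auc_score(group, simulator_score)
auc_vid = roc_auc_score(group, video_score)
print(f"Simulator ROC area: {auc_sim:.3f}")
print(f"Video ROC area:     {auc_vid:.3f}")

# Paired bootstrap over participants to gauge whether the two ROC areas
# differ (a stand-in for a formal paired-ROC significance test).
n_boot = 10_000
idx = np.arange(len(group))
diffs = []
for _ in range(n_boot):
    sample = rng.choice(idx, size=len(idx), replace=True)
    if len(np.unique(group[sample])) < 2:  # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(group[sample], video_score[sample])
                 - roc_auc_score(group[sample], simulator_score[sample]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"Bootstrap 95% CI for AUC difference (video - simulator): {lo:.3f} to {hi:.3f}")
```

An ROC area near 0.5 means a score does no better than chance at separating the two groups, which is the criterion the abstract uses when it calls the simulator's hydromaneuvers and phacoemulsification scores unacceptably poor discriminators.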