An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images

Bibliographic Details
Main Authors: Alexandros Papadopoulos, Fotis Topouzis, Anastasios Delopoulos
Format: article
Language: English
Published: Nature Portfolio, 2021
Subjects: R; Q
Online Access: https://doaj.org/article/c9c9724992b54955aeca1777eed06637
Abstract
Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering the chances of a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions, or even of complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism, our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, on which it exhibits near state-of-the-art classification performance (AUC of 0.961 on Kaggle and 0.976 on Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 on the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution to the problem of automated diabetic retinopathy grading.
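
The record itself contains no code. As a rough illustration of the attention-based multiple-instance aggregation the abstract describes, the sketch below pools per-patch embeddings with learned attention weights and reuses those weights as a coarse lesion heatmap. It is a minimal PyTorch sketch of the general idea (in the style of attention-based MIL pooling), not the authors' implementation; all names, dimensions, and the pre-extracted-feature assumption are illustrative.

```python
import torch
import torch.nn as nn


class AttentionMILClassifier(nn.Module):
    """Illustrative attention-based multiple-instance classifier.

    Each fundus image is treated as a bag of rectangular patches.
    A shared encoder (assumed to run beforehand) embeds every patch;
    an attention module weighs the patch embeddings; their weighted
    sum is classified as referable / non-referable DR. The attention
    weights double as a coarse lesion heatmap. This is a sketch of
    the general technique, not the paper's exact architecture.
    """

    def __init__(self, feature_dim: int = 512, attn_dim: int = 128):
        super().__init__()
        # Attention scorer over patch embeddings (assumed dimensions).
        self.attention = nn.Sequential(
            nn.Linear(feature_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Binary head for referable-DR classification.
        self.classifier = nn.Linear(feature_dim, 1)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (num_patches, feature_dim) for one image (bag).
        scores = self.attention(patch_feats)           # (N, 1)
        weights = torch.softmax(scores, dim=0)         # attention over patches
        bag_repr = (weights * patch_feats).sum(dim=0)  # (feature_dim,)
        logit = self.classifier(bag_repr)              # referable-DR logit
        return logit, weights.squeeze(-1)              # weights -> heatmap


# Usage sketch: 64 patches with 512-dim features from some CNN backbone.
model = AttentionMILClassifier()
feats = torch.randn(64, 512)
logit, attn = model(feats)
prob = torch.sigmoid(logit)
# `attn` can be reshaped onto the patch grid (e.g. 8x8 here) and
# upsampled to form a lesion heatmap over the fundus image.
```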