Smart Glass System Using Deep Learning for the Blind and Visually Impaired

Individuals with visual impairments and blindness face difficulties in moving independently and in managing many routine tasks. Artificial intelligence and computer vision approaches can help blind and visually impaired (BVI) people carry out their primary activities without depending heavily on others. Smart glasses are a promising assistive technology for BVI people, aiding individual travel and providing social comfort and safety. In practice, however, the BVI are often unable to move alone, particularly in dark scenes and at night. In this study, we propose a smart glass system for BVI people that employs computer vision techniques, deep learning models, audio feedback, and tactile graphics to facilitate independent movement in night-time environments. The system comprises four models: a low-light image enhancement model, an object recognition and audio feedback model, a salient object detection model, and a text-to-speech and tactile graphics generation model. The system assists users in the following ways: (1) enhancing the contrast of images under low-light conditions using a two-branch exposure-fusion network; (2) guiding users with audio feedback from a transformer encoder–decoder object detection model that can recognize 133 object categories, such as people, animals, and cars; and (3) providing access to visual information through salient object extraction, text recognition, and a refreshable tactile display. We evaluated the system and achieved competitive performance on the challenging Low-Light and ExDark datasets.
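As a rough illustration of how the pipeline stages described above chain together, here is a minimal Python sketch. All names (`SmartGlassPipeline`, `Detection`, the brightening and detection rules) are hypothetical placeholders for illustration only, not the paper's implementation.

```python
# Hypothetical sketch of the staged pipeline described in the abstract;
# class and method names are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # one of the detector's object categories, e.g. "person"
    confidence: float   # detector confidence in [0, 1]

class SmartGlassPipeline:
    """Chains enhancement -> detection -> audio feedback, as in the abstract."""

    def enhance_low_light(self, image):
        # Placeholder for the two-branch exposure-fusion network:
        # simply brighten each pixel value, clamped to the 8-bit range.
        return [min(255, int(p * 1.5)) for p in image]

    def detect_objects(self, image):
        # Placeholder for the transformer encoder-decoder detector;
        # pretends to spot a person once the scene is bright enough.
        return [Detection("person", 0.91)] if max(image) > 100 else []

    def audio_feedback(self, detections):
        # Turn detections into spoken-style messages (plain strings here).
        return [f"{d.label} ahead ({d.confidence:.0%} confidence)"
                for d in detections]

    def run(self, image):
        enhanced = self.enhance_low_light(image)
        return self.audio_feedback(self.detect_objects(enhanced))
```

For example, `SmartGlassPipeline().run([40, 60, 80])` brightens the dark frame to `[60, 90, 120]`, the placeholder detector reports a person, and the call returns `['person ahead (91% confidence)']`.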


Bibliographic Details
Main Authors: Mukhriddin Mukhiddinov, Jinsoo Cho
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/d40be9b6b49d41eb8836a0adb2d199e0
Record ID: oai:doaj.org-article:d40be9b6b49d41eb8836a0adb2d199e0
Record Format: dspace
DOI: 10.3390/electronics10222756
ISSN: 2079-9292
Publication Date: 2021-11-01
Full Text: https://www.mdpi.com/2079-9292/10/22/2756
Journal TOC: https://doaj.org/toc/2079-9292
Published in: Electronics, Vol 10, Iss 22, p 2756 (2021)
Subjects: smart glasses; artificial intelligence; blind and visually impaired; deep learning; low-light images; assistive technologies; Electronics; TK7800-8360