Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users
Electrical Powered Wheelchair (EPW) users may find navigation through indoor and outdoor environments a significant challenge due to their disabilities. Moreover, they may suffer from near-sightedness or cognitive problems that limit their driving experience. Developing a system that can help EPW users to navigate safely by providing visual feedback and further assistance when needed can have a significant impact on the user’s wellbeing. ...
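The abstract notes that the proposed systems replace the DeepLabv3+ base network with a smaller one to keep inference fast on high-resolution images. The sketch below only illustrates that lightweight-backbone idea using torchvision's stock DeepLabV3 with a MobileNetV3-Large backbone; the backbone choice, class count, and input resolution are assumptions, not the authors' modified architecture.

```python
# Minimal sketch of a segmentation model with a lightweight backbone.
# NOT the paper's modified DeepLabv3+: torchvision's DeepLabV3 with a
# MobileNetV3-Large backbone is used here only to show the "smaller base
# network" idea for fast inference on high-resolution frames.
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

NUM_CLASSES = 21  # assumed placeholder; the paper's indoor/outdoor class counts differ

model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=NUM_CLASSES)
model.eval()

# One high-resolution RGB frame (batch, channels, height, width).
frame = torch.randn(1, 3, 720, 1280)

with torch.no_grad():
    logits = model(frame)["out"]   # [1, NUM_CLASSES, 720, 1280]
    labels = logits.argmax(dim=1)  # per-pixel class predictions

print(labels.shape)  # torch.Size([1, 720, 1280])
```

A smaller backbone trades some representational capacity for speed, which matches the abstract's claim of fast inference compared to systems with deeper base networks.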
Saved in:
Main Authors: | Elhassan Mohamed, Konstantinos Sirlantzis, Gareth Howells |
---|---|
Format: | article |
Language: | EN |
Published: | IEEE, 2021 |
Subjects: | CNN architecture; disabled people; deep learning; object localization; object detection; pixels classification; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
Online Access: | https://doaj.org/article/f9f5c9ac15d64a0983c7144ef1623f2c |
id |
oai:doaj.org-article:f9f5c9ac15d64a0983c7144ef1623f2c |
---|---|
record_format |
dspace |
spelling |
oai:doaj.org-article:f9f5c9ac15d64a0983c7144ef1623f2c; 2021-11-18T00:11:05Z; Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users; ISSN 2169-3536; DOI 10.1109/ACCESS.2021.3123952; https://doaj.org/article/f9f5c9ac15d64a0983c7144ef1623f2c; 2021-01-01T00:00:00Z; https://ieeexplore.ieee.org/document/9594521/; https://doaj.org/toc/2169-3536; [abstract as in the description field below]; Elhassan Mohamed; Konstantinos Sirlantzis; Gareth Howells; IEEE; article; CNN architecture; disabled people; deep learning; object localization; object detection; pixels classification; Electrical engineering. Electronics. Nuclear engineering; TK1-9971; EN; IEEE Access, Vol 9, Pp 147914-147932 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
CNN architecture; disabled people; deep learning; object localization; object detection; pixels classification; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
spellingShingle |
CNN architecture; disabled people; deep learning; object localization; object detection; pixels classification; Electrical engineering. Electronics. Nuclear engineering; TK1-9971; Elhassan Mohamed; Konstantinos Sirlantzis; Gareth Howells; Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
description |
Electrical Powered Wheelchair (EPW) users may find navigation through indoor and outdoor environments a significant challenge due to their disabilities. Moreover, they may suffer from near-sightedness or cognitive problems that limit their driving experience. Developing a system that can help EPW users to navigate safely by providing visual feedback and further assistance when needed can have a significant impact on the user’s wellbeing. This paper presents deep-learning computer vision systems, built on residual blocks, that can semantically segment high-resolution images. The systems are modified versions of DeepLab v3+ that can process high-resolution input images. In addition, they can process images from both indoor and outdoor environments simultaneously, which is challenging due to the differences in data distribution and context. The proposed systems replace the base network with a smaller one and modify the encoder-decoder architecture; nevertheless, they produce high-quality outputs with faster inference than systems with deeper base networks. Two datasets are used to train the semantic segmentation systems: an indoor, application-based dataset that was collected and annotated manually, and an outdoor dataset, so that both environments are covered. The user can toggle between the two individual systems depending on the situation. Moreover, we propose shared systems that automatically select a specific semantic segmentation system based on the pixels’ confidence scores. The annotated output scene is presented to the EPW user to aid independent navigation. State-of-the-art semantic segmentation techniques are discussed and compared. Results show the ability of the proposed systems to detect objects with sharp edges and high accuracy in both indoor and outdoor environments. The developed systems are deployed on a GPU-based board and then integrated into an EPW for practical use and evaluation. The indoor dataset is made publicly available online. |
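The description mentions shared systems that automatically select a segmentation system from the pixels' confidence scores. The sketch below shows one plausible reading, assuming per-pixel selection by maximum softmax confidence between an indoor and an outdoor model; the function name, tensor shapes, and fusion rule are illustrative assumptions rather than the paper's actual mechanism.

```python
# Sketch: confidence-based fusion of an indoor and an outdoor segmentation
# model's outputs. Assumes both models emit logits over the same class set and
# spatial size; the real systems may differ (e.g., per-image rather than
# per-pixel selection, or different class vocabularies).
import torch
import torch.nn.functional as F

def fuse_by_confidence(indoor_logits: torch.Tensor,
                       outdoor_logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel labels, each pixel taken from the more confident model."""
    indoor_probs = F.softmax(indoor_logits, dim=1)
    outdoor_probs = F.softmax(outdoor_logits, dim=1)

    indoor_conf, indoor_labels = indoor_probs.max(dim=1)     # [N, H, W]
    outdoor_conf, outdoor_labels = outdoor_probs.max(dim=1)  # [N, H, W]

    use_indoor = indoor_conf >= outdoor_conf
    return torch.where(use_indoor, indoor_labels, outdoor_labels)

# Toy tensors standing in for the two models' outputs.
indoor_out = torch.randn(1, 12, 90, 160)
outdoor_out = torch.randn(1, 12, 90, 160)
print(fuse_by_confidence(indoor_out, outdoor_out).shape)  # torch.Size([1, 90, 160])
```

If the two systems use different label sets, the fused labels would additionally need remapping to a shared vocabulary before the annotated scene is rendered for the EPW user.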
format |
article |
author |
Elhassan Mohamed; Konstantinos Sirlantzis; Gareth Howells |
author_facet |
Elhassan Mohamed; Konstantinos Sirlantzis; Gareth Howells |
author_sort |
Elhassan Mohamed |
title |
Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
title_short |
Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
title_full |
Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
title_fullStr |
Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
title_full_unstemmed |
Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users |
title_sort |
indoor/outdoor semantic segmentation using deep learning for visually impaired wheelchair users |
publisher |
IEEE |
publishDate |
2021 |
url |
https://doaj.org/article/f9f5c9ac15d64a0983c7144ef1623f2c |
work_keys_str_mv |
AT elhassanmohamed indooroutdoorsemanticsegmentationusingdeeplearningforvisuallyimpairedwheelchairusers AT konstantinossirlantzis indooroutdoorsemanticsegmentationusingdeeplearningforvisuallyimpairedwheelchairusers AT garethhowells indooroutdoorsemanticsegmentationusingdeeplearningforvisuallyimpairedwheelchairusers |
_version_ |
1718425201045143552 |