Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers

Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become an increasingly popular research topic. This research proposes a sky and ground segmentation framework for rover navigation vision by adopting weak supervision and transfer learning. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
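The seven metrics reported for this framework (accuracy, precision, recall, dice/F1, MCR, RMSE, IoU) all follow from the confusion counts of a binary sky/ground mask. The sketch below is not from the paper; it is a minimal illustration of the standard definitions of those metrics for binary segmentation, with NumPy, and the function name `segmentation_metrics` is a hypothetical choice.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard binary-segmentation metrics from confusion counts.
    `pred` and `target` are arrays where nonzero marks sky pixels."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.sum(pred & target)      # sky predicted as sky
    tn = np.sum(~pred & ~target)    # ground predicted as ground
    fp = np.sum(pred & ~target)     # ground predicted as sky
    fn = np.sum(~pred & target)     # sky predicted as ground
    n = pred.size
    return {
        "accuracy": (tp + tn) / n,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),        # F1 score
        "mcr": (fp + fn) / n,                       # misclassification rate
        # for 0/1 masks the squared error per pixel is 0 or 1,
        # so RMSE reduces to sqrt(MCR)
        "rmse": np.sqrt(np.mean((pred.astype(float)
                                 - target.astype(float)) ** 2)),
        "iou": tp / (tp + fp + fn),
    }
```

Note the binary-mask identity RMSE = sqrt(MCR): the reported 0.0427 RMSE and 0.0077 MCR are consistent under exactly this relation (sqrt(0.0077) ≈ 0.0438, close once the per-image averaging used in the paper is accounted for).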


Saved in:
Bibliographic Details
Main Authors: Boyu Kuang, Zeeshan A. Rana, Yifan Zhao
Format: article
Language: EN
Published: MDPI AG 2021
Subjects:
Online Access: https://doaj.org/article/2f99883e9da1496c99f2e4db30fb5c74
id oai:doaj.org-article:2f99883e9da1496c99f2e4db30fb5c74
record_format dspace
spelling oai:doaj.org-article:2f99883e9da1496c99f2e4db30fb5c74 2021-11-11T19:02:19Z Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers. DOI: 10.3390/s21216996. ISSN: 1424-8220. 2021-10-01. https://doaj.org/article/2f99883e9da1496c99f2e4db30fb5c74 https://www.mdpi.com/1424-8220/21/21/6996 https://doaj.org/toc/1424-8220. Authors: Boyu Kuang, Zeeshan A. Rana, Yifan Zhao. Publisher: MDPI AG. Topics: semantic segmentation; weak supervision; transfer learning; conservative annotation method; visual navigation; visual sensor; Chemical technology; TP1-1185. Language: EN. Source: Sensors, Vol 21, Iss 6996, p 6996 (2021).
institution DOAJ
collection DOAJ
language EN
topic semantic segmentation
weak supervision
transfer learning
conservative annotation method
visual navigation
visual sensor
Chemical technology
TP1-1185
spellingShingle semantic segmentation
weak supervision
transfer learning
conservative annotation method
visual navigation
visual sensor
Chemical technology
TP1-1185
Boyu Kuang
Zeeshan A. Rana
Yifan Zhao
Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
description Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become an increasingly popular research topic. This research proposes a sky and ground segmentation framework for rover navigation vision by adopting weak supervision and transfer learning. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
format article
author Boyu Kuang
Zeeshan A. Rana
Yifan Zhao
author_facet Boyu Kuang
Zeeshan A. Rana
Yifan Zhao
author_sort Boyu Kuang
title Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_short Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_full Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_fullStr Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_full_unstemmed Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_sort sky and ground segmentation in the navigation visions of the planetary rovers
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/2f99883e9da1496c99f2e4db30fb5c74
work_keys_str_mv AT boyukuang skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers
AT zeeshanarana skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers
AT yifanzhao skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers
_version_ 1718431633894277120