Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
Lane and road marker segmentation is crucial in autonomous driving, and many related methods have been proposed in this field. However, most of them are based on single-frame prediction, which causes unstable results between frames, while existing multi-frame semantic segmentation methods suffer from error accumulation and are not fast enough. We therefore propose a deep learning algorithm that exploits the continuity of adjacent image frames, combining an image-sequence processing step with an end-to-end trainable multi-input, single-output network that jointly segments lanes and road markers. To emphasize high-probability target locations in adjacent frames and to refine the segmentation of the current frame, we explicitly model temporal consistency: the segmentation region of the previous frame is expanded, warped to the current frame using the optical flow between adjacent frames, and fed to the network as an additional input during both training and inference, which strengthens the network's attention to the target area of the past frame. We segment lanes and road markers on the Baidu ApolloScape lane mark segmentation dataset and the CULane dataset and present benchmarks for several networks. The experimental results show that the method accelerates video lane and road marker segmentation by 2.5 times and increases accuracy by 1.4%, while temporal consistency drops by at most 2.2%.
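The warp-and-reuse step described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a single-channel uint8 mask, uses OpenCV's Farneback flow as a stand-in for whatever flow estimator the paper employs, and the dilation size, 4-channel input layout, and helper names (warp_prev_mask, build_network_input) are hypothetical choices for illustration only.

```python
# Minimal sketch, assuming OpenCV dense flow and a single-channel uint8 mask:
# warp the previous frame's segmentation mask to the current frame, dilate it
# to enlarge the likely target region, and stack it with the current frame as
# an extra input channel for the segmentation network.
import cv2
import numpy as np

def warp_prev_mask(prev_mask: np.ndarray,
                   prev_gray: np.ndarray,
                   cur_gray: np.ndarray,
                   dilate_px: int = 15) -> np.ndarray:
    """Project the previous-frame mask into the current frame and expand it."""
    h, w = cur_gray.shape
    # Backward flow (current -> previous), so each current pixel knows where
    # it came from in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_mask, map_x, map_y, interpolation=cv2.INTER_NEAREST)
    # Expand the warped region so small flow errors still cover the target
    # (the dilation size is an assumption, not a value from the paper).
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(warped, kernel)

def build_network_input(cur_bgr: np.ndarray,
                        prev_mask: np.ndarray,
                        prev_gray: np.ndarray) -> np.ndarray:
    """Concatenate the current frame with the warped, dilated previous mask."""
    cur_gray = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    prior = warp_prev_mask(prev_mask, prev_gray, cur_gray)
    # 4-channel tensor (B, G, R, mask prior) as one plausible multi-input layout.
    return np.dstack([cur_bgr, prior[..., None]])
```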
Main Authors: Guansheng Xing, Ziming Zhu
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects: lane and road marker segmentation; mask cropping; optical flow estimation; semantic video segmentation; temporal consistency
Online Access: https://doaj.org/article/80c5503b8e5542fcbbb02eaee20d2f47
id: oai:doaj.org-article:80c5503b8e5542fcbbb02eaee20d2f47
record_format: dspace
spelling: oai:doaj.org-article:80c5503b8e5542fcbbb02eaee20d2f47 (2021-11-11T19:09:09Z); Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation; DOI: 10.3390/s21217156; ISSN: 1424-8220; published 2021-10-01; full text: https://www.mdpi.com/1424-8220/21/21/7156; journal TOC: https://doaj.org/toc/1424-8220; authors: Guansheng Xing, Ziming Zhu; publisher: MDPI AG; format: article; subjects: lane and road marker segmentation, mask cropping, optical flow estimation, semantic video segmentation, temporal consistency, Chemical technology (TP1-1185); language: EN; source: Sensors, Vol 21, Iss 7156, p 7156 (2021)
institution: DOAJ
collection: DOAJ
language: EN
topic: lane and road marker segmentation; mask cropping; optical flow estimation; semantic video segmentation; temporal consistency; Chemical technology; TP1-1185
spellingShingle: lane and road marker segmentation; mask cropping; optical flow estimation; semantic video segmentation; temporal consistency; Chemical technology; TP1-1185; Guansheng Xing; Ziming Zhu; Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
description: Lane and road marker segmentation is crucial in autonomous driving, and many related methods have been proposed in this field. However, most of them are based on single-frame prediction, which causes unstable results between frames, while existing multi-frame semantic segmentation methods suffer from error accumulation and are not fast enough. We therefore propose a deep learning algorithm that exploits the continuity of adjacent image frames, combining an image-sequence processing step with an end-to-end trainable multi-input, single-output network that jointly segments lanes and road markers. To emphasize high-probability target locations in adjacent frames and to refine the segmentation of the current frame, we explicitly model temporal consistency: the segmentation region of the previous frame is expanded, warped to the current frame using the optical flow between adjacent frames, and fed to the network as an additional input during both training and inference, which strengthens the network's attention to the target area of the past frame. We segment lanes and road markers on the Baidu ApolloScape lane mark segmentation dataset and the CULane dataset and present benchmarks for several networks. The experimental results show that the method accelerates video lane and road marker segmentation by 2.5 times and increases accuracy by 1.4%, while temporal consistency drops by at most 2.2%.
format: article
author: Guansheng Xing; Ziming Zhu
author_facet: Guansheng Xing; Ziming Zhu
author_sort: Guansheng Xing
title: Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_short: Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_full: Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_fullStr: Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_full_unstemmed: Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_sort: lane and road marker semantic video segmentation using mask cropping and optical flow estimation
publisher: MDPI AG
publishDate: 2021
url: https://doaj.org/article/80c5503b8e5542fcbbb02eaee20d2f47
work_keys_str_mv: AT guanshengxing laneandroadmarkersemanticvideosegmentationusingmaskcroppingandopticalflowestimation; AT zimingzhu laneandroadmarkersemanticvideosegmentationusingmaskcroppingandopticalflowestimation
_version_: 1718431594569531392