A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment

In this paper, we develop end-to-end autonomous driving based on a 2D LiDAR sensor and a camera sensor that predicts the vehicle's control values from the input data, instead of modeling rule-based autonomous driving. Unlike many studies that rely on simulated data, we built an end-to-end autonomous driving algorithm from data collected during real driving and analyzed its performance. Based on data obtained from an actual urban driving environment, end-to-end autonomous driving was possible in informal situations, such as at a traffic signal, by predicting the vehicle control values with a convolutional neural network. In addition, this paper addresses the data-imbalance problem by eliminating redundant frames recorded while stopping and driving, which improves self-driving performance. Finally, we verified through activation maps how the network predicts the longitudinal and lateral control values by recognizing traffic facilities in the driving environment. Experiments and analysis demonstrate the validity of the proposed algorithm.
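The data-imbalance fix described in the abstract (dropping redundant frames recorded while the vehicle is stopped or driving steadily) could be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing: the frame representation as `(steering, throttle)` pairs and the `min_delta` threshold are assumptions for the sketch.

```python
def drop_redundant_frames(frames, min_delta=0.05):
    """Keep a frame only when its control values change noticeably
    from the last kept frame, reducing over-represented steady-state
    samples (e.g. long stops at a traffic signal).

    frames: list of (steering, throttle) control-value pairs.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for steer, throttle in frames[1:]:
        last_steer, last_throttle = kept[-1]
        # Keep the frame only if at least one control value moved
        # by more than the threshold since the last kept frame.
        if (abs(steer - last_steer) >= min_delta
                or abs(throttle - last_throttle) >= min_delta):
            kept.append((steer, throttle))
    return kept

# A long stop (100 identical zero-control frames) collapses to one sample:
frames = [(0.0, 0.0)] * 100 + [(0.5, 0.3), (0.5, 0.3), (0.9, 0.3)]
print(len(drop_redundant_frames(frames)))  # → 3
```

In practice a change-based filter like this keeps the rare, informative maneuvers (turns, braking) while thinning the dominant straight-driving and stopped frames, which is the imbalance the paper targets.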


Bibliographic Details
Main Authors: Mingyu Park, Hyeonseok Kim, Seongkeun Park
Format: Article
Language: English
Published: MDPI AG, 2021
Subjects: end-to-end control; convolutional neural network; self-driving; LiDAR sensor; vision sensor
Online Access: https://doaj.org/article/1714953ab4284c21a69bc64ff0144aab
DOI: 10.3390/electronics10212608
ISSN: 2079-9292
Published online: 2021-10-01
Full text: https://www.mdpi.com/2079-9292/10/21/2608
Journal: Electronics, Vol. 10, Iss. 21, Article 2608 (2021)