Deep imitation reinforcement learning for self‐driving by vision
Abstract: Deep reinforcement learning has achieved remarkable results in self-driving, but much work remains for autonomous driving with high real-time requirements because reinforcement learning explores large continuous action spaces inefficiently. A deep imitation reinforcement learning (DIRL) framework is presented that learns vision-based control policies for self-driving vehicles using the deep deterministic policy gradient (DDPG) algorithm. The DIRL framework comprises two components: a perception module based on imitation learning (IL) and a control module based on DDPG. The perception module employs the IL network as an encoder that compresses an image into a low-dimensional feature vector, which is then passed to the control module to produce control commands. Meanwhile, the actor network of the DDPG is initialized with the trained IL network to improve exploration efficiency. In addition, a reward function is defined to improve the stability of self-driving vehicles, especially on curves. DIRL is verified in The Open Racing Car Simulator (TORCS), and the results show that it successfully learns a correct control strategy with less training time.
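The abstract describes a two-stage architecture: an imitation-learned perception encoder whose feature vector feeds a DDPG actor, with the actor warm-started from the trained IL weights. The sketch below illustrates that wiring. It is a minimal illustration under stated assumptions, not the authors' implementation: PyTorch, the layer sizes, the 64x64 input, the two-dimensional action, and the reward form are all placeholders introduced here; the reward follows a common TORCS-style shaping (forward speed along the track axis, penalized by heading error and lateral offset), which matches the paper's curve-stability goal but is not necessarily its exact definition.

```python
# Minimal sketch of the DIRL wiring described in the abstract (assumed
# PyTorch; layer sizes, input resolution, and constants are illustrative).
import math

import torch
import torch.nn as nn


class ILEncoder(nn.Module):
    """Perception module: compresses a camera image into a low-dimensional
    feature vector. In DIRL this network is first trained by imitation
    learning on expert driving data."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size for a 64x64 input
            n = self.conv(torch.zeros(1, 3, 64, 64)).shape[1]
        self.fc = nn.Linear(n, feat_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.fc(self.conv(img)))


class Actor(nn.Module):
    """Control module: DDPG actor mapping the feature vector to continuous
    control commands (here: steering and throttle in [-1, 1])."""

    def __init__(self, encoder: ILEncoder, feat_dim: int = 64, act_dim: int = 2):
        super().__init__()
        self.encoder = encoder  # warm start: reuse the IL-trained encoder
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(img))


def reward(speed: float, angle: float, track_pos: float) -> float:
    """Illustrative TORCS-style reward (placeholder, not the paper's exact
    formula): reward speed along the track axis, penalize heading error
    (angle) and lateral offset (track_pos), which dominates on curves."""
    return speed * math.cos(angle) - speed * abs(math.sin(angle)) - speed * abs(track_pos)


# Initialize the actor from the imitation-trained encoder, then query it.
il_encoder = ILEncoder()           # ... IL pretraining would happen here ...
actor = Actor(il_encoder)
action = actor(torch.zeros(1, 3, 64, 64))  # placeholder frame -> [steer, throttle]
```

Sharing the encoder this way gives DDPG a sensible starting policy, so exploration begins near expert behavior rather than from random weights, which is the training-time saving the abstract reports.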
Main Authors: Qijie Zou, Kang Xiong, Qiang Fang, Bohan Jiang
Format: article
Language: EN
Published: Wiley, 2021
Subjects: Computational linguistics. Natural language processing (P98-98.5); Computer software (QA76.75-76.765)
Online Access: https://doaj.org/article/fe4e98da1afc4fd195e561d3feac3d0c
id: oai:doaj.org-article:fe4e98da1afc4fd195e561d3feac3d0c
record_format: dspace
ISSN: 2468-2322
DOI: 10.1049/cit2.12025 (https://doi.org/10.1049/cit2.12025)
Publication Date: 2021-12-01
Published in: CAAI Transactions on Intelligence Technology, Vol 6, Iss 4, Pp 493-503 (2021)
institution: DOAJ
collection: DOAJ