An Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay for Path Planning of Unmanned Surface Vehicles
Unmanned Surface Vehicles (USVs) have broad application prospects, and autonomous path planning, as one of their key technologies, has become a hot research direction in the USV field. This paper proposes an Improved Dueling Deep Double-Q Network based on Prioritized Experience Replay (IPD3QN) to address the slow and unstable convergence of the traditional Deep Q-Network (DQN) algorithm in autonomous USV path planning. First, a deep double Q-network decouples the selection and the evaluation of the action in the target Q value, eliminating overestimation. Prioritized experience replay is adopted to draw samples from the replay buffer, which raises the utilization of informative samples and accelerates neural-network training. The network is then optimized by introducing a dueling architecture. Finally, a soft update method improves the stability of the algorithm, and a dynamic ϵ-greedy method is used to find the optimal strategy. Experiments are first conducted on the OpenAI Gym platform to pre-validate the algorithm on two classic control problems, CartPole and MountainCar, and the impact of the hyperparameters on model performance is analyzed in detail. The algorithm is then validated in a maze environment. Comparative simulation experiments show that IPD3QN significantly improves learning performance in both convergence speed and convergence stability compared with DQN, D3QN, PD2QN, PDQN, and PD3QN. Moreover, with the IPD3QN algorithm a USV can plan the optimal path according to the actual navigation environment.
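The abstract combines several standard deep-RL ingredients: a double-Q target, a dueling head, prioritized sampling, a soft target update, and a decaying ϵ-greedy policy. As a rough, self-contained sketch of those update rules in plain Python (not the authors' implementation; all function names and parameter values here are hypothetical):

```python
import math
import random

def double_q_target(reward, q_online_next, q_target_next, done, gamma=0.99):
    """Double-DQN target: the online network selects the greedy action,
    but the target network evaluates it, which curbs Q-value overestimation."""
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star] * (0.0 if done else 1.0)

def dueling_aggregate(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def per_sample(priorities, batch_size, alpha=0.6):
    """Prioritized experience replay: sample indices with probability
    proportional to priority**alpha (here via naive weighted sampling)."""
    weights = [p ** alpha for p in priorities]
    return random.choices(range(len(priorities)), weights=weights, k=batch_size)

def soft_update(online_params, target_params, tau=0.005):
    """Soft (Polyak) target-network update instead of a periodic hard copy."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_params, target_params)]

def dynamic_epsilon(step, eps_start=1.0, eps_end=0.05, decay=1e-4):
    """Exponentially decaying epsilon for epsilon-greedy exploration."""
    return eps_end + (eps_start - eps_end) * math.exp(-decay * step)
```

This is only meant to make the abstract's terminology concrete; the paper's actual networks, priority scheme, and hyperparameter schedules are described in the full text.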
Main authors: Zhengwei Zhu, Can Hu, Chenyang Zhu, Yanping Zhu, Yu Sheng
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects: deep reinforcement learning; unmanned surface vehicle; path planning; algorithm optimization
Online access: https://doaj.org/article/42518bc1d13e4b07816ad089f5e92f37
id | oai:doaj.org-article:42518bc1d13e4b07816ad089f5e92f37 |
record_format | dspace |
DOI | 10.3390/jmse9111267 |
ISSN | 2077-1312 |
Date | 2021-11-01 |
Full text | https://www.mdpi.com/2077-1312/9/11/1267 |
Authors | Zhengwei Zhu; Can Hu; Chenyang Zhu; Yanping Zhu; Yu Sheng |
Publisher | MDPI AG |
Source | Journal of Marine Science and Engineering, Vol 9, Iss 11, p 1267 (2021) |
institution | DOAJ |
collection | DOAJ |
language | EN |
topic | deep reinforcement learning; unmanned surface vehicle; path planning; algorithm optimization; fusion and integration; Naval architecture. Shipbuilding. Marine engineering (VM1-989); Oceanography (GC1-1581) |
description | Unmanned Surface Vehicles (USVs) have broad application prospects, and autonomous path planning, as one of their key technologies, has become a hot research direction in the USV field. This paper proposes an Improved Dueling Deep Double-Q Network based on Prioritized Experience Replay (IPD3QN) to address the slow and unstable convergence of the traditional Deep Q-Network (DQN) algorithm in autonomous USV path planning. First, a deep double Q-network decouples the selection and the evaluation of the action in the target Q value, eliminating overestimation. Prioritized experience replay is adopted to draw samples from the replay buffer, which raises the utilization of informative samples and accelerates neural-network training. The network is then optimized by introducing a dueling architecture. Finally, a soft update method improves the stability of the algorithm, and a dynamic ϵ-greedy method is used to find the optimal strategy. Experiments are first conducted on the OpenAI Gym platform to pre-validate the algorithm on two classic control problems, CartPole and MountainCar, and the impact of the hyperparameters on model performance is analyzed in detail. The algorithm is then validated in a maze environment. Comparative simulation experiments show that IPD3QN significantly improves learning performance in both convergence speed and convergence stability compared with DQN, D3QN, PD2QN, PDQN, and PD3QN. Moreover, with the IPD3QN algorithm a USV can plan the optimal path according to the actual navigation environment. |
format | article |
author | Zhengwei Zhu; Can Hu; Chenyang Zhu; Yanping Zhu; Yu Sheng |
title | An Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay for Path Planning of Unmanned Surface Vehicles |