POAT-Net: Parallel Offset-Attention Assisted Transformer for 3D Object Detection for Autonomous Driving
Format: article
Language: EN
Published: IEEE, 2021
Online access: https://doaj.org/article/37203ad23a904e6aa2175a9f34c8069d
Summary: 3D object detection plays a key role in the perception pipelines of autonomous driving and industrial robot automation. The inherent characteristics of point clouds pose an enormous challenge to both spatial representation and association analysis. The unordered spatial structure of point clouds, together with density variations caused by gradually varying distances to the LiDAR, makes accurate and robust 3D object detection even more difficult. In this paper, we present POAT-Net, a novel transformer network for 3D point cloud object detection. The transformer is credited with great success in Natural Language Processing (NLP) and exhibits inspiring potential in point cloud processing. POAT-Net is inherently insensitive to element permutations within the unordered point cloud. Since the associations between local points contribute significantly to 3D object detection and other 3D tasks, parallel offset-attention is leveraged to highlight and capture these subtle associations. To overcome the non-uniform density distribution of different objects, we exploit a Normalized Multi-Resolution Grouping (NMRG) strategy to enhance the density-adaptive ability of POAT-Net. Quantitative experimental results on the KITTI3D dataset demonstrate that our method achieves state-of-the-art performance.
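The abstract highlights two properties the architecture relies on: offset-attention over point features and insensitivity to the ordering of points. The paper itself is not reproduced here, so the following is only a minimal sketch of a single offset-attention block in the style described in the point-cloud transformer literature: self-attention is computed over the points, the attended features are subtracted from the input ("offset"), and the offset is passed through a learned transform with a residual connection. All names (`offset_attention`, the weight matrices) are hypothetical, and the sketch omits batch normalization, multiple parallel heads, and the NMRG grouping stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, wq, wk, wv, w_out):
    """One offset-attention block (illustrative sketch, not the paper's exact layer).

    x: (n_points, d) per-point features; all weight matrices are (d, d).
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # Point-to-point attention weights, (n_points, n_points).
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    # The "offset": difference between input and attended features.
    offset = x - attn @ v
    # Learned transform of the offset (ReLU stands in for the usual
    # linear-norm-ReLU stack) plus a residual connection.
    return x + np.maximum(offset @ w_out, 0.0)

# Toy usage on random features.
d = 4
x = rng.normal(size=(6, d))
wq, wk, wv, wo = (rng.normal(size=(d, d)) for _ in range(4))
y = offset_attention(x, wq, wk, wv, wo)

# Permutation equivariance: reordering the input points reorders the
# output rows identically, which is what makes such a block usable on
# unordered point clouds.
perm = rng.permutation(6)
y_perm = offset_attention(x[perm], wq, wk, wv, wo)
```

Because every operation is either per-point or a row-wise softmax over all points, shuffling the input rows only shuffles the output rows, matching the permutation-insensitivity claim in the abstract.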