Dual‐view 3D human pose estimation without camera parameters for action recognition


Bibliographic Details
Main Authors: Long Liu, Le Yang, Wanjun Chen, Xin Gao
Format: article
Language: EN
Published: Wiley 2021
Online Access: https://doaj.org/article/b1a281c8a03a41f4b1a5c3d70f0f5386
Description
Summary: The purpose of 3D human pose estimation is to estimate the 3D coordinates of key points of the human body directly from images. Although multi‐view methods achieve better performance and higher coordinate‐estimation precision than single‐view methods, they need to know the camera parameters. To avoid this constraint and improve the generalizability of the model, a dual‐view single‐person 3D pose estimation method without camera parameters is proposed. The method first uses the 2D pose estimation network HR‐net to estimate 2D joint coordinates from two images with different views, and then feeds them into a 3D regression network that generates the final 3D joint coordinates. To make the 3D regression network fully learn the spatial structure of the human body and the projection relationship between different views, a self‐supervised training method is designed in which a 3D human pose orthogonal projection model generates virtual views. In pose estimation experiments on the Human3.6M dataset, the method achieves a significantly improved estimation error of 34.5 mm. Furthermore, action recognition based on the human poses extracted by the proposed method is conducted, achieving an accuracy of 83.19%.
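
The abstract describes the pipeline only at a high level: 2D keypoints from two views are regressed to a single 3D pose, and an orthogonal projection model produces virtual views for self‐supervision. The sketch below illustrates one way such a dual‐view regressor and orthogonal reprojection could be wired up; the network size, joint count (17, Human3.6M convention), rotation convention, and the reprojection loss are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a dual-view 3D regression pipeline
# with an orthogonal-projection model for generating virtual views.
import math
import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumed joint count (Human3.6M convention)


class DualView3DRegressor(nn.Module):
    """Regress 3D joints from 2D joints detected in two views (e.g. by HR-net)."""

    def __init__(self, num_joints: int = NUM_JOINTS, hidden: int = 1024):
        super().__init__()
        self.num_joints = num_joints
        in_dim = num_joints * 2 * 2           # (x, y) per joint, two views
        out_dim = num_joints * 3              # (x, y, z) per joint
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, kpts_view_a, kpts_view_b):
        # kpts_view_*: (batch, num_joints, 2) 2D coordinates from each view
        x = torch.cat([kpts_view_a, kpts_view_b], dim=1).flatten(1)
        return self.net(x).view(-1, self.num_joints, 3)


def orthogonal_project(pose_3d, yaw_deg):
    """Rotate the 3D pose about the vertical axis and drop the depth axis,
    producing a virtual 2D view under orthogonal (parallel) projection."""
    theta = math.radians(yaw_deg)
    c, s = math.cos(theta), math.sin(theta)
    rot = pose_3d.new_tensor([[c, 0.0, s],
                              [0.0, 1.0, 0.0],
                              [-s, 0.0, c]])
    rotated = pose_3d @ rot.T                 # (batch, num_joints, 3)
    return rotated[..., :2]                   # keep (x, y), discard depth


# Self-supervised consistency idea: the predicted 3D pose, orthogonally
# projected into a virtual view, should agree with 2D observations of that
# view. The yaw angle and the MSE loss below are illustrative choices.
model = DualView3DRegressor()
view_a = torch.randn(4, NUM_JOINTS, 2)
view_b = torch.randn(4, NUM_JOINTS, 2)
pred_3d = model(view_a, view_b)
virtual_2d = orthogonal_project(pred_3d, yaw_deg=30)
loss = nn.functional.mse_loss(virtual_2d, view_b)
```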