Part-Based Attribute-Aware Network for Person Re-Identification


Bibliographic Details
Main Authors: Yan Zhang, Xusheng Gu, Jun Tang, Ke Cheng, Shoubiao Tan
Format: Article
Language: English
Published: IEEE, 2019
Online Access: https://doaj.org/article/fdb14800318c4dadb21c26d14b9fbd85
Description
Summary: Despite the rapid progress over the past decade, person re-identification (reID) remains a challenging task because discriminative features at different granularities are easily affected by illumination and camera-view variation. Most deep learning-based reID algorithms extract a global embedding from a convolutional neural network as the representation of the pedestrian. Considering that person attributes are robust and informative cues for identifying pedestrians, this paper proposes a multi-branch model, namely the part-based attribute-aware network (PAAN), to jointly improve person reID and attribute recognition; it exploits not only the identity label attached to the whole image but also attribute information. To learn a discriminative and robust global representation that is invariant to the variations mentioned above, we resort to global and local person attributes to build global and local representations, respectively, using our proposed layered partition strategy. Our goal is to exploit global and local semantic information to guide the optimization of the global representation. In addition, to enhance the global representation, we design a semantic bridge that replenishes the final representation, which carries high-level semantic information, with mid-level semantic information. Extensive experiments on two large-scale person re-identification datasets, Market-1501 and DukeMTMC-reID, demonstrate the effectiveness of the proposed approach, which achieves rank-1 accuracy of 92.40% on Market-1501 and 82.59% on DukeMTMC-reID, showing strong competitiveness with the state of the art.
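
The abstract describes a multi-branch architecture: a shared backbone, a global branch supervised by identity and global-attribute labels, part-based local branches supervised by local attributes, and a semantic bridge that injects mid-level features into the final high-level representation. The sketch below is one plausible PyTorch reading of that description, based only on the abstract; the ResNet-50 backbone, the uniform three-part partition standing in for the layered partition strategy, the additive fusion, and all layer and attribute dimensions are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet50


class PAANSketch(nn.Module):
    """Illustrative multi-branch network: a global ID/attribute branch,
    part-based local attribute branches, and a mid-level "semantic bridge"."""

    def __init__(self, num_ids=751, num_global_attrs=12,
                 num_local_attrs=8, num_parts=3):
        super().__init__()
        backbone = resnet50(weights=None)
        # Shared convolutional trunk (everything before global pooling / fc).
        self.trunk = nn.ModuleList(list(backbone.children())[:-2])
        feat_dim = 2048

        # Global branch: identity classification and image-level attributes.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.global_attr_head = nn.Linear(feat_dim, num_global_attrs)

        # Local branch: uniform horizontal stripes stand in for the paper's
        # layered partition strategy; each part predicts local attributes.
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.local_attr_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_local_attrs) for _ in range(num_parts)])

        # Semantic bridge (assumed form): project mid-level features
        # (ResNet layer3 output) and add them to the high-level embedding.
        self.mid_proj = nn.Conv2d(1024, feat_dim, kernel_size=1)

    def forward(self, x):
        mid = None
        for i, layer in enumerate(self.trunk):
            x = layer(x)
            if i == len(self.trunk) - 2:   # keep the output of layer3
                mid = x
        global_feat = self.global_pool(x).flatten(1)
        mid_feat = self.global_pool(self.mid_proj(mid)).flatten(1)
        embedding = global_feat + mid_feat           # bridged representation

        parts = self.part_pool(x).squeeze(-1)        # (B, feat_dim, num_parts)
        local_logits = [head(parts[:, :, i])
                        for i, head in enumerate(self.local_attr_heads)]

        return {"id_logits": self.id_head(embedding),
                "global_attr_logits": self.global_attr_head(global_feat),
                "local_attr_logits": local_logits,
                "embedding": embedding}


if __name__ == "__main__":
    model = PAANSketch()
    out = model(torch.randn(2, 3, 256, 128))
    print(out["id_logits"].shape, out["embedding"].shape)

In such a design, the identity head and global attribute head would be trained with cross-entropy losses while each local attribute head receives its own attribute labels, so the attribute supervision guides the shared trunk toward the invariant global representation the abstract aims for; the exact losses and partition scheme used in the paper are not specified here.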