Multi‐view facial action unit detection via deep feature enhancement

Bibliographic Details
Main Authors: Chuangao Tang, Cheng Lu, Wenming Zheng, Yuan Zong, Sunan Li
Format: article
Language: EN
Published: Wiley, 2021
Subjects:
Online Access: https://doaj.org/article/ced6bbb8b8f949f39138f41bc36cdb20
Description
Summary: Multi-view facial action unit (AU) analysis has been a challenging research topic due to multiple disturbing variables, including subject identity biases, varying facial action unit intensities, facial occlusions, and non-frontal head poses. A deep feature enhancement (DFE) framework is presented to tackle some of these coupled disturbing variables for multi-view facial action unit detection. The authors' DFE framework is a novel end-to-end three-stage feature learning model that takes subject identity biases, dynamic facial changes, and head pose into consideration. It contains three feature enhancement modules: coarse-grained local and holistic spatial feature learning (LHSF), spatio-temporal feature learning (STF), and head-pose feature disentanglement (FD). Experimental results show that the proposed method achieved state-of-the-art recognition performance on the FERA2017 dataset. The code is released at http://aip.seu.edu.cn/cgtang/.
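The three-stage structure described in the abstract can be sketched as a simple composition of the three modules. The stage names (LHSF, STF, FD) come from the abstract; every operation inside them below is a placeholder assumption for illustration, not the authors' implementation.

```python
# Structural sketch of the three-stage DFE pipeline named in the abstract.
# All internal operations are hypothetical stand-ins, not the authors' code.

def lhsf(frames):
    """Coarse-grained local and holistic spatial feature learning (placeholder):
    pool each frame (a flat list of pixel intensities) to one value."""
    return [sum(f) / len(f) for f in frames]

def stf(spatial_feats):
    """Spatio-temporal feature learning (placeholder):
    first-order difference across consecutive frames."""
    return [b - a for a, b in zip(spatial_feats, spatial_feats[1:])]

def fd(temporal_feats, head_pose):
    """Head-pose feature disentanglement (placeholder):
    subtract a scalar head-pose bias from each feature."""
    return [t - head_pose for t in temporal_feats]

def dfe(frames, head_pose):
    """End-to-end composition of the three enhancement modules."""
    return fd(stf(lhsf(frames)), head_pose)

# Toy usage: three 2-pixel "frames" and a scalar head-pose bias.
features = dfe([[0.0, 2.0], [2.0, 4.0], [6.0, 8.0]], head_pose=0.5)
print(features)  # -> [1.5, 3.5]
```

The sketch only conveys the data flow (spatial, then temporal, then pose-disentangled features); in the paper each stage is a learned deep network trained end to end.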