Manifold Feature Fusion with Dynamical Feature Selection for Cross-Subject Emotion Recognition

Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in neurophysiological responses among users of a brain–computer interface make it difficult to design a generic emotion recognizer that is adaptable to a novel individual...

Full description

Saved in:
Bibliographic Details
Main Authors: Yue Hua, Xiaolong Zhong, Bingxue Zhang, Zhong Yin, Jianhua Zhang
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: emotion recognition, electroencephalography, machine learning, feature selection, transfer learning
Online Access: https://doaj.org/article/b94e1a60f2194e8babba768da21e668a
id oai:doaj.org-article:b94e1a60f2194e8babba768da21e668a
record_format dspace
spelling oai:doaj.org-article:b94e1a60f2194e8babba768da21e668a
record updated 2021-11-25T16:56:24Z
doi 10.3390/brainsci11111392
issn 2076-3425
publishDate 2021-10-01
url https://www.mdpi.com/2076-3425/11/11/1392
url https://doaj.org/toc/2076-3425
citation Brain Sciences, Vol 11, Iss 11, p 1392 (2021)
institution DOAJ
collection DOAJ
language EN
topic emotion recognition
electroencephalography
machine learning
feature selection
transfer learning
Neurosciences. Biological psychiatry. Neuropsychiatry
RC321-571
description Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in neurophysiological responses among users of a brain–computer interface make it difficult to design a generic emotion recognizer that is adaptable to a novel individual, which remains an obstacle to cross-subject emotion recognition (ER). To tackle this issue, we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), under the transfer learning principle, to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. Based on three public databases, DEAP, MAHNOB-HCI, and SEED, the performance of MF-DFS is validated under the leave-one-subject-out paradigm with two types of electroencephalography features. With three emotional classes defined for each affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50 and 0.48 (DEAP) and 0.46 and 0.50 (MAHNOB-HCI) for the arousal and valence dimensions, respectively, and 0.40 for the valence dimension of the SEED database. These accuracies are significantly superior to those of several classical feature selection methods across multiple machine learning models.
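The leave-one-subject-out paradigm mentioned in the description trains the recognizer on all subjects except one and tests it on the held-out, novel subject, repeating over every subject. The sketch below illustrates only that evaluation loop; it is not the authors' MF-DFS implementation, and the synthetic EEG features, the SVM classifier, and scikit-learn's LeaveOneGroupOut splitter are assumptions made purely for illustration.

```python
# Minimal sketch of leave-one-subject-out (LOSO) evaluation for cross-subject ER.
# NOT the MF-DFS method itself: data, classifier, and splitter are illustrative.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 40, 160

# X: EEG feature vectors per trial; y: 3-class labels for one affective dimension;
# groups: subject index for each trial (the unit held out at test time).
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 3, size=n_subjects * trials_per_subject)
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

logo = LeaveOneGroupOut()
accs = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Train on all other subjects; test on the held-out (novel) subject.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"Mean cross-subject accuracy: {np.mean(accs):.2f}")
```

In this protocol the mean of the per-subject accuracies is the reported cross-subject score, so the classifier is always evaluated on data from a subject it has never seen during training.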
format article
author Yue Hua
Xiaolong Zhong
Bingxue Zhang
Zhong Yin
Jianhua Zhang
title Manifold Feature Fusion with Dynamical Feature Selection for Cross-Subject Emotion Recognition
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/b94e1a60f2194e8babba768da21e668a