Music video emotion classification using slow–fast audio–video network and unsupervised feature representation

Abstract: Affective computing has long suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is especially complex because of the diverse textual, acoustic, and visual information involved, which can take the form of lyrics, the singer's voice, sounds from different instruments, and visual representations. This may be one reason why study in this domain has been limited and no standard dataset had been produced until now. In this study, we propose an unsupervised method for music video emotion analysis that uses music video content from the Internet. We also produce a labelled dataset and compare supervised and unsupervised methods for emotion classification. The music and video information is processed through a multimodal architecture with audio–video information exchange and a boosting method. General 2D and 3D convolutional networks are compared with a slow–fast network using filter- and channel-separable convolutions within the multimodal architecture. Several supervised and unsupervised networks were trained end-to-end, and the results were evaluated with various metrics. The proposed method applies a large dataset to unsupervised emotion classification and interprets the results quantitatively and qualitatively for music video, which had not been done before. The results show a large gain in classification score from the unsupervised features and the information-sharing technique between the audio and video networks. Our best classifier attained 77% accuracy, an F1-score of 0.77, and an area-under-the-curve score of 0.94 at minimal computational cost.
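The record itself contains no code, but for orientation, the sketch below illustrates the two architectural ideas named in the abstract: a slow–fast two-pathway video branch (a slow pathway sees few frames at high channel width, a fast pathway sees many frames at low channel width) and channel-separable 3D convolution. This is a minimal PyTorch-style sketch under stated assumptions: the class names, channel widths, and the temporal ratio alpha are illustrative choices, not the authors' implementation, and the paper's full model additionally has an audio branch with audio–video information exchange, which is omitted here.

```python
# Minimal, hypothetical sketch of a slow-fast video branch with
# channel-separable (depthwise + pointwise) 3D convolutions.
# All names and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise (channel-separable) 3D conv followed by a 1x1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel=(3, 3, 3)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        # groups=in_ch makes the first conv operate per channel (depthwise)
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=pad, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SlowFastVideoBranch(nn.Module):
    """Slow pathway: few frames, many channels. Fast pathway: all frames, few channels."""
    def __init__(self, alpha=4):
        super().__init__()
        self.alpha = alpha                      # temporal subsampling ratio (assumed)
        self.slow = SeparableConv3d(3, 64)      # high capacity, low frame rate
        self.fast = SeparableConv3d(3, 8)       # low capacity, full frame rate
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, frames):                  # frames: (B, 3, T, H, W)
        slow_in = frames[:, :, ::self.alpha]    # keep every alpha-th frame
        s = self.pool(self.slow(slow_in)).flatten(1)
        f = self.pool(self.fast(frames)).flatten(1)
        return torch.cat([s, f], dim=1)         # fused slow+fast video feature

video = torch.randn(2, 3, 32, 112, 112)  # 2 clips, 32 frames of 112x112 RGB
feat = SlowFastVideoBranch()(video)      # -> shape (2, 72)
```

The separable convolution is the source of the "minimal computational cost" claim: splitting a dense 3D convolution into a depthwise and a pointwise step sharply reduces parameters and multiply-adds relative to a standard Conv3d of the same receptive field.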


Bibliographic Details
Main Authors: Yagya Raj Pandeya, Bhuwan Bhattarai, Joonwhoan Lee
Format: article
Language: EN
Published: Nature Portfolio 2021
Subjects:
R (Medicine)
Q (Science)
Online Access: https://doaj.org/article/36d1baaf4f5f432b9e390dd08bb4c628
id oai:doaj.org-article:36d1baaf4f5f432b9e390dd08bb4c628
record_format dspace
spelling oai:doaj.org-article:36d1baaf4f5f432b9e390dd08bb4c628 2021-12-02T17:13:17Z
Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
DOI: 10.1038/s41598-021-98856-2 | ISSN: 2045-2322
https://doaj.org/article/36d1baaf4f5f432b9e390dd08bb4c628
Published online: 2021-10-01T00:00:00Z
https://doi.org/10.1038/s41598-021-98856-2
https://doaj.org/toc/2045-2322
Authors: Yagya Raj Pandeya; Bhuwan Bhattarai; Joonwhoan Lee
Publisher: Nature Portfolio
Format: article | Topics: Medicine (R); Science (Q) | Language: EN
Scientific Reports, Vol 11, Iss 1, Pp 1-14 (2021)
institution DOAJ
collection DOAJ
language EN
topic Medicine
R
Science
Q
spellingShingle Medicine
R
Science
Q
Yagya Raj Pandeya
Bhuwan Bhattarai
Joonwhoan Lee
Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
description Abstract: Affective computing has long suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is especially complex because of the diverse textual, acoustic, and visual information involved, which can take the form of lyrics, the singer's voice, sounds from different instruments, and visual representations. This may be one reason why study in this domain has been limited and no standard dataset had been produced until now. In this study, we propose an unsupervised method for music video emotion analysis that uses music video content from the Internet. We also produce a labelled dataset and compare supervised and unsupervised methods for emotion classification. The music and video information is processed through a multimodal architecture with audio–video information exchange and a boosting method. General 2D and 3D convolutional networks are compared with a slow–fast network using filter- and channel-separable convolutions within the multimodal architecture. Several supervised and unsupervised networks were trained end-to-end, and the results were evaluated with various metrics. The proposed method applies a large dataset to unsupervised emotion classification and interprets the results quantitatively and qualitatively for music video, which had not been done before. The results show a large gain in classification score from the unsupervised features and the information-sharing technique between the audio and video networks. Our best classifier attained 77% accuracy, an F1-score of 0.77, and an area-under-the-curve score of 0.94 at minimal computational cost.
format article
author Yagya Raj Pandeya
Bhuwan Bhattarai
Joonwhoan Lee
author_facet Yagya Raj Pandeya
Bhuwan Bhattarai
Joonwhoan Lee
author_sort Yagya Raj Pandeya
title Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_short Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_full Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_fullStr Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_full_unstemmed Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_sort music video emotion classification using slow–fast audio–video network and unsupervised feature representation
publisher Nature Portfolio
publishDate 2021
url https://doaj.org/article/36d1baaf4f5f432b9e390dd08bb4c628
work_keys_str_mv AT yagyarajpandeya musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation
AT bhuwanbhattarai musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation
AT joonwhoanlee musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation
_version_ 1718381370579877888