Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
Abstract: Affective computing has suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is complex due to the diverse textual, acoustic, and visual information, which can take the form of lyrics, the singer's voice, sounds from the different instrum...
Saved in:
Main Authors: Yagya Raj Pandeya, Bhuwan Bhattarai, Joonwhoan Lee
Format: article
Language: EN
Published: Nature Portfolio, 2021
Online Access: https://doaj.org/article/36d1baaf4f5f432b9e390dd08bb4c628
Similar Items
- Video-Conferencing with Audio Software
  by: Jon Baggaley, et al.
  Published: (2006)
- Engagement in video and audio narratives: contrasting self-report and physiological measures
  by: Daniel C. Richardson, et al.
  Published: (2020)
- Research on Music Emotion Intelligent Recognition and Classification Algorithm in Music Performance System
  by: Chun Huang, et al.
  Published: (2021)
- Audio-tactile integration and the influence of musical training.
  by: Anja Kuchenbuch, et al.
  Published: (2014)
- Unsupervised clustering and epigenetic classification of single cells
  by: Mahdi Zamanighomi, et al.
  Published: (2018)