Enhancing semantics with multi‐objective reinforcement learning for video description



Bibliographic Details
Main Authors: Qinyu Li, Longyu Yang, Pengjie Tang, Hanli Wang
Format: article
Language: EN
Published: Wiley 2021
Online Access: https://doaj.org/article/1f50686212ac4f60b204af786657d346
Description
Summary: Video description is challenging due to the high complexity of translating visual content into language. In most popular attention‐based pipelines for this task, visual features and previously generated words are usually concatenated into a vector to predict the current word. However, errors caused by inaccurately predicted words may accumulate, and the gap between visual features and language features may introduce noise into the description model. To address these problems, a variant of the recurrent neural network is designed in this work, and a novel framework is developed to enhance the visual clues for video description. Moreover, a multi‐objective reinforcement learning strategy is implemented to build a more comprehensive reward from multiple metrics, improving the consistency and semantics of the generated description sentences. Experiments on the benchmark MSR‐VTT2016 and MSVD datasets demonstrate the effectiveness of the proposed approach.
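The multi‐objective reward described in the abstract can be sketched as a weighted combination of caption metrics used with a self‐critical baseline. This is a minimal illustrative sketch, not the paper's exact formulation: the metric names (CIDEr, BLEU‐4) and weights are assumptions, and real systems would compute the scores with a caption evaluation toolkit.

```python
def combined_reward(scores, weights):
    """Weighted sum of per-metric caption scores.

    scores  -- dict mapping metric name to its score for one caption
    weights -- dict mapping metric name to its weight in the reward
    """
    return sum(weights[m] * scores[m] for m in weights)


def scst_advantage(sampled_scores, greedy_scores, weights):
    """Self-critical advantage: reward of a sampled caption minus the
    reward of the greedily decoded caption (the baseline). A positive
    value reinforces the sampled caption in policy-gradient training."""
    return combined_reward(sampled_scores, weights) - combined_reward(greedy_scores, weights)


# Illustrative values only (not from the paper):
weights = {"CIDEr": 1.0, "BLEU4": 0.5}
sampled = {"CIDEr": 0.8, "BLEU4": 0.4}   # scores of a sampled caption
greedy = {"CIDEr": 0.6, "BLEU4": 0.4}    # scores of the greedy baseline
advantage = scst_advantage(sampled, greedy, weights)
```

Combining several metrics into one reward is what distinguishes a multi‐objective setup from standard self‐critical training, which typically optimizes a single metric such as CIDEr.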