AUBER: Automated BERT regularization.

How can we effectively regularize BERT? Although BERT has proven effective in various NLP tasks, it often overfits when only a small number of training instances are available. A promising direction for regularizing BERT is to prune its attention heads using a proxy score for head importance. Ho...
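The head-pruning direction mentioned above can be sketched minimally as follows. This is an illustrative assumption, not AUBER's actual method: the proxy score here (mean absolute attention weight per head) and all function names are hypothetical, chosen only to show the general shape of proxy-scored head pruning.

```python
import numpy as np

def head_importance(attn_weights):
    """Proxy importance score per head (an assumed proxy, not AUBER's).
    attn_weights: (heads, seq, seq) attention matrices for one layer."""
    return np.abs(attn_weights).mean(axis=(1, 2))

def prune_heads(head_outputs, attn_weights, keep_ratio=0.5):
    """Zero out the outputs of the least-important heads.
    head_outputs: (heads, seq, dim)."""
    scores = head_importance(attn_weights)
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-n_keep:]  # indices of the top-scoring heads
    mask = np.zeros(len(scores))
    mask[keep] = 1.0
    # broadcast the per-head mask over (seq, dim)
    return head_outputs * mask[:, None, None]

rng = np.random.default_rng(0)
attn = rng.random((4, 3, 3))   # 4 heads, sequence length 3
outs = rng.random((4, 3, 8))   # per-head outputs, hidden size 8
pruned = prune_heads(outs, attn, keep_ratio=0.5)
```

With `keep_ratio=0.5`, exactly two of the four heads keep nonzero outputs; the rest are masked to zero, mimicking the effect of removing them.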


Bibliographic Details
Main Authors: Hyun Dong Lee, Seongmin Lee, U Kang
Format: article
Language: EN
Published: Public Library of Science (PLoS) 2021
Subjects:
R
Q
Online Access: https://doaj.org/article/2ef6b30e26174d40a39937b0fed7747f