AUBER: Automated BERT regularization.

How can we effectively regularize BERT? Although BERT proves its effectiveness in various NLP tasks, it often overfits when there are only a small number of training instances. A promising direction for regularizing BERT is to prune its attention heads using a proxy score for head importance. However, such methods are usually suboptimal: they resort to an arbitrarily determined number of attention heads to prune and do not directly aim at improving performance. To overcome this limitation, we propose AUBER, an automated BERT regularization method that leverages reinforcement learning to automatically prune the appropriate attention heads from BERT. We also minimize the model complexity and the action search space by proposing a low-dimensional state representation and a dually-greedy approach for training. Experimental results show that AUBER outperforms existing pruning methods, achieving up to 9.58% better performance. In addition, an ablation study demonstrates the effectiveness of AUBER's design choices.

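The abstract's key idea is to let measured task performance, rather than a proxy importance score, decide which attention heads to prune and how many. As a rough illustration only (not AUBER's actual implementation, which trains a reinforcement-learning agent with a low-dimensional state representation and dually-greedy training), the sketch below greedily masks BERT attention heads whenever doing so improves validation accuracy. The `evaluate` helper and `val_loader` are hypothetical placeholders; the `head_mask` forward argument is part of the Hugging Face BERT API.

# Illustrative sketch only -- a plain greedy search standing in for
# AUBER's reinforcement-learning agent. `val_loader` (a DataLoader of
# dicts with input_ids, attention_mask, labels) and `evaluate` are
# hypothetical placeholders introduced for this example.
import torch
from transformers import BertForSequenceClassification

def evaluate(model, val_loader, head_mask):
    """Hypothetical helper: validation accuracy under a given head mask."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for batch in val_loader:
            logits = model(input_ids=batch["input_ids"],
                           attention_mask=batch["attention_mask"],
                           head_mask=head_mask).logits
            correct += (logits.argmax(-1) == batch["labels"]).sum().item()
            total += batch["labels"].numel()
    return correct / total

def greedy_head_pruning(model, val_loader):
    """Mask heads layer by layer, keeping a removal only if accuracy improves.

    Measured task performance, not a proxy importance score, decides which
    heads to drop and how many -- the motivation the abstract highlights.
    """
    n_layers = model.config.num_hidden_layers
    n_heads = model.config.num_attention_heads
    mask = torch.ones(n_layers, n_heads)   # 1 = keep head, 0 = prune head
    best = evaluate(model, val_loader, mask)
    for layer in range(n_layers):
        for head in range(n_heads):
            if mask[layer].sum() <= 1:     # always keep at least one head
                break
            mask[layer, head] = 0          # tentatively prune this head
            score = evaluate(model, val_loader, mask)
            if score > best:
                best = score               # pruning helped: keep it
            else:
                mask[layer, head] = 1      # pruning hurt: restore the head
    return mask, best

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
# mask, acc = greedy_head_pruning(model, val_loader)  # val_loader: user-supplied

AUBER replaces this fixed head-by-head sweep with a learned pruning policy; the low-dimensional state representation and dually-greedy training mentioned in the abstract are what keep that policy's search space tractable.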

Bibliographic Details
Main Authors: Hyun Dong Lee, Seongmin Lee, U Kang
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021
Published in: PLoS ONE, Vol 16, Iss 6, p e0253241 (2021)
ISSN: 1932-6203
DOI: https://doi.org/10.1371/journal.pone.0253241
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/2ef6b30e26174d40a39937b0fed7747f