The Eighty Five Percent Rule for optimal learning

Is there an optimal difficulty level for training? In this paper, the authors show that for the widely used class of stochastic gradient-descent-based learning algorithms, learning is fastest when accuracy during training is held at roughly 85%, corresponding to an error rate of about 15.87%.
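
As a rough illustration of where the 85% figure comes from, the sketch below works through the optimization numerically under the paper's Gaussian-noise setup (the notation and code are ours, not the authors'): fixing the training error rate e pins the task difficulty so that the decision variable sits z = Φ⁻¹(1 − e) noise standard deviations from threshold, and the gradient-descent learning speed is then proportional to z·φ(z), where φ is the standard normal density.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch, assuming the Gaussian-noise model summarized above:
# with the training error rate fixed at e, learning speed is
# proportional to f(e) = z * phi(z), where z = Phi^{-1}(1 - e).
err = np.linspace(1e-3, 0.499, 10_000)  # candidate training error rates
z = norm.ppf(1.0 - err)                 # decision-variable distance from threshold
speed = z * norm.pdf(z)                 # relative learning speed at each error rate

best = err[np.argmax(speed)]
print(f"optimal error rate ~ {best:.4f}")        # ~0.1587 = Phi(-1)
print(f"optimal accuracy   ~ {1.0 - best:.4f}")  # ~0.8413, i.e. about 85%
```

The maximum of z·φ(z) falls at z = 1, one noise standard deviation from threshold, which is where the error rate Φ(−1) ≈ 15.87% and hence the "85%" of the title come from.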

Bibliographic Details
Main Authors: Robert C. Wilson, Amitai Shenhav, Mark Straccia, Jonathan D. Cohen
Format: Article
Language: English
Published: Nature Portfolio, 2019
Subjects: Q
Online Access: https://doaj.org/article/d1aeacdf1f304a25962f05454cebb437