A Zeroth-Order Adaptive Learning Rate Method to Reduce Cost of Hyperparameter Tuning for Deep Learning
Due to its powerful data representation ability, deep learning has dramatically improved the state of the art in many practical applications. However, its utility depends heavily on the fine-tuning of hyper-parameters, including the learning rate, batch size, and network initialization. Although many first-order...
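The truncated abstract names the paper's central idea, a zeroth-order (derivative-free) scheme for adapting the learning rate, without describing the algorithm itself. As loose background only, and not the authors' method, the sketch below adapts the learning rate of a toy gradient-descent loop with a two-point finite-difference estimate of d(loss)/d(learning rate); every name in it (`loss`, `sgd_step`, `mu`, `meta_lr`) is a hypothetical placeholder.

```python
# Hypothetical illustration of the generic zeroth-order idea: estimate the
# hyper-gradient d(loss)/d(lr) from loss evaluations alone, with no analytic
# derivative in the learning rate. This is NOT the algorithm from the paper.

def loss(w):
    # Toy quadratic objective with its minimum at w = 3.
    return (w - 3.0) ** 2

def sgd_step(w, lr):
    # Plain gradient step on the toy objective. Only the learning-rate
    # adaptation below is zeroth-order; the model gradient is analytic here.
    grad = 2.0 * (w - 3.0)
    return w - lr * grad

w, lr = 0.0, 0.05
mu, meta_lr = 1e-3, 0.01  # perturbation size and meta learning rate (assumed)

for step in range(200):
    # Two-point finite difference: probe the post-step loss at lr +/- mu.
    f_plus = loss(sgd_step(w, lr + mu))
    f_minus = loss(sgd_step(w, lr - mu))
    d_loss_d_lr = (f_plus - f_minus) / (2.0 * mu)
    # Move the learning rate downhill along the estimated hyper-gradient,
    # keeping it positive, then take the actual optimization step.
    lr = max(lr - meta_lr * d_loss_d_lr, 1e-6)
    w = sgd_step(w, lr)

print(f"final w = {w:.4f}, final lr = {lr:.4f}")
```

On the toy quadratic this drives `w` toward 3 while the learning rate settles near its one-step-optimal value; the actual method in the article may differ substantially, and only the full text at the link below is authoritative.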
Main Authors: Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang
Format: Article
Language: English
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/fea104c85f094c74b56e338e92eeb8ae
Similar Items
- Self-Tuning Lam Annealing: Learning Hyperparameters While Problem Solving
  by: Vincent A. Cicirello
  Published: (2021)
- A post COVID Machine Learning approach in Teaching and Learning methodology to alleviate drawbacks of the e-whiteboards
  by: Sudan Jha, et al.
  Published: (2021)
- Applications of Multi-Agent Deep Reinforcement Learning: Models and Algorithms
  by: Abdikarim Mohamed Ibrahim, et al.
  Published: (2021)
- An Experimental Study on State Representation Extraction for Vision-Based Deep Reinforcement Learning
  by: Junkai Ren, et al.
  Published: (2021)
- Multiclass Skin Cancer Classification Using Ensemble of Fine-Tuned Deep Learning Models
  by: Nabeela Kausar, et al.
  Published: (2021)