A Zeroth-Order Adaptive Learning Rate Method to Reduce Cost of Hyperparameter Tuning for Deep Learning
Owing to its powerful data representation ability, deep learning has dramatically improved the state of the art in many practical applications. However, its utility depends heavily on the fine-tuning of hyperparameters, including the learning rate, batch size, and network initialization. Although many first-order...
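The abstract only names the approach, so as a rough illustration of what "zeroth-order" learning-rate adaptation means in general, the sketch below probes a toy loss with two perturbed learning rates and adjusts the rate from function values alone, without any hyper-gradient. This is a minimal sketch under assumed details (a synthetic quadratic objective, a multiplicative probe factor of 1.2, and all variable names), not the method proposed in the article.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's algorithm: adapt the learning
# rate using only loss evaluations (zeroth-order information).

A = np.diag([1.0, 10.0])      # ill-conditioned quadratic toy loss

def loss(w):
    return 0.5 * w @ A @ w

w = np.array([5.0, 5.0])
lr = 0.01                     # learning rate to be adapted online

for t in range(100):
    g = A @ w                 # exact gradient of the toy loss
    # Two probes: one SGD step with a slightly larger / smaller rate.
    f_up = loss(w - (lr * 1.2) * g)
    f_down = loss(w - (lr / 1.2) * g)
    # Zeroth-order decision: move the rate toward the cheaper probe.
    lr = lr * 1.2 if f_up < f_down else lr / 1.2
    w = w - lr * g            # plain SGD step with the adapted rate

print(f"final loss {loss(w):.3e}, adapted learning rate {lr:.4f}")
```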
Main Authors: Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang
Format: Article
Language: English
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/fea104c85f094c74b56e338e92eeb8ae
Similar Items
- Self-Tuning Lam Annealing: Learning Hyperparameters While Problem Solving
  by: Vincent A. Cicirello
  Published: (2021)
- A post COVID Machine Learning approach in Teaching and Learning methodology to alleviate drawbacks of the e-whiteboards
  by: Sudan Jha, et al.
  Published: (2021)
- Applications of Multi-Agent Deep Reinforcement Learning: Models and Algorithms
  by: Abdikarim Mohamed Ibrahim, et al.
  Published: (2021)
- An Experimental Study on State Representation Extraction for Vision-Based Deep Reinforcement Learning
  by: Junkai Ren, et al.
  Published: (2021)
- Multiclass Skin Cancer Classification Using Ensemble of Fine-Tuned Deep Learning Models
  by: Nabeela Kausar, et al.
  Published: (2021)