A Zeroth-Order Adaptive Learning Rate Method to Reduce Cost of Hyperparameter Tuning for Deep Learning
Due to its powerful data representation ability, deep learning has dramatically improved the state of the art in many practical applications. However, its utility depends heavily on the fine-tuning of hyperparameters, including the learning rate, batch size, and network initialization. Although many first-order...
| Main Authors: | Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2021 |
| Subjects: | |
| Online Access: | https://doaj.org/article/fea104c85f094c74b56e338e92eeb8ae |
Similar Items
- Self-Tuning Lam Annealing: Learning Hyperparameters While Problem Solving
  by: Vincent A. Cicirello
  Published: (2021)
- A post COVID Machine Learning approach in Teaching and Learning methodology to alleviate drawbacks of the e-whiteboards
  by: Sudan Jha, et al.
  Published: (2021)
- Applications of Multi-Agent Deep Reinforcement Learning: Models and Algorithms
  by: Abdikarim Mohamed Ibrahim, et al.
  Published: (2021)
- An Experimental Study on State Representation Extraction for Vision-Based Deep Reinforcement Learning
  by: Junkai Ren, et al.
  Published: (2021)
- Multiclass Skin Cancer Classification Using Ensemble of Fine-Tuned Deep Learning Models
  by: Nabeela Kausar, et al.
  Published: (2021)