A Zeroth-Order Adaptive Learning Rate Method to Reduce Cost of Hyperparameter Tuning for Deep Learning

Due to its powerful data representation ability, deep learning has dramatically improved the state of the art in many practical applications. However, its utility depends heavily on the fine-tuning of hyper-parameters, including the learning rate, batch size, and network initialization. Although many first-order...


Bibliographic Details
Main Authors: Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang
Format: article
Language: English
Published: MDPI AG 2021
Subjects: T (Technology)
Online Access: https://doaj.org/article/fea104c85f094c74b56e338e92eeb8ae