AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization

Bibliographic Details
Main Authors: Yan Liu, Maojun Zhang, Zhiwei Zhong, Xiangrong Zeng
Format: article
Language: EN
Published: Hindawi Limited 2021
Online Access:https://doaj.org/article/09154e3ff5c64b9a8f24e212323ae4d8
Description
Summary: In this work, we introduce AdaCN, a novel adaptive cubic Newton method for nonconvex stochastic optimization. AdaCN dynamically captures the curvature of the loss landscape with a diagonally approximated Hessian plus the norm of the difference between the previous two estimates. It requires only first-order gradients, and its update has linear complexity in both time and memory. To reduce the variance introduced by the stochastic nature of the problem, AdaCN uses first and second moments to maintain exponential moving averages of the iteratively updated stochastic gradients and approximated stochastic Hessians, respectively. We validate AdaCN in extensive experiments, showing that it outperforms stochastic first-order methods (including SGD, Adam, and AdaBound) and a stochastic quasi-Newton method (i.e., Apollo) in terms of both convergence speed and generalization performance.
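
The update described in the abstract admits a compact per-coordinate form. The sketch below is a minimal illustration of a diagonal cubic-regularized Newton loop in that spirit, not the authors' exact AdaCN: the secant-style Hessian estimate, the coefficients beta1 and beta2, the base penalty sigma0, and the learning rate are all assumptions. Only the overall structure follows the abstract: first-order information only, exponential moving averages of both gradients and diagonal Hessians, a cubic penalty driven by the norm of the difference between the previous two Hessian estimates, and an elementwise update that is linear in time and memory.

import numpy as np

def adacn_style_loop(grad_fn, x0, steps=300, lr=0.5, beta1=0.9,
                     beta2=0.99, sigma0=1.0, eps=1e-8):
    """Illustrative diagonal cubic-regularized Newton loop (a sketch,
    not the paper's exact algorithm; hyperparameters are assumptions)."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)        # EMA of stochastic gradients (first moment)
    d = np.zeros_like(x)        # EMA of diagonal Hessian estimates (second moment)
    d_prev = np.zeros_like(x)
    g_prev, x_prev = None, None
    for _ in range(steps):
        g = grad_fn(x)
        m = beta1 * m + (1.0 - beta1) * g
        if g_prev is not None:
            # Secant-style diagonal curvature from first-order information only:
            # (g_t - g_{t-1}) / (x_t - x_{t-1})
            h = (g - g_prev) / (x - x_prev + eps)
            d_prev = d.copy()
            d = beta2 * d + (1.0 - beta2) * h
        # Cubic penalty inflated by the change between the last two Hessian estimates
        sigma = sigma0 + np.linalg.norm(d - d_prev)
        # Per coordinate, minimize  m*s + 0.5*|d|*s^2 + (sigma/3)*|s|^3  in closed
        # form; coordinates are independent, so the update is linear in dim(x)
        a = np.abs(d)
        t = (-a + np.sqrt(a * a + 4.0 * sigma * np.abs(m))) / (2.0 * sigma)
        x_prev, g_prev = x.copy(), g.copy()
        x = x - lr * np.sign(m) * t
    return x

# Example: minimize the nonconvex function f(x) = sum((x_i^2 - 1)^2)
grad = lambda x: 4.0 * x**3 - 4.0 * x
print(adacn_style_loop(grad, x0=[2.0, -0.3]))  # should settle near +/-1

The closed-form inner step comes from the stationarity condition of the per-coordinate cubic model, sigma*t^2 + |d|*t - |m| = 0, whose positive root keeps the step well defined even where the curvature estimate is small or negative.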