Nonsmooth spectral gradient methods for unconstrained optimization

To solve nonsmooth unconstrained minimization problems, we combine the spectral choice of step length with two well-established subdifferential-type schemes: the gradient sampling method and the simplex gradient method. We focus on the interesting case in which the objective function is continuously differentiable almost everywhere, and it is often not differentiable at minimizers. In the case of the gradient sampling method, we also present a simple differentiability test that allows us to use the exact gradient direction as frequently as possible, and to build a stochastic subdifferential direction only if the test fails. The proposed spectral gradient sampling method is combined with a monotone line search globalization strategy. On the other hand, the simplex gradient method is a direct search method that only requires function evaluations to build an approximation to the gradient direction. In this case, the proposed spectral simplex gradient method is combined with a suitable nonmonotone line search strategy. For both scenarios, we discuss convergence properties and present numerical results on a set of nonsmooth test functions. These numerical results indicate that using a spectral step length can improve the practical performance of both methods.
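The "spectral choice of step length" in the abstract is commonly the Barzilai-Borwein step, which uses only the last pair of iterates and gradients. A minimal sketch of that choice, assuming the standard BB1 formula; the function names and the quadratic example below are illustrative, not from the paper:

```python
import numpy as np

def bb_spectral_step(x_prev, x_curr, g_prev, g_curr, eps=1e-10):
    """Barzilai-Borwein (spectral) step length:
    alpha = (s^T s) / (s^T y), with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    s = x_curr - x_prev
    y = g_curr - g_prev
    sy = s @ y
    if abs(sy) < eps:   # safeguard against division by (near) zero curvature
        return 1.0      # fall back to a unit step (an assumed safeguard)
    return (s @ s) / sy

# Illustrative run on f(x) = 0.5 * x^T x, whose gradient equals x;
# for this quadratic the spectral step recovers the exact step length 1.
x_prev = np.array([3.0, -1.0])
x_curr = np.array([1.5, -0.5])
alpha = bb_spectral_step(x_prev, x_curr, x_prev, x_curr)
```

In smooth spectral gradient methods this alpha scales the negative gradient direction; the paper studies how the same choice behaves when the direction comes from gradient sampling or simplex gradients instead.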


Bibliographic Details
Main Authors: Milagros Loreto, Hugo Aponte, Debora Cores, Marcos Raydan
Format: Article
Language: English
Published: Elsevier, 2017
Online Access: https://doaj.org/article/09769c1ca2fc49daa0950264b364bf55
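The simplex gradient mentioned in the abstract approximates the gradient from function values alone, by solving a small linear system over a simplex of sample points. A minimal sketch, assuming the coordinate simplex {x0, x0 + h e_1, ..., x0 + h e_n} (for which the system reduces to forward differences); the helper name and test function are illustrative, not from the paper:

```python
import numpy as np

def simplex_gradient(f, x0, h=1e-6):
    """Simplex gradient at x0 over the coordinate simplex:
    solves S^T g = df, where column i of S is x_i - x0 = h * e_i
    and df_i = f(x_i) - f(x0). Here this reduces to forward differences."""
    n = x0.size
    S = h * np.eye(n)                                    # simplex edge directions
    df = np.array([f(x0 + S[:, i]) - f(x0) for i in range(n)])
    return np.linalg.solve(S.T, df)

# Example on a nonsmooth function, f(x) = |x1| + x2^2, evaluated at a
# point where f happens to be differentiable, so the true gradient is (1, 4).
f = lambda x: abs(x[0]) + x[1] ** 2
g = simplex_gradient(f, np.array([1.0, 2.0]))
```

Because only function evaluations are needed, this direction can be used as a derivative-free surrogate for the gradient, which is the setting in which the paper combines it with the spectral step and a nonmonotone line search.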
Record ID: oai:doaj.org-article:09769c1ca2fc49daa0950264b364bf55
Indexed in: DOAJ
Journal: EURO Journal on Computational Optimization, Vol 5, Iss 4, pp. 529-553 (2017)
Publication date: 2017-12-01
ISSN: 2192-4406
DOI: 10.1007/s13675-017-0080-8
Full text: http://www.sciencedirect.com/science/article/pii/S2192440621000927
Subjects: 49M05; 90C56; 90C26; 65K10; Applied mathematics. Quantitative methods (T57-57.97); Electronic computers. Computer science (QA75.5-76.95)