Learning to steer nonlinear interior-point methods


Bibliographic Details
Main Author: Renke Kuhlmann
Format: article
Language: EN
Published: Elsevier 2019
Online Access: https://doaj.org/article/9c6ab9943b5b419ab8777f61af05c91c
Description
Summary: Interior-point or barrier methods handle nonlinear programs by sequentially solving barrier subprograms with a decreasing sequence of barrier parameters. The specific barrier update rule strongly influences both the theoretical convergence properties and the practical efficiency. While many global and local convergence analyses consider a monotone update that decreases the barrier parameter after every approximately solved subprogram, computational studies show superior performance for more adaptive strategies. In this paper we interpret the adaptive barrier update as a reinforcement learning task. A deep Q-learning agent is trained by both imitation and random action selection. Numerical results based on an implementation within the nonlinear programming solver WORHP show that the agent successfully learns to steer the barrier parameter and additionally improves WORHP's performance on the CUTEst test set.
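The idea of the abstract can be illustrated with a minimal, self-contained sketch. Everything here is an illustrative assumption, not the paper's implementation: the paper trains a deep Q-network inside WORHP, whereas this toy uses tabular epsilon-greedy Q-learning on a one-dimensional problem with a closed-form barrier subproblem solution, and the candidate reduction factors in `SIGMAS` are made up. The agent's action is the choice of barrier reduction factor; each outer iteration costs a reward of -1, so the agent is pushed toward updates that reach the target barrier parameter in few subproblem solves.

```python
import math
import random
from collections import defaultdict

SIGMAS = [0.1, 0.2, 0.5]  # hypothetical candidate barrier reduction factors


def barrier_minimizer(mu):
    # Closed-form minimizer of the toy barrier subproblem
    #   min_x  x**2 - mu * log(x - 1),   x > 1
    # Stationarity: 2x - mu/(x - 1) = 0  =>  x = (1 + sqrt(1 + 2*mu)) / 2.
    # As mu -> 0 this tends to the constrained minimizer x = 1.
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0


def state(mu):
    # Coarse state: order of magnitude of the barrier parameter.
    return max(-9, min(0, int(math.floor(math.log10(mu)))))


def run_episode(Q, eps, alpha=0.5, gamma=0.99, mu0=1.0, tol=1e-8):
    # One outer interior-point run: repeatedly shrink mu (action chosen
    # epsilon-greedily) and "solve" the resulting barrier subproblem.
    mu, steps, x = mu0, 0, barrier_minimizer(mu0)
    while mu > tol:
        s = state(mu)
        if random.random() < eps:
            a = random.randrange(len(SIGMAS))          # exploration
        else:
            a = max(range(len(SIGMAS)), key=lambda i: Q[(s, i)])
        mu *= SIGMAS[a]                                 # barrier update
        x = barrier_minimizer(mu)                       # subproblem solve
        steps += 1
        r = -1.0                                        # cost per outer iteration
        if mu <= tol:
            target = r                                  # terminal transition
        else:
            s2 = state(mu)
            target = r + gamma * max(Q[(s2, i)] for i in range(len(SIGMAS)))
        Q[(s, a)] += alpha * (target - Q[(s, a)])       # Q-learning update
    return steps, x


random.seed(0)
Q = defaultdict(float)
for ep in range(200):                                   # training with decaying exploration
    run_episode(Q, eps=max(0.05, 1.0 - ep / 100))
steps, x = run_episode(Q, eps=0.0)                      # greedy rollout after training
```

In this toy the aggressive factor 0.1 is always best, so the learned greedy policy needs only a handful of outer iterations; the interesting case in the paper is precisely when no single fixed factor dominates and the update must adapt to the state of the solve.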