Optimizing a polynomial function on a quantum processor


Bibliographic Details
Main Authors: Keren Li, Shijie Wei, Pan Gao, Feihao Zhang, Zengrong Zhou, Tao Xin, Xiaoting Wang, Patrick Rebentrost, Guilu Long
Format: Article
Language: English
Published: Nature Portfolio, 2021
Online Access: https://doaj.org/article/ee291d3fa219422aa4c45b7a814186f3
Description
Summary: The gradient descent method is central to numerical optimization and is the key ingredient in many machine learning algorithms. It promises to find a local minimum of a function by iteratively moving along the direction of steepest descent. Since the required computational resources can become prohibitive for high-dimensional problems, it is desirable to investigate quantum versions of gradient descent, such as the one recently proposed by Rebentrost et al.1. Here, we develop this protocol and implement it on a quantum processor with limited resources. A prototypical experiment is shown with a four-qubit nuclear magnetic resonance quantum processor, which demonstrates the iterative optimization process. Experimentally, the final point converged to the local minimum with a fidelity >94%, quantified via full-state tomography. Moreover, our method can be applied to a multidimensional scaling problem, showing the potential to outperform its classical counterparts. Considering the ongoing efforts in quantum information and data science, our work may provide a faster approach to solving high-dimensional optimization problems and a subroutine for future practical quantum computers.
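For readers unfamiliar with the classical routine the abstract refers to, the following minimal Python sketch illustrates the iterative steepest-descent update on a small polynomial cost function. The specific polynomial, learning rate, and step count are illustrative assumptions only and are not taken from the paper, whose contribution is to carry out the analogous update on a quantum processor.

# Hypothetical classical illustration of gradient descent on a polynomial
# cost function f(x, y) = x**2 + 2*y**2 + x*y (not the paper's protocol).
import numpy as np

def f(v):
    x, y = v
    return x**2 + 2*y**2 + x*y

def grad_f(v):
    # Analytic gradient of f: (2x + y, 4y + x)
    x, y = v
    return np.array([2*x + y, 4*y + x])

def gradient_descent(v0, eta=0.1, steps=50):
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        # Move along the direction of steepest descent.
        v = v - eta * grad_f(v)
    return v

print(gradient_descent([1.0, -1.0]))  # converges toward the minimum at (0, 0)

Each iteration costs time proportional to the problem dimension in the classical setting; the quantum protocol implemented in the paper aims to reduce this dependence for high-dimensional problems.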