Entanglement-Structured LSTM Boosts Chaotic Time Series Forecasting

Traditional machine-learning methods are inefficient at capturing chaos in nonlinear dynamical systems, especially when the time difference Δt between consecutive steps is so large that the extracted time series appears random. Here, we introduce a new long short-term memory (LSTM)-based recurrent architecture that tensorizes the cell-state-to-state propagation, preserving the long-term memory of LSTM while enhancing the learning of short-term nonlinear complexity. We stress that the global minima of training can be reached most efficiently by our tensor structure, in which all nonlinear terms, up to some polynomial order, are treated explicitly and weighted equally. The efficiency and generality of the architecture are systematically investigated through theoretical analysis and experiment. In our design, we explicitly use two different many-body entanglement structures, matrix product states (MPS) and the multiscale entanglement renormalization ansatz (MERA), as physics-inspired tensor decomposition techniques. We find that MERA generally performs better than MPS, which leads us to conjecture that the learnability of chaos is determined not only by the number of free parameters but also by the tensor complexity, understood as how the entanglement entropy scales under different matricizations of the tensor.
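
As a rough illustration of the tensorization idea described in the abstract, the sketch below shows an MPS (tensor-train) factorized polynomial map contracted with repeated copies of a feature vector, so that every monomial up to a chosen order appears explicitly. This is a minimal sketch under stated assumptions, not the authors' implementation: the helper names (mps_poly_map, make_cores), the shapes, and the placement inside a stripped-down, fixed-gate LSTM update are illustrative choices only.

import numpy as np

def mps_poly_map(cores, x):
    # Contract an MPS/tensor-train weight tensor with repeated copies of x.
    # cores[k] has shape (r_k, d, r_{k+1}); r_0 is the output dimension and
    # the last bond dimension is 1.  Because x carries a constant 1 entry,
    # the result contains every monomial in x up to degree len(cores).
    v = np.einsum('odr,d->or', cores[0], x)           # (out_dim, r_1)
    for core in cores[1:]:
        v = v @ np.einsum('rds,d->rs', core, x)       # (out_dim, r_{k+1})
    return v[:, 0]                                    # final bond dimension is 1

def make_cores(out_dim, feat_dim, order, rank, rng):
    # Random tensor-train cores for a degree-`order` polynomial map.
    ranks = [out_dim] + [rank] * (order - 1) + [1]
    scale = 1.0 / np.sqrt(feat_dim * rank)
    return [scale * rng.standard_normal((ranks[k], feat_dim, ranks[k + 1]))
            for k in range(order)]

# Toy, fixed-gate LSTM-style step (gates omitted; for illustration only).
rng = np.random.default_rng(0)
hidden, inputs = 8, 4
c = np.zeros(hidden)                                  # cell state
h = np.zeros(hidden)                                  # hidden state
x_t = rng.standard_normal(inputs)                     # one input step

feat = np.concatenate(([1.0], h, x_t))                # constant 1 => all terms up to degree `order`
cores = make_cores(hidden, feat.size, order=3, rank=4, rng=rng)

candidate = np.tanh(mps_poly_map(cores, feat))        # tensorized candidate state
c = 0.9 * c + 0.1 * candidate                         # stand-in for forget/input gates
h = np.tanh(c)

Swapping make_cores for a MERA-style layered contraction would change only the factorization inside mps_poly_map; the point of the sketch is simply that the dense weight matrix of a standard candidate update is replaced by a factorized tensor acting on an explicit polynomial feature expansion.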


Saved in:
Bibliographic Details
Main Authors: Xiangyi Meng, Tong Yang
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects:
Q
Online Access: https://doaj.org/article/3992829212424edfbac8e23f66a783d5
id oai:doaj.org-article:3992829212424edfbac8e23f66a783d5
record_format dspace
spelling oai:doaj.org-article:3992829212424edfbac8e23f66a783d5 2021-11-25T17:30:07Z
Entanglement-Structured LSTM Boosts Chaotic Time Series Forecasting
DOI: 10.3390/e23111491
ISSN: 1099-4300
Full text: https://www.mdpi.com/1099-4300/23/11/1491
Journal TOC: https://doaj.org/toc/1099-4300
Published online: 2021-11-01
Published in: Entropy, Vol 23, Iss 1491, p 1491 (2021)
institution DOAJ
collection DOAJ
language EN
topic quantum entanglement
recurrent neural networks
tensorization
chaotic dynamical system
chaotic time series forecasting
Science
Q
Astrophysics
QB460-466
Physics
QC1-999
description Traditional machine-learning methods are inefficient at capturing chaos in nonlinear dynamical systems, especially when the time difference Δt between consecutive steps is so large that the extracted time series appears random. Here, we introduce a new long short-term memory (LSTM)-based recurrent architecture that tensorizes the cell-state-to-state propagation, preserving the long-term memory of LSTM while enhancing the learning of short-term nonlinear complexity. We stress that the global minima of training can be reached most efficiently by our tensor structure, in which all nonlinear terms, up to some polynomial order, are treated explicitly and weighted equally. The efficiency and generality of the architecture are systematically investigated through theoretical analysis and experiment. In our design, we explicitly use two different many-body entanglement structures, matrix product states (MPS) and the multiscale entanglement renormalization ansatz (MERA), as physics-inspired tensor decomposition techniques. We find that MERA generally performs better than MPS, which leads us to conjecture that the learnability of chaos is determined not only by the number of free parameters but also by the tensor complexity, understood as how the entanglement entropy scales under different matricizations of the tensor.
format article
author Xiangyi Meng
Tong Yang
title Entanglement-Structured LSTM Boosts Chaotic Time Series Forecasting
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/3992829212424edfbac8e23f66a783d5
work_keys_str_mv AT xiangyimeng entanglementstructuredlstmboostschaotictimeseriesforecasting
AT tongyang entanglementstructuredlstmboostschaotictimeseriesforecasting
_version_ 1718412277624864768