StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting
Multivariate time-series forecasting derives key seasonality from past patterns to predict future time-series. Multi-step forecasting is crucial in the industrial sector because a continuous perspective leads to more effective decisions. However, because it depends on previous prediction values, multi-step forecasting is highly unstable. To mitigate this problem, we introduce a novel model, named stacked dual attention neural network (StackDA), based on an encoder-decoder. In dual attention, the initial attention is for the time dependency between the encoder and decoder, and the second attention is for the time dependency in the decoder time steps. We stack dual attention to stabilize the long-term dependency and multi-step forecasting problem. We add an autoregression component to resolve the lack of linear properties because our method is based on a nonlinear neural network model. Unlike the conventional autoregressive model, we propose skip autoregressive to deal with multiple seasonalities. Furthermore, we propose a denoising training method to take advantage of both the teacher forcing and without teacher forcing methods. We adopt multi-head fully connected layers for the variable-specific modeling owing to our multivariate time-series data. We add positional encoding to provide the model with time information to recognize seasonality more accurately. We compare our model performance with that of machine learning and deep learning models to verify our approach. Finally, we conduct various experiments, including an ablation study, a seasonality determination test, and a stack attention test, to demonstrate the performance of StackDA.
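The stacked dual attention named in the title (per the abstract: one attention over the encoder-decoder time dependency, a second over the decoder time steps, with the pair stacked) might be sketched as follows. This is a minimal single-head NumPy illustration; the scaled dot-product form, the residual connection, and the shapes are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: (T_q, d) queries against (T_k, d) keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def dual_attention_block(dec, enc):
    # First attention: time dependency between decoder and encoder steps.
    cross = attention(dec, enc, enc)
    # Second attention: time dependency within the decoder steps themselves.
    self_att = attention(cross, cross, cross)
    return dec + self_att  # residual connection (an assumption)

rng = np.random.default_rng(0)
enc = rng.normal(size=(24, 16))  # 24 encoder time steps, 16 hidden features
dec = rng.normal(size=(6, 16))   # 6 decoder (forecast) steps
out = dec
for _ in range(3):               # "stacked": repeat the dual attention block
    out = dual_attention_block(out, enc)
print(out.shape)                 # (6, 16)
```

Stacking repeated blocks, as sketched in the loop, is what the abstract credits with stabilizing long-term dependencies in multi-step forecasting.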
Saved in:
Main Authors: | Jungsoo Hong, Jinuk Park, Sanghyun Park |
---|---|
Format: | article |
Language: | EN |
Published: | IEEE, 2021 |
Subjects: | Attention mechanism; autoregressive model; denoising training; multi-step forecasting; multivariate time-series forecasting |
Online Access: | https://doaj.org/article/60abcef94d50444f8d5eaf4ecd88a1e6 |
id |
oai:doaj.org-article:60abcef94d50444f8d5eaf4ecd88a1e6 |
---|---|
record_format |
dspace |
spelling |
oai:doaj.org-article:60abcef94d50444f8d5eaf4ecd88a1e6 2021-11-05T23:00:23Z
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3122910
Published: 2021-01-01
https://doaj.org/article/60abcef94d50444f8d5eaf4ecd88a1e6
https://ieeexplore.ieee.org/document/9590491/
https://doaj.org/toc/2169-3536
Authors: Jungsoo Hong; Jinuk Park; Sanghyun Park
Publisher: IEEE; Format: article; Language: EN
Subjects: Attention mechanism; autoregressive model; denoising training; multi-step forecasting; multivariate time-series forecasting; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
Published in: IEEE Access, Vol 9, Pp 145955-145967 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Attention mechanism; autoregressive model; denoising training; multi-step forecasting; multivariate time-series forecasting; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
spellingShingle |
Attention mechanism; autoregressive model; denoising training; multi-step forecasting; multivariate time-series forecasting; Electrical engineering. Electronics. Nuclear engineering; TK1-9971; Jungsoo Hong; Jinuk Park; Sanghyun Park; StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
description |
Multivariate time-series forecasting derives key seasonality from past patterns to predict future time-series. Multi-step forecasting is crucial in the industrial sector because a continuous perspective leads to more effective decisions. However, because it depends on previous prediction values, multi-step forecasting is highly unstable. To mitigate this problem, we introduce a novel model, named stacked dual attention neural network (StackDA), based on an encoder-decoder. In dual attention, the initial attention is for the time dependency between the encoder and decoder, and the second attention is for the time dependency in the decoder time steps. We stack dual attention to stabilize the long-term dependency and multi-step forecasting problem. We add an autoregression component to resolve the lack of linear properties because our method is based on a nonlinear neural network model. Unlike the conventional autoregressive model, we propose skip autoregressive to deal with multiple seasonalities. Furthermore, we propose a denoising training method to take advantage of both the teacher forcing and without teacher forcing methods. We adopt multi-head fully connected layers for the variable-specific modeling owing to our multivariate time-series data. We add positional encoding to provide the model with time information to recognize seasonality more accurately. We compare our model performance with that of machine learning and deep learning models to verify our approach. Finally, we conduct various experiments, including an ablation study, a seasonality determination test, and a stack attention test, to demonstrate the performance of StackDA. |
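The skip autoregressive component mentioned in the description above can be illustrated as a linear AR model whose lags are spaced a seasonal "skip" interval apart rather than consecutive, so one linear component can follow a long seasonality. The function name, the single-weight model, and the sinusoidal example below are hypothetical; only the skip-lag idea comes from the abstract.

```python
import numpy as np

def skip_ar_forecast(series, weights, skip, horizon):
    """Forecast `horizon` steps with a linear AR model whose p lags are
    spaced `skip` steps apart (e.g. skip=24 for a daily seasonality in
    hourly data), instead of the consecutive lags of a standard AR(p)."""
    history = list(series)
    p = len(weights)
    preds = []
    for _ in range(horizon):
        # Gather the p most recent values at the seasonal skip interval.
        lags = [history[-(k * skip)] for k in range(1, p + 1)]
        y = float(np.dot(weights, lags))
        preds.append(y)
        history.append(y)  # multi-step: feed the prediction back in
    return preds

# A purely seasonal signal with period 24: a skip-AR with skip=24 and a
# single unit weight simply reproduces the value one season earlier.
series = [np.sin(2 * np.pi * t / 24) for t in range(96)]
preds = skip_ar_forecast(series, weights=[1.0], skip=24, horizon=24)
```

In this sketch the 24 forecast values exactly repeat the last observed season; the abstract's point is that such a linear component restores linear behavior that a purely nonlinear encoder-decoder lacks.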
format |
article |
author |
Jungsoo Hong; Jinuk Park; Sanghyun Park |
author_facet |
Jungsoo Hong; Jinuk Park; Sanghyun Park |
author_sort |
Jungsoo Hong |
title |
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
title_short |
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
title_full |
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
title_fullStr |
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
title_full_unstemmed |
StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting |
title_sort |
stackda: a stacked dual attention neural network for multivariate time-series forecasting |
publisher |
IEEE |
publishDate |
2021 |
url |
https://doaj.org/article/60abcef94d50444f8d5eaf4ecd88a1e6 |
work_keys_str_mv |
AT jungsoohong stackdaastackeddualattentionneuralnetworkformultivariatetimeseriesforecasting AT jinukpark stackdaastackeddualattentionneuralnetworkformultivariatetimeseriesforecasting AT sanghyunpark stackdaastackeddualattentionneuralnetworkformultivariatetimeseriesforecasting |
_version_ |
1718443982987460608 |