Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning

Algorithmic trading allows investors to avoid emotional and irrational trading decisions and helps them make profits using modern computer technology. In recent years, reinforcement learning has yielded promising results for algorithmic trading. Two prominent challenges in algorithmic trading with reinforcement learning are (1) extracting robust features and (2) learning a profitable trading policy. Another challenge is that it was previously often assumed that both long and short positions are always possible in stock trading; however, taking a short position is risky or sometimes impossible in practice. We propose a practical algorithmic trading method, SIRL-Trader, which achieves good profit using only long positions. SIRL-Trader uses offline/online state representation learning (SRL) and imitative reinforcement learning. In offline SRL, we apply dimensionality reduction and clustering to extract robust features, whereas in online SRL we co-train a regression model with a reinforcement learning model to provide accurate state information for decision-making. In imitative reinforcement learning, we incorporate a behavior cloning technique with the twin-delayed deep deterministic policy gradient (TD3) algorithm and apply multistep learning and dynamic delay to TD3. The experimental results show that SIRL-Trader yields higher profits and offers superior generalization ability compared with state-of-the-art methods.
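The abstract describes offline SRL as dimensionality reduction followed by clustering over historical market features. The record does not name the specific algorithms used, so the sketch below uses PCA and k-means purely as illustrative stand-ins, with randomly generated data in place of real technical indicators.

```python
# Hedged sketch of the offline SRL idea from the abstract: compress raw
# per-day feature vectors, then cluster the compressed vectors so each
# trading day gets a regime-style label. PCA and k-means are assumptions,
# not the paper's confirmed choices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def offline_srl(raw_features: np.ndarray, n_components: int = 8, n_clusters: int = 4):
    """raw_features: (num_days, num_indicators) matrix of market indicators."""
    pca = PCA(n_components=n_components)
    compressed = pca.fit_transform(raw_features)        # low-dimensional, denoised features
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(compressed)
    regime_labels = kmeans.labels_                       # one cluster id per trading day
    return compressed, regime_labels

# Example with synthetic data: 1,000 trading days, 32 raw indicators
states, regimes = offline_srl(np.random.randn(1000, 32))
```

The imitative component is described as behavior cloning combined with TD3. Below is a minimal sketch of one common way to combine the two in the actor objective; the actual weighting, the expert (demonstration) actions, and the multistep-learning and dynamic-delay modifications used by SIRL-Trader are not specified in this record, so `expert_actions` and `bc_weight` are hypothetical placeholders.

```python
# Hedged sketch of a behavior-cloning-regularized actor loss on top of TD3.
# This mirrors the generic "maximize Q + stay close to expert actions"
# pattern; it is not the paper's exact formulation.
import torch
import torch.nn.functional as F

def imitative_actor_loss(actor, critic, states, expert_actions, bc_weight=2.5):
    actions = actor(states)                         # policy's proposed trading actions
    q_values = critic(states, actions)              # TD3 critic's evaluation of those actions
    rl_term = -q_values.mean()                      # maximize Q  ->  minimize -Q
    bc_term = F.mse_loss(actions, expert_actions)   # behavior cloning toward expert actions
    return rl_term + bc_weight * bc_term
```

In a full TD3 training loop, a loss of this shape would take the place of the standard deterministic-policy-gradient actor loss, while the clipped double-Q critics and delayed policy updates remain as in vanilla TD3.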


Bibliographic Details
Main Authors: Deog-Yeong Park, Ki-Hoon Lee
Format: article
Language: EN
Published: IEEE, 2021
Subjects: Algorithmic trading, deep learning, state representation learning, imitation learning, reinforcement learning
Online Access: https://doaj.org/article/fec1d81fddfe4844bbcbdf81d0705b41
id oai:doaj.org-article:fec1d81fddfe4844bbcbdf81d0705b41
record_format dspace
spelling oai:doaj.org-article:fec1d81fddfe4844bbcbdf81d0705b41 (2021-11-20T00:01:02Z)
issn 2169-3536
doi 10.1109/ACCESS.2021.3127209
url https://doaj.org/article/fec1d81fddfe4844bbcbdf81d0705b41
url https://ieeexplore.ieee.org/document/9611246/
url https://doaj.org/toc/2169-3536
source IEEE Access, Vol 9, Pp 152310-152321 (2021)
institution DOAJ
collection DOAJ
language EN
topic Algorithmic trading
deep learning
state representation learning
imitation learning
reinforcement learning
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
description Algorithmic trading allows investors to avoid emotional and irrational trading decisions and helps them make profits using modern computer technology. In recent years, reinforcement learning has yielded promising results for algorithmic trading. Two prominent challenges in algorithmic trading with reinforcement learning are (1) extracting robust features and (2) learning a profitable trading policy. Another challenge is that it was previously often assumed that both long and short positions are always possible in stock trading; however, taking a short position is risky or sometimes impossible in practice. We propose a practical algorithmic trading method, SIRL-Trader, which achieves good profit using only long positions. SIRL-Trader uses offline/online state representation learning (SRL) and imitative reinforcement learning. In offline SRL, we apply dimensionality reduction and clustering to extract robust features, whereas in online SRL we co-train a regression model with a reinforcement learning model to provide accurate state information for decision-making. In imitative reinforcement learning, we incorporate a behavior cloning technique with the twin-delayed deep deterministic policy gradient (TD3) algorithm and apply multistep learning and dynamic delay to TD3. The experimental results show that SIRL-Trader yields higher profits and offers superior generalization ability compared with state-of-the-art methods.
format article
author Deog-Yeong Park
Ki-Hoon Lee
author_sort Deog-Yeong Park
title Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning
publisher IEEE
publishDate 2021
url https://doaj.org/article/fec1d81fddfe4844bbcbdf81d0705b41
_version_ 1718419874831663104