Deep Reinforcement Learning for Trading—A Critical Survey
Deep reinforcement learning (DRL) has achieved significant results in many machine learning (ML) benchmarks. In this short survey, we provide an overview of DRL applied to trading on financial markets with the purpose of unravelling common structures used in the trading community using DRL, as well as discovering common issues and limitations of such approaches. We also include a short corpus summarization using Google Scholar. Moreover, we discuss how one can use *hierarchy* for dividing the problem space, as well as *model-based RL* to learn a world model of the trading environment which can be used for prediction. In addition, multiple *risk measures* are defined and discussed, which not only provide a way of quantifying the performance of various algorithms, but can also act as (dense) reward-shaping mechanisms for the agent. We discuss in detail the various *state representations* used for financial markets, which we consider critical for the success and efficiency of such DRL agents. The market in focus for this survey is the cryptocurrency market; the results of this survey are twofold: first, to find the most promising directions for further research, and second, to show how a lack of consistency in the community can significantly impede research and the development of DRL agents for trading.
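The abstract notes that risk measures can double as dense, per-step reward-shaping signals for the agent. As a minimal sketch only (the rolling Sharpe-style ratio, the 64-step window, and the 0.1 shaping weight are assumptions made for illustration, not definitions taken from the survey), such a shaped reward could combine the raw step return with a risk-adjusted bonus:

```python
# Minimal sketch (not taken from the paper): using a risk measure as a dense,
# per-step reward-shaping term for a trading agent. The rolling Sharpe-style
# ratio, the 64-step window, and the 0.1 weight are illustrative assumptions.
from collections import deque
import math


def rolling_sharpe(returns, eps=1e-8):
    """Mean / std of recent per-step returns (a simple risk-adjusted measure)."""
    n = len(returns)
    if n < 2:
        return 0.0
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return mean / (math.sqrt(var) + eps)


def shaped_reward(step_return, history, risk_weight=0.1):
    """Dense reward: raw per-step return plus a risk-adjusted bonus.

    `history` is a bounded deque of recent returns; its maxlen sets the window.
    """
    history.append(step_return)
    return step_return + risk_weight * rolling_sharpe(history)


if __name__ == "__main__":
    history = deque(maxlen=64)  # rolling window of per-step returns
    for step_return in [0.010, -0.004, 0.002, 0.007, -0.001]:
        r = shaped_reward(step_return, history)
        print(f"step return {step_return:+.3f} -> shaped reward {r:+.4f}")
```

The weight on the risk term is a design choice: a larger value pushes the agent toward risk-adjusted behaviour at the expense of raw per-step profit.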
Saved in:
Main Author: | Adrian Millea |
---|---|
Format: | article |
Language: | EN |
Published: | MDPI AG, 2021 |
Subjects: | deep reinforcement learning; model-based RL; hierarchy; trading; cryptocurrency; foreign exchange |
Online Access: | https://doaj.org/article/e2617242a51b4451a6674c11d77c1400 |
id |
oai:doaj.org-article:e2617242a51b4451a6674c11d77c1400 |
---|---|
record_format |
dspace |
doi |
10.3390/data6110119 |
issn |
2306-5729 |
publish_date |
2021-11-01 |
fulltext_url |
https://www.mdpi.com/2306-5729/6/11/119 |
journal_toc |
https://doaj.org/toc/2306-5729 |
source |
Data, Vol 6, Iss 11, p 119 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
deep reinforcement learning; model-based RL; hierarchy; trading; cryptocurrency; foreign exchange; Bibliography. Library science. Information resources; Z |
format |
article |
author |
Adrian Millea |
title |
Deep Reinforcement Learning for Trading—A Critical Survey |
publisher |
MDPI AG |
publishDate |
2021 |
url |
https://doaj.org/article/e2617242a51b4451a6674c11d77c1400 |
_version_ |
1718412500685291520 |