Deep Reinforcement Learning for Trading—A Critical Survey

Bibliographic Details
Main Author: Adrian Millea
Format: Article
Language: English
Published: MDPI AG, 2021
Subjects: Z
Online Access: https://doaj.org/article/e2617242a51b4451a6674c11d77c1400

Description
Summary: Deep reinforcement learning (DRL) has achieved significant results in many machine learning (ML) benchmarks. In this short survey, we provide an overview of DRL applied to trading on financial markets, with the aim of unravelling the common structures used by the trading community when applying DRL, as well as uncovering common issues and limitations of such approaches. We also include a short corpus summarization using Google Scholar. Moreover, we discuss how hierarchy can be used to divide the problem space, and how model-based RL can learn a world model of the trading environment which can then be used for prediction. In addition, multiple risk measures are defined and discussed; these not only provide a way of quantifying the performance of various algorithms, but can also act as (dense) reward-shaping mechanisms for the agent. We discuss in detail the various state representations used for financial markets, which we consider critical for the success and efficiency of such DRL agents. The market in focus for this survey is the cryptocurrency market. The aims of this survey are twofold: firstly, to find the most promising directions for further research, and secondly, to show how a lack of consistency in the community can significantly impede research and the development of DRL agents for trading.
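
One concrete way to read the abstract's remark that risk measures can double as (dense) reward-shaping mechanisms is sketched below. This is a hypothetical illustration, not code from the survey: it computes a rolling Sharpe-style ratio over recent portfolio returns and rewards the agent for the step-to-step improvement in that ratio, so a learning signal arrives at every step rather than only at episode end. The window size, epsilon value, and class name are illustrative assumptions.

import numpy as np
from collections import deque

class SharpeShapedReward:
    """Dense reward shaping from a rolling Sharpe-style ratio.

    Illustrative sketch only: the window size, epsilon, and the choice
    to reward the change in the ratio are assumptions, not details
    taken from the survey.
    """

    def __init__(self, window=64, eps=1e-8):
        self.returns = deque(maxlen=window)   # recent per-step returns
        self.eps = eps                        # avoids division by zero
        self.prev_ratio = 0.0

    def _ratio(self):
        r = np.asarray(self.returns, dtype=float)
        if r.size < 2:
            return 0.0
        # Mean return over volatility: a simple risk-adjusted measure.
        return float(r.mean() / (r.std() + self.eps))

    def step(self, portfolio_return):
        """Record one step's return and emit a dense shaped reward."""
        self.returns.append(portfolio_return)
        ratio = self._ratio()
        # Reward the improvement in risk-adjusted performance so the
        # agent is guided at every step (dense reward shaping).
        reward = ratio - self.prev_ratio
        self.prev_ratio = ratio
        return reward

# Hypothetical usage inside a trading-environment step loop:
shaper = SharpeShapedReward(window=64)
for pnl in [0.010, -0.004, 0.002]:   # per-step portfolio returns
    r = shaper.step(pnl)             # dense reward fed to the DRL agent

Rewarding the change in the ratio, rather than its level, is one common way to keep the shaped signal roughly zero-mean and per-step; other formulations (e.g., the differential Sharpe ratio) serve the same purpose.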