Persistence in factor-based supervised learning models

Bibliographic details
Main author: Guillaume Coqueret
Format: article
Language: EN
Published: KeAi Communications Co., Ltd., 2022
Subjects: C45, C53, G11, G12
Online access: https://doaj.org/article/3d705e58b42b4cf7a6d9cbe210af6116
Description
Abstract: In this paper, we document the importance of memory in machine learning (ML)-based models relying on firm characteristics for asset pricing. We find that predictive algorithms perform best when they are trained on long samples, with long-term returns as dependent variables. In addition, we report that persistent features play a prominent role in these models. When applied to portfolio choice, we find that investors are always better off predicting annual returns, even when rebalancing at higher frequencies (monthly or quarterly). Our results remain robust to transaction costs and risk scaling, thus providing useful guidance to quantitative asset managers.
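
To illustrate the setup the abstract describes, the sketch below trains a tree-based learner on firm characteristics with annual returns as the dependent variable and then uses that forecast as a portfolio ranking signal, as one would when rebalancing monthly or quarterly. This is not the paper's implementation: the data are synthetic, and all names and parameters (observation counts, number of characteristics, the top-decile bucket) are hypothetical choices made only for illustration.

# Minimal sketch (not the paper's code): predicting long-horizon (annual)
# returns from firm characteristics with a tree-based learner.
# All data are synthetic; all parameter choices are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_obs, n_features = 5_000, 10  # firm-month observations, characteristics

# Synthetic firm characteristics (stand-ins for size, momentum, volatility, ...).
X = rng.normal(size=(n_obs, n_features))

# Hypothetical 12-month-ahead returns: a weak persistent signal plus noise,
# playing the role of the "long-term returns as dependent variables".
signal = 0.02 * X[:, 0] - 0.01 * X[:, 1]
y_annual = signal + rng.normal(scale=0.2, size=n_obs)

# Train on a long sample (all but the most recent observations) ...
train, test = slice(0, 4_000), slice(4_000, None)
model = GradientBoostingRegressor(max_depth=3, n_estimators=200)
model.fit(X[train], y_annual[train])

# ... and use the annual-return forecast as the ranking signal, even if the
# portfolio itself is rebalanced monthly or quarterly.
pred = model.predict(X[test])
long_leg = pred >= np.quantile(pred, 0.9)  # top-decile "buy" bucket
print(f"Mean realized annual return, long leg: {y_annual[test][long_leg].mean():.3f}")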