Reinforcement Learning for Optimal Execution When Liquidity Is Time-Varying

Andrea Macrì & Fabrizio Lillo

Applied Mathematical Finance · 2024 · article · https://doi.org/10.1080/1350486x.2025.2490157
AJG 2 · ABDC B
Weight: 0.47

Abstract

Optimal execution is an important problem faced by any trader. Most solutions are based on the assumption of constant market impact, while liquidity is known to be dynamic. Moreover, models with time-varying liquidity typically assume that it is observable, despite the fact that, in reality, it is latent and hard to measure in real time. In this paper we show that the use of Double Deep Q-learning, a form of Reinforcement Learning based on neural networks, is able to learn optimal trading policies when liquidity is time-varying. Specifically, we consider an Almgren-Chriss framework with temporary and permanent impact parameters following several deterministic and stochastic dynamics. Using extensive numerical experiments, we show that the trained algorithm learns the optimal policy when the analytical solution is available, and overcomes benchmarks and approximated solutions when the solution is not available.
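The abstract's setting, an Almgren-Chriss execution problem where the temporary and permanent impact coefficients vary over time, can be sketched with a short cost function. This is a minimal illustration only: the quadratic cost form is the standard discrete Almgren-Chriss approximation, and the parameter values and sinusoidal liquidity path are assumptions for illustration, not the paper's calibration or its Double Deep Q-learning policy.

```python
import numpy as np

def execution_cost(schedule, eta, gamma):
    """Expected impact cost of selling `schedule[t]` shares at step t.

    eta[t]   : temporary impact coefficient at step t (time-varying liquidity)
    gamma[t] : permanent impact coefficient at step t

    Simplified discrete Almgren-Chriss form: temporary cost is quadratic in
    the trade size, permanent cost is paid on the inventory still to be sold.
    """
    schedule = np.asarray(schedule, dtype=float)
    remaining = np.cumsum(schedule[::-1])[::-1]   # inventory held entering each step
    temporary = np.sum(eta * schedule**2)         # ~ eta_t * v_t^2 per trade
    permanent = np.sum(gamma * schedule * (remaining - schedule))
    return temporary + permanent

T, X = 10, 1_000.0                       # trading steps, total shares to sell
twap = np.full(T, X / T)                 # uniform (TWAP) benchmark schedule
# Hypothetical liquidity path: temporary impact rises mid-horizon.
eta = 1e-4 * (1 + 0.5 * np.sin(np.linspace(0, np.pi, T)))
gamma = np.full(T, 1e-5)                 # constant permanent impact, for contrast

print(execution_cost(twap, eta, gamma))
```

Under a time-varying `eta`, schedules that shift volume away from the low-liquidity window beat TWAP, which is the kind of adaptation the trained agent is reported to learn.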

4 citations


Cite this paper

https://doi.org/10.1080/1350486x.2025.2490157

Or copy a formatted citation

@article{macri2024,
  title        = {{Reinforcement Learning for Optimal Execution When Liquidity Is Time-Varying}},
  author       = {Macrì, Andrea and Lillo, Fabrizio},
  journal      = {Applied Mathematical Finance},
  year         = {2024},
  doi          = {10.1080/1350486x.2025.2490157},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.47

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact    0.38 × 0.40 = 0.15
M · momentum           0.60 × 0.15 = 0.09
V · venue signal       0.50 × 0.05 = 0.03
R · text relevance †   0.50 × 0.40 = 0.20
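The headline weight of 0.47 is the weighted sum of the four component scores under the Balanced-mode mix, rounded to two decimals. A sketch of that arithmetic (the labels follow the breakdown above; the dict layout is just for illustration):

```python
# Recompute the evidence weight from the Balanced-mode breakdown.
scores  = {"F": 0.38, "M": 0.60, "V": 0.50, "R": 0.50}   # component scores
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}   # Balanced-mode mix

evidence_weight = sum(scores[k] * weights[k] for k in scores)
print(round(evidence_weight, 2))  # → 0.47
```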

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.