Reinforcement learning for continuous-time optimal execution: actor–critic algorithm and error analysis
Boyu Wang et al.
Abstract
We propose an actor–critic reinforcement learning (RL) algorithm for the optimal execution problem. Working under the celebrated Almgren–Chriss model, we allow stochastic policies and formulate a mean–quadratic-variation objective regularised by Shannon entropy. We obtain in closed form the optimal value function and the optimal feedback policy, which is Gaussian. We then utilise these analytical results to parametrise our value function and control policy for RL. While standard actor–critic RL algorithms alternate between a policy evaluation update and a policy gradient update, we introduce a recalibration step in addition to these two updates, which turns out to be critical for convergence. We develop a finite-time error analysis of our algorithm and show that it converges linearly under suitable conditions on the learning rates. We test our algorithm in three different types of market simulators built on the Almgren–Chriss model, historical order-flow data, and a stochastic model of limit order books. Empirical results demonstrate the advantages of our algorithm over the classical statistical approach and a deep-learning-based RL algorithm.
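To make the alternating structure described in the abstract concrete, the following is a minimal sketch of an actor–critic loop of this kind on an Almgren–Chriss-style simulator. It assumes a quadratic value ansatz, a Gaussian policy whose mean is linear in the remaining inventory, and a purely illustrative recalibration of the policy variance; none of the parameter names, constants, or the specific recalibration rule come from the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): actor-critic loop for
# entropy-regularised execution under an Almgren-Chriss-style simulator, with a
# Gaussian policy and an extra "recalibration" step after the policy-evaluation
# and policy-gradient updates. All names and constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed Almgren-Chriss-style parameters.
T, dt = 1.0, 0.01        # trading horizon and time step
n_steps = int(T / dt)
sigma = 0.3              # price volatility
eta = 0.05               # temporary impact coefficient
phi = 1.0                # quadratic-variation (risk-aversion) weight
lam = 0.1                # entropy-regularisation temperature

# Low-dimensional parametrisations of the kind suggested by a closed-form
# solution: quadratic value function, Gaussian policy with inventory-linear mean.
theta = np.array([1.0])          # critic coefficient: V(q) ~ theta[0] * q**2
psi = np.array([1.0, 0.0])       # actor parameters: [mean slope, log std]
alpha_c, alpha_a = 0.05, 0.01    # critic / actor learning rates

def policy(q, psi):
    """Gaussian trading-rate policy with inventory-proportional mean."""
    return -psi[0] * q, np.exp(psi[1])

for episode in range(200):
    q = 1.0                                    # remaining inventory
    grad_c, grad_a = 0.0, np.zeros_like(psi)
    for _ in range(n_steps):
        mean, std = policy(q, psi)
        v = rng.normal(mean, std)              # sampled trading rate
        # Running cost: temporary impact + risk penalty - entropy bonus.
        entropy = 0.5 * np.log(2.0 * np.pi * np.e * std**2)
        running = (eta * v**2 + phi * sigma**2 * q**2 - lam * entropy) * dt
        q_next = q + v * dt
        # Temporal-difference error for the quadratic value ansatz.
        td = running + theta[0] * q_next**2 - theta[0] * q**2
        grad_c += td * q**2 / n_steps          # semi-gradient TD accumulator
        # Score-function policy-gradient accumulator for the Gaussian policy.
        z = (v - mean) / std
        grad_a += td * np.array([-q * z / std, z**2 - 1.0]) / n_steps
        q = q_next

    theta += alpha_c * grad_c                  # (1) policy evaluation update
    psi -= alpha_a * grad_a                    # (2) policy gradient update
    # (3) "recalibration": re-anchor the policy variance to the value suggested
    # by the entropy-regularised LQ structure; the paper's actual recalibration
    # step is not specified here and this line is purely illustrative.
    psi[1] = 0.5 * np.log(lam / (2.0 * eta))
```

The three-step structure (evaluate, improve, recalibrate) mirrors the abstract's description; the recalibration shown here is only a stand-in for whatever consistency condition the paper enforces between the critic and the Gaussian policy.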