Asymptotically optimal policies for weakly coupled Markov decision processes

Diego Goldsztajn & Konstantin Avrachenkov

Journal of Applied Probability · 2026 · https://doi.org/10.1017/jpr.2026.10079 · preprint
AJG 2 · ABDC A

Weight: 0.50

Abstract

We consider the problem of maximizing the expected average reward obtained over an infinite time horizon by n weakly coupled Markov decision processes. Our setup is a substantial generalization of the multi-armed restless bandit problem that allows for multiple actions and constraints. We establish a connection with a deterministic and continuous-variable control problem where the objective is to maximize the average reward derived from an occupancy measure that represents the empirical distribution of the processes when $n \to \infty$ . We show that a solution of this fluid problem can be used to construct policies for the weakly coupled processes that achieve the maximum expected average reward as $n \to \infty$ , and we give sufficient conditions for the existence of solutions. Under certain assumptions on the constraints, we prove that these conditions are automatically satisfied if the unconstrained single-process problem admits a suitable unichain and aperiodic policy. In particular, the assumptions include multi-armed restless bandits and a broad class of problems with multiple actions and inequality constraints. Also, the policies can be constructed in an explicit way in these cases. Our theoretical results are complemented by several concrete examples and numerical experiments, which include multichain setups that are covered by the theoretical results.


Cite this paper

https://doi.org/10.1017/jpr.2026.10079

Or copy a formatted citation

@article{goldsztajn2026,
  title        = {{Asymptotically optimal policies for weakly coupled Markov decision processes}},
  author       = {Goldsztajn, Diego and Avrachenkov, Konstantin},
  journal      = {Journal of Applied Probability},
  year         = {2026},
  doi          = {10.1017/jpr.2026.10079},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact    0.50 × 0.40 = 0.20
M · momentum           0.50 × 0.15 = 0.075
V · venue signal       0.50 × 0.05 = 0.025
R · text relevance †   0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
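The breakdown above is a weighted sum of the four component scores. A minimal sketch of that arithmetic, assuming the balanced-mode weights shown (the dictionary names are illustrative, not part of the site's API):

```python
# Reproduce the evidence-weight computation: each component score is
# multiplied by its balanced-mode weight, then the products are summed.
scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}   # per-component scores
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # balanced-mode weights

weight = sum(scores[k] * weights[k] for k in scores)
print(round(weight, 2))  # ≈ 0.50, matching the displayed evidence weight
```

Because every component score here is 0.50 and the weights sum to 1.00, the total collapses to 0.50; with a query-specific relevance score for R, the total would differ.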