A Sampling-Based Gittins Index Approximation
Stef Baas et al.
Abstract
A sampling-based method is introduced to approximate the Gittins index for a general family of alternative bandit processes. The approximation consists of a truncation of both the optimization horizon and the support of the immediate rewards, an optimal stopping value approximation, and a stochastic approximation procedure. Finite-time error bounds are given for the three approximations, leading to a procedure to construct a confidence interval for the Gittins index using a finite number of Monte Carlo samples, as well as an epsilon-optimal policy for the family of alternative bandit processes. Proofs are given for almost sure convergence and a central limit theorem for the sampling-based Gittins index approximation. In a numerical study, the quality of the approximation is verified for the Bernoulli bandit and the Gaussian bandit with known variance, and the method is shown to significantly outperform Thompson sampling and Bayesian upper confidence bound algorithms for a novel random effects multi-armed bandit.
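To make the quantity being approximated concrete, the following is a minimal illustrative sketch, not the paper's estimator. It computes the Gittins index of a Bernoulli arm via the classical retirement-reward characterization: bisect on a retirement reward lambda and compare it against the optimal stopping value of the arm, here computed by an exact truncated-horizon dynamic program over the Beta posterior (the paper instead approximates this stopping value from Monte Carlo samples with a stochastic approximation scheme). All function and parameter names are hypothetical.

```python
from functools import lru_cache


def gittins_bernoulli(a, b, gamma=0.9, horizon=50, tol=1e-4):
    """Approximate the Gittins index of a Bernoulli arm with Beta(a, b)
    prior by bisection on the retirement reward lambda, using a
    truncated-horizon dynamic program for the optimal stopping value."""

    def stopping_value(lam):
        retire = lam / (1.0 - gamma)  # value of retiring forever at reward lam

        @lru_cache(maxsize=None)
        def v(t, s, f):
            # t pulls made so far; s successes and f failures beyond the prior
            if t == horizon:
                return retire  # horizon truncation: forced retirement
            p = (a + s) / (a + s + b + f)  # posterior mean success probability
            cont = (p * (1.0 + gamma * v(t + 1, s + 1, f))
                    + (1.0 - p) * gamma * v(t + 1, s, f + 1))
            return max(retire, cont)  # optimal stopping: retire or continue

        return v(0, 0, 0), retire

    # The Gittins index is the lambda at which the decision maker is
    # indifferent between retiring immediately and continuing to play.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        value, retire = stopping_value(lam)
        if value > retire + 1e-12:  # continuing beats retiring: index > lam
            lo = lam
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

For a uniform Beta(1, 1) prior the index exceeds the posterior mean 0.5, reflecting the exploration bonus; the truncation error is controlled by the discount factor, since rewards beyond the horizon contribute at most gamma^horizon / (1 - gamma).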