Beyond O(√T) Regret: Decoupling Learning and Decision Making in Online Linear Programming
Wenzhi Gao et al.
Abstract
Breaking the O(√T) Barrier in Online Linear Programming
How can fast first-order methods for online linear programming (OLP) match the performance of more computationally intensive LP-based approaches? In the paper "Beyond O(√T) Regret: Decoupling Learning and Decision Making in Online Linear Programming," the authors develop a new framework that improves the regret guarantees of scalable first-order algorithms for OLP. They show that when the dual problem satisfies a mild error bound condition, first-order methods can achieve o(√T) regret, including [Formula: see text] regret in continuous support settings and [Formula: see text] regret in finite support settings. The key insight is to decouple learning from decision making: the algorithm first learns an approximate dual solution and then localizes its decisions to a neighborhood around that solution via an adaptive procedure. This decoupling narrows the gap between efficient gradient-based methods and LP-based benchmarks, providing strong theoretical guarantees while preserving scalability for applications such as revenue management, online advertising, and cloud computing.
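To make the decoupling idea concrete, the following minimal Python sketch shows what a two-phase first-order scheme for a toy online LP can look like: a learning phase that estimates a dual price vector with projected subgradient steps, followed by a decision phase whose dual iterates are kept inside a small ball around that estimate. This is an illustrative sketch under assumed choices (the function name decoupled_olp, the learn_frac and radius parameters, and the 1/√T step size are all assumptions), not the authors' exact algorithm or the source of the regret guarantees stated above.

```python
import numpy as np

def decoupled_olp(r, A, b, learn_frac=0.2, radius=0.5):
    """Illustrative two-phase (learn, then decide) first-order method for a
    toy online LP:
        max  sum_t r[t] * x[t]   s.t.  sum_t A[:, t] * x[t] <= b,  x[t] in {0, 1},
    where the columns (r[t], A[:, t]) arrive one at a time.
    Phase 1 estimates a dual price vector p_hat with projected subgradient
    steps; Phase 2 makes decisions while keeping the dual iterate in a small
    ball around p_hat (the "localization" step). Names and parameters are
    hypothetical, chosen only to illustrate the decoupling structure.
    """
    m, T = A.shape
    d = b / T                            # average per-period resource budget
    p = np.zeros(m)                      # dual price iterate
    p_hat = p.copy()                     # learned dual estimate (frozen after Phase 1)
    remaining = np.array(b, dtype=float) # remaining resource budget
    x = np.zeros(T)                      # accept/reject decisions
    T0 = int(learn_frac * T)             # length of the learning phase

    for t in range(T):
        # Accept if the reward beats the priced-out resource cost and budget allows.
        accept = (r[t] > p @ A[:, t]) and np.all(A[:, t] <= remaining)
        x[t] = 1.0 if accept else 0.0
        remaining -= A[:, t] * x[t]

        # Projected dual subgradient step.
        g = A[:, t] * x[t] - d
        p = np.maximum(p + g / np.sqrt(T), 0.0)

        if t == T0 - 1:
            p_hat = p.copy()             # freeze the Phase-1 dual estimate
        elif t >= T0:
            # Localization: pull the dual iterate back into a ball around p_hat.
            dev = p - p_hat
            norm = np.linalg.norm(dev)
            if norm > radius:
                p = p_hat + dev * (radius / norm)

    return x, p
```

For example, with a reward vector r of length T, an m-by-T constraint matrix A, and a budget vector b, calling decoupled_olp(r, A, b) returns the accept/reject sequence and the final dual prices; the regret behavior described in the abstract is a property of the paper's algorithm and analysis, not of this simplified sketch.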