Sequential Gibbs Posteriors with Applications to Principal Component Analysis
Steven L. Winter et al.
Abstract
Gibbs posteriors are proportional to a prior distribution multiplied by an exponentiated loss function, with a key tuning parameter that weights the information in the loss relative to the prior and provides control of posterior uncertainty. Gibbs posteriors provide a principled framework for likelihood-free Bayesian inference; however, in many situations, the inclusion of a single tuning parameter inevitably leads to poor uncertainty quantification. In particular, regardless of the value of the parameter, credible regions are far from attaining nominal frequentist coverage, even in large samples. We propose a sequential extension to Gibbs posteriors to address this problem. We prove that the proposed sequential posterior exhibits concentration and satisfies a Bernstein–von Mises theorem, which holds under easily verifiable conditions in Euclidean space and on manifolds. As a by-product, we obtain the first Bernstein–von Mises theorem for traditional likelihood-based Bayesian posteriors on manifolds. All methods are illustrated with an application to principal component analysis.
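The abstract's description of a Gibbs posterior — prior times exponentiated loss, with a tuning parameter weighting the loss against the prior — can be written out explicitly. The following is a sketch of the standard Gibbs-posterior form; the symbols ($\ell$, $\omega$, $\pi$) are conventional notation, not taken from the paper itself:

```latex
% Given data x_{1:n}, a loss function \ell(\theta, x), a prior \pi(\theta),
% and a learning rate \omega > 0 (the "key tuning parameter"),
% the Gibbs posterior is
\[
  \pi_n(\theta \mid x_{1:n})
  \;\propto\;
  \pi(\theta)\,
  \exp\!\Big( -\,\omega \sum_{i=1}^{n} \ell(\theta, x_i) \Big).
\]
% Larger \omega places more weight on the information in the loss
% relative to the prior and tightens the posterior; smaller \omega
% inflates posterior uncertainty. No likelihood is required.
```

Under this notation, the problem the abstract identifies is that a single fixed $\omega$ may be unable to calibrate credible regions to nominal frequentist coverage, motivating the sequential extension.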