LLM-based prior elicitation for Bayesian graphical modeling
Nikola Sekulovski et al.
Abstract
In the Bayesian graphical modeling framework, priors on network structure encode theoretical assumptions and uncertainty about the topology of psychological constructs under study. For instance, the Bernoulli prior specifies the probability of each pairwise interaction, the Beta-Bernoulli prior governs the expected network density, and the Stochastic Block prior models the clustering of variables into groups. In practice, however, specifying informed hyperparameters is challenging: theoretical guidance is limited, and default choices can be overly simplistic or restrictive. To address this, we introduce an LLM-based prior elicitation framework in which a large language model provides inclusion judgments for each variable pair. These judgments are converted into edge-specific prior probabilities for the Bernoulli prior and used to derive hyperparameters for the Beta-Bernoulli and Stochastic Block priors. To make the approach accessible, we provide an R package, bgmElicit, with a Shiny app implementing the methodology. We illustrate the framework in two examples. First, a validation study on a subset of a PTSD network from a meta-analysis compares OpenAI GPT models across several conditions. Second, an empirical analysis of 17 PTSD symptoms shows that elicited priors can modestly strengthen the evidence for edge presence and absence. Taken together, this work is a proof of concept, intended to complement expert judgment and prior sensitivity checks.
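To make the conversion step concrete, the sketch below shows one standard way a vector of elicited inclusion judgments could be mapped to the prior hyperparameters the abstract names. It is a minimal illustration under stated assumptions: the function `elicited_to_priors` is hypothetical (it is not part of the bgmElicit API), and the Beta moment-matching derivation is a common textbook choice that may differ from the paper's exact procedure.

```r
# Hypothetical sketch, not the bgmElicit API.
# p: elicited inclusion probabilities, one per variable pair,
# e.g. obtained by averaging repeated LLM yes/no judgments.

elicited_to_priors <- function(p) {
  stopifnot(all(p > 0 & p < 1))

  # Bernoulli prior: use each elicited probability directly as the
  # edge-specific prior inclusion probability.
  bernoulli <- p

  # Beta-Bernoulli prior: match the mean and variance of the elicited
  # probabilities to a Beta(alpha, beta) distribution on network density.
  # For Beta: mean m = a / (a + b), variance v = m (1 - m) / (a + b + 1).
  m <- mean(p)
  v <- var(p)
  stopifnot(v > 0, v < m * (1 - m))  # required for valid moment matching
  k     <- m * (1 - m) / v - 1       # implied concentration a + b
  alpha <- m * k
  beta  <- (1 - m) * k

  list(bernoulli = bernoulli, alpha = alpha, beta = beta)
}

# Example: judgments for choose(5, 2) = 10 pairs of a 5-node network.
p <- c(0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.5, 0.3, 0.85, 0.15)
elicited_to_priors(p)
```

Moment matching is only one defensible mapping; the key point is that a single vector of elicited inclusion probabilities can feed edge-specific and density-level priors alike.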