Using large language models to analyze political texts through natural language understanding

Kenneth Benoit et al.

American Journal of Political Science · 2026 · https://doi.org/10.1111/ajps.70050 · article
AJG 4* · ABDC A*
Weight
0.50

Abstract

Large language models (LLMs) offer scalable alternatives to human experts when analyzing political texts for meaning, using natural language understanding (NLU). Qualitative NLU methods relying on human experts are severely limited by cost and scalability. Statistical text‐as‐data methods are scalable but rely on strong and often unrealistic assumptions. We propose a systematic, scalable, and replicable method that can extend existing qualitative and quantitative approaches by using LLMs to interpret texts meaningfully rather than as mere data. Our ensemble means of LLM‐generated estimates of party positions on six key issue dimensions correlate highly with equivalent mean ratings by country specialists. When applied to coalition policy declarations, LLM estimates align more closely with standard models of government formation than hand‐coded estimates. We conclude with a discussion of the profound implications of modern LLMs for political text analysis.


Cite this paper

https://doi.org/10.1111/ajps.70050

Or copy a formatted citation

@article{benoit2026,
  title   = {{Using large language models to analyze political texts through natural language understanding}},
  author  = {Benoit, Kenneth and others},
  journal = {American Journal of Political Science},
  year    = {2026},
  doi     = {10.1111/ajps.70050},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact    0.50 × 0.40 = 0.20
M · momentum           0.50 × 0.15 = 0.07
V · venue signal       0.50 × 0.05 = 0.03
R · text relevance †   0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
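The evidence weight above is a weighted sum of the four component scores, with the weights taken from the "Balanced mode" line. A minimal sketch of that calculation in Python (the function name is illustrative, not part of Arbiter's actual code):

```python
# Illustrative reimplementation of the evidence-weight formula shown above,
# not Arbiter's own code. Weights come from the "Balanced mode" line and sum to 1.0.
WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores: dict[str, float]) -> float:
    """Weighted sum of the component scores (F, M, V, R)."""
    return sum(scores[k] * w for k, w in WEIGHTS.items())

# For this paper every component scores 0.50, giving an overall weight of 0.50:
print(evidence_weight({"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}))
```

Note that the per-component contributions displayed on the page (0.20, 0.07, 0.03, 0.20) are rounded to two decimals; the exact contributions sum to the displayed total of 0.50.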