Using large language models to analyze political texts through natural language understanding
Kenneth Benoit et al.
Abstract
Large language models (LLMs) offer scalable alternatives to human experts when analyzing political texts for meaning, using natural language understanding (NLU). Qualitative NLU methods relying on human experts are severely limited by cost and scalability. Statistical text‐as‐data methods are scalable but rely on strong and often unrealistic assumptions. We propose a systematic, scalable, and replicable method that extends existing qualitative and quantitative approaches by using LLMs to interpret texts meaningfully rather than as mere data. Our ensemble means of LLM‐generated estimates of party positions on six key issue dimensions correlate highly with equivalent mean ratings by country specialists. When applied to coalition policy declarations, LLM estimates align more closely with standard models of government formation than hand‐coded estimates do. We conclude with a discussion of the profound implications of modern LLMs for political text analysis.