On the Consistency of Automatic Scoring with Large Language Models

Mingfeng Xue et al.

Educational and Psychological Measurement · 2026 · article
https://doi.org/10.1177/00131644261418138
ABDC rating: A

Abstract

Large language models (LLMs) have shown great potential for automatic scoring. However, owing to differences in model characteristics, training materials, and training pipelines, scores can be inconsistent both within a single LLM that rates the same response multiple times and across different LLMs. This study investigates intra-LLM and inter-LLM scoring consistency for five LLMs (Claude, DeepSeek, Gemini, GPT, and Qwen), their variability under different temperature settings, and their relationship with scoring accuracy. It also proposes a voting strategy that aggregates scores from different LLMs to address inconsistent scoring. Using constructed-response items from a science education assessment and open-source data from the Automated Student Assessment Prize (ASAP), we find that: (a) LLMs generally exhibited almost perfect intra-LLM consistency regardless of temperature; (b) inter-LLM consistency was moderate, with higher agreement on items that were easier to score; (c) intra-LLM consistency consistently exceeded inter-LLM consistency, supporting the expectation that within-model consistency is an upper bound for cross-model agreement; (d) intra-LLM consistency was not associated with scoring accuracy, whereas inter-LLM consistency showed a strong positive relationship with accuracy; and (e) majority voting across LLMs improved scoring accuracy by leveraging the complementary strengths of different models.
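
To make the voting strategy concrete, here is a minimal Python sketch of intra-LLM agreement and cross-model majority voting. It is an illustration under assumptions, not the paper's implementation: the scores are invented, and the helper names (intra_llm_agreement, majority_vote) are our own.

from collections import Counter

# Hypothetical repeated ratings of one constructed response. The model
# names match the five LLMs in the abstract; the scores are made up.
repeated_scores = {
    "Claude":   [2, 2, 2],
    "DeepSeek": [1, 1, 1],
    "Gemini":   [2, 2, 2],
    "GPT":      [2, 2, 2],
    "Qwen":     [1, 1, 1],
}

def intra_llm_agreement(ratings):
    # Fraction of a model's repeated ratings that match its modal score.
    score, count = Counter(ratings).most_common(1)[0]
    return count / len(ratings)

def majority_vote(scores_by_model):
    # Reduce each model to one score (its mode), then let the most
    # common score across models win; ties break arbitrarily here.
    modes = [Counter(r).most_common(1)[0][0] for r in scores_by_model.values()]
    return Counter(modes).most_common(1)[0][0]

for model, ratings in repeated_scores.items():
    print(f"{model}: intra-LLM agreement = {intra_llm_agreement(ratings):.2f}")

print("Majority-vote score:", majority_vote(repeated_scores))  # -> 2

Taking each model's modal score before voting mirrors the abstract's finding that intra-LLM consistency is nearly perfect, so a single rating per model would yield almost the same vote.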


Cite this paper

https://doi.org/10.1177/00131644261418138

Or copy the BibTeX entry:

@article{xue2026,
  title        = {{On the Consistency of Automatic Scoring with Large Language Models}},
  author       = {Xue, Mingfeng and others},
  journal      = {Educational and Psychological Measurement},
  year         = {2026},
  doi          = {10.1177/00131644261418138},
}

Evidence weight: 0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact:   0.50 × 0.40 = 0.20
M · momentum:          0.50 × 0.15 = 0.07
V · venue signal:      0.50 × 0.05 = 0.03
R · text relevance †:  0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
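
For reference, the displayed weight is the factor-weighted sum of the four component scores. A minimal Python sketch of that arithmetic (the component names and Balanced-mode factors come from this page; the two-decimal rounding of the per-row contributions is our assumption):

components = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}  # per-component scores shown above
factors    = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # Balanced mode mix

# 0.20 + 0.075 + 0.025 + 0.20 = 0.50
weight = sum(components[k] * factors[k] for k in components)
print(f"Evidence weight: {weight:.2f}")  # -> 0.50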