From Agreement to Epistemic Alignment: A Signal Detection–Theoretic Model of Inter-Rater Reliability

Irene Gianeselli

Educational and Psychological Measurement · 2026 · Article
https://doi.org/10.1177/00131644261417643
ABDC A

Abstract

Inter-rater reliability is commonly assessed using chance-corrected agreement coefficients such as Cohen's κ, which summarize concordance among categorical judgments without modeling the inferential processes that generate them. As a result, κ is sensitive to prevalence imbalance, task difficulty, and heterogeneity in decision criteria and is often misinterpreted as a proxy for diagnostic accuracy or rater competence. This paper reframes inter-rater reliability within a signal detection-theoretic (SDT) framework in which categorical judgments arise from comparisons between latent continuous evidence and rater-specific decision thresholds. Within this generative model, κ can be interpreted as a bounded transformation of discrete strategic variance (i.e., the observable consequence of dispersion in latent decision criteria) rather than as a direct measure of epistemic alignment. To make this structure explicit, we introduce the Strategic Convergence Index (SCI), a normalized functional summarizing convergence in rater decision thresholds under an SDT generative process. SCI is not proposed as a standalone agreement coefficient but as a model-implied quantity whose interpretation depends on explicit assumptions about evidence distributions and decision rules. Monte Carlo simulations show that κ varies systematically with prevalence and perceptual discriminability even when decision-policy alignment is held constant, whereas SCI selectively tracks epistemic alignment and remains invariant to these factors. Supplementary model-based analyses further illustrate that SCI can be recovered as a stable system-level property even under latent-truth uncertainty, whereas individual thresholds may be weakly identified. Together, these results clarify the epistemic meaning of κ and motivate a decomposition of inter-rater reliability into outcome-level agreement and process-level alignment. 
By linking classical agreement statistics to an explicit generative model of judgment, the Strategic Convergence framework advances reliability assessment from description toward explanation.
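The abstract's central simulation claim, that κ shifts with prevalence even when raters' decision thresholds are perfectly aligned, can be reproduced in a few lines. The sketch below is illustrative only: it assumes an equal-variance Gaussian evidence model with independent evidence samples per rater, which is a common SDT setup but not necessarily the paper's exact specification, and the function names (`cohens_kappa`, `simulate_kappa`) are ours.

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary rating vectors."""
    po = np.mean(a == b)                      # observed agreement
    pa, pb = a.mean(), b.mean()               # marginal positive rates
    pe = pa * pb + (1 - pa) * (1 - pb)        # agreement expected by chance
    return (po - pe) / (1 - pe)

def simulate_kappa(prevalence, d_prime, thresholds, n=200_000, seed=0):
    """Two raters judge the same cases: each compares an independent
    latent Gaussian evidence sample against a rater-specific threshold."""
    rng = np.random.default_rng(seed)
    truth = rng.random(n) < prevalence        # latent true category
    # Equal-variance Gaussian evidence, shifted by d' for signal cases
    evidence = rng.normal(truth * d_prime, 1.0, size=(len(thresholds), n))
    ratings = evidence > np.asarray(thresholds)[:, None]
    return cohens_kappa(ratings[0], ratings[1])

# Identical thresholds = perfect decision-policy alignment, yet kappa
# drops sharply when the positive class becomes rare:
k_balanced = simulate_kappa(prevalence=0.50, d_prime=2.0, thresholds=[1.0, 1.0])
k_rare     = simulate_kappa(prevalence=0.05, d_prime=2.0, thresholds=[1.0, 1.0])
```

With these parameters κ falls to roughly a third of its balanced-prevalence value, while the raters' thresholds (and hence their decision-policy alignment) never change, which is exactly the confound the SCI is designed to factor out.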


Cite this paper

https://doi.org/10.1177/00131644261417643

Or copy a formatted citation

@article{gianeselli2026,
  title        = {{From Agreement to Epistemic Alignment: A Signal Detection–Theoretic Model of Inter-Rater Reliability}},
  author       = {Gianeselli, Irene},
  journal      = {Educational and Psychological Measurement},
  year         = {2026},
  doi          = {10.1177/00131644261417643},
}

Paste directly into BibTeX, Zotero, or your reference manager.


