Revisiting reliability with human and machine learning raters under scoring design and rater configuration in the many-facet Rasch model
Xingyao Xiao et al.
Abstract
Constructed-response (CR) items are widely used to assess higher-order skills but require human scoring, which introduces variability and is costly at scale. Machine learning (ML)-based scoring offers a scalable alternative, yet its psychometric consequences in rater-mediated models remain underexplored. This study examines how scoring design, rater bias, ML inconsistency, and model specification affect the reliability of ability estimation in polytomous CR assessments. Using Monte Carlo simulation, we manipulated human and ML rater bias, ML inconsistency, and scoring density (complete, overlapping, or isolated). Five estimation models were compared, including the Partial Credit Model (PCM) with fixed thresholds and the Many-Facet Partial Credit Model (MFPCM) with and without free calibration. Results showed that systematic bias, not random inconsistency, was the main source of error. Hybrid human-ML scoring improved estimation when raters were unbiased or exhibited opposing biases, but error compounded when biases aligned. Across designs, the PCM with fixed thresholds consistently outperformed more complex alternatives, while anchoring CR items to the selected-response metric stabilized MFPCM estimation. A real-data application replicated these patterns. Findings show that scoring design and bias structure, rather than model complexity, drive the benefits of hybrid scoring, and that anchoring offers a practical strategy for stabilizing estimation.
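For readers unfamiliar with the models compared here, a standard adjacent-category formulation of the many-facet partial credit model is sketched below (the notation is ours, not the paper's, and the study's exact specification may differ). For examinee n, item i, rater j, and score category k,

\[
\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \rho_j - \tau_k,
\]

where \(\theta_n\) is ability, \(\delta_i\) is item difficulty, \(\rho_j\) is rater severity, and \(\tau_k\) is the threshold between categories k-1 and k. Dropping the rater facet \(\rho_j\) and holding the thresholds fixed recovers the PCM-with-fixed-thresholds baseline that performed best in the simulations.

The following minimal sketch illustrates how a rater-mediated polytomous score could be simulated under this model; it is not the paper's simulation code, and all facet values (including the 0.5-logit ML severity bias) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mfpcm_score(theta, delta, rho, taus, rng):
    """Draw one polytomous score in 0..K from the adjacent-category model
    above: log-odds of category k vs. k-1 is theta - delta - rho - taus[k-1]."""
    # The log-numerator of each category is the cumulative sum of adjacent logits.
    logits = np.concatenate(([0.0], np.cumsum(theta - delta - rho - taus)))
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(taus) + 1, p=probs)

# Hypothetical facet values: one examinee and one CR item, scored by an
# unbiased human rater and by an ML rater with a 0.5-logit severity bias.
theta, delta = 0.5, 0.0
taus = np.array([-1.0, 0.0, 1.0])  # three thresholds -> scores 0..3
human = [mfpcm_score(theta, delta, 0.0, taus, rng) for _ in range(1000)]
ml = [mfpcm_score(theta, delta, 0.5, taus, rng) for _ in range(1000)]
print(np.mean(human), np.mean(ml))  # severity bias pulls ML scores down
```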