Item Difficulty Modeling Using Fine-tuned Small and Large Language Models

Ming Li et al.

Educational and Psychological Measurement · 2025 · Article · https://doi.org/10.1177/00131644251344973
ABDC rating: A

Abstract

This study investigates methods for item difficulty modeling in large-scale assessments using both small and large language models (LLMs). We introduce novel data augmentation strategies, including augmentation on the fly and distribution balancing, that surpass benchmark performances, demonstrating their effectiveness in mitigating data imbalance and improving model performance. Our results showed that fine-tuned small language models (SLMs) such as Bidirectional Encoder Representations from Transformers (BERT) and RoBERTa yielded lower root mean squared error than the first-place model in the BEA 2024 Shared Task competition, whereas domain-specific models like BioClinicalBERT and PubMedBERT did not provide significant improvements due to distributional gaps. Majority voting among SLMs enhanced prediction accuracy, reinforcing the benefits of ensemble learning. LLMs, such as GPT-4, exhibited strong generalization capabilities but struggled with item difficulty prediction, likely due to limited training data and the absence of explicit difficulty-related context. Chain-of-thought prompting and rationale generation approaches were explored but did not yield substantial improvements, suggesting that additional training data or more sophisticated reasoning techniques may be necessary. Embedding-based methods, particularly using NV-Embed-v2, showed promise but did not outperform our best augmentation strategies, indicating that capturing nuanced difficulty-related features remains a challenge.
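
To make the evaluation setup in the abstract concrete, here is a minimal Python sketch of combining several small-language-model difficulty predictions and scoring them with root mean squared error, the metric the abstract reports. The model labels, numeric values, and the use of simple averaging as the combination rule are illustrative assumptions, not the paper's exact majority-voting pipeline.

import numpy as np

# Hypothetical per-model difficulty predictions for five items (assumed values).
predictions = {
    "bert-base":  np.array([0.42, 0.71, 0.33, 0.58, 0.90]),
    "roberta":    np.array([0.45, 0.69, 0.30, 0.61, 0.88]),
    "pubmedbert": np.array([0.40, 0.75, 0.35, 0.55, 0.92]),
}
true_difficulty = np.array([0.44, 0.70, 0.31, 0.60, 0.89])  # assumed gold difficulties

# Simple averaging stands in here for the paper's majority-voting ensemble,
# whose exact combination rule is not given in the abstract.
ensemble = np.mean(list(predictions.values()), axis=0)

def rmse(pred, gold):
    # Root mean squared error between predicted and gold difficulties.
    return float(np.sqrt(np.mean((pred - gold) ** 2)))

for name, pred in predictions.items():
    print(f"{name:11s} RMSE = {rmse(pred, true_difficulty):.3f}")
print(f"{'ensemble':11s} RMSE = {rmse(ensemble, true_difficulty):.3f}")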

2 citations

Cite this paper

https://doi.org/10.1177/00131644251344973

Or copy a formatted citation

@article{li2025item,
  title        = {{Item Difficulty Modeling Using Fine-tuned Small and Large Language Models}},
  author       = {Li, Ming and others},
  journal      = {Educational and Psychological Measurement},
  year         = {2025},
  doi          = {10.1177/00131644251344973},
}

Paste directly into BibTeX, Zotero, or your reference manager.

Evidence weight: 0.41

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.25 × 0.40 = 0.10
M · momentum: 0.55 × 0.15 = 0.08
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
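
As a worked check, the headline evidence weight is the sum of the four contributions above, using the component scores and balanced-mode weights as displayed (contributions rounded to two decimals in the breakdown):

0.25 × 0.40 + 0.55 × 0.15 + 0.50 × 0.05 + 0.50 × 0.40
  = 0.1000 + 0.0825 + 0.0250 + 0.2000
  = 0.4075 ≈ 0.41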