AI support in self‐regulated learning: A decade of technological evolution and meta‐analysis

Jun Xu et al.

British Journal of Educational Technology · 2026 · https://doi.org/10.1111/bjet.70058 · article
AJG 2 · ABDC A
Weight
0.50

Abstract

This meta-analysis systematically examines 35 empirical studies (2013–2025) investigating artificial intelligence applications within Zimmerman's cyclical model of self-regulated learning (SRL). Three principal discoveries emerge: (1) Technological progression has evolved through three co-existing paradigms (rule-based architectures, data-driven adaptive systems and generative AI ecosystems) that demonstrate increasingly sophisticated capabilities for human-AI collaboration. (2) While AI-supported SRL interventions yield a moderate overall effect size (g = 0.507), their impact is uneven; AI is significantly more effective during the task performance phase (g = 0.574) than in the preparatory forethought phase (g = 0.401). Notably, generative AI shows markedly superior efficacy across all phases (e.g. g = 0.709 for forethought, g = 0.938 for performance), though high heterogeneity suggests these effects are heavily contingent on specific instructional designs. (3) Moderator analysis identifies optimal contexts in secondary education, natural science disciplines, fully online settings and interventions of medium duration (2–10 weeks), while also revealing that effects are substantially larger when measured by behavioural traces than by self-reports. Critically, these findings highlight a persistent performance-competence divide, suggesting that AI's capacity to scaffold immediate task performance may outpace its current ability to cultivate durable, transferable self-regulatory competence. The study discusses the implications of this divide and proposes a research agenda focused on designing AI systems that foster genuine learner autonomy.

Practitioner notes

What is already known about this topic
- AI technologies have shown promise in supporting self-regulated learning (SRL) through personalized feedback and metacognitive scaffolding.
- Previous studies report inconsistent outcomes of AI interventions across SRL phases (forethought, performance, self-reflection), with limited exploration of phase-specific impacts.
- Existing meta-analyses often treat SRL as a unified construct rather than examining how different AI types support distinct self-regulatory processes.

What this paper adds
- Demonstrates moderate overall effectiveness of AI-supported SRL (g = 0.507) with differential impacts across phases: strongest during task performance (g = 0.574), weaker in forethought (g = 0.401) and self-reflection (g = 0.464).
- Maps three coexisting AI paradigms (rule-based, data-driven and generative AI), revealing that generative AI achieves superior outcomes across all SRL phases, though effectiveness depends on alignment between AI affordances and specific self-regulatory processes.
- Identifies optimal implementation contexts through moderator analysis: secondary education settings, natural science disciplines, fully online environments and medium-duration interventions (2–10 weeks) yield stronger effects.
- Articulates a critical performance-competence divide, substantiated by quantifying how AI's impact on observable behaviours (g = 0.751) is more than double its effect on learners' self-reported perceptions (g = 0.369).

Implications for practice and/or policy
- Integrate AI paradigms strategically rather than viewing them as replacements: combine rule-based systems' stable scaffolding with generative AI's adaptive dialogue to support the full SRL cycle.
- Design interventions of medium duration (2–10 weeks) to optimize skill acquisition while maintaining engagement, avoiding both novelty effects and scaffold dependency.
- Adapt AI implementation to disciplinary demands: structured procedural support works well in natural sciences, while social sciences require enhanced support for open-ended inquiry and critical analysis.
- Develop assessment frameworks that measure delayed, unsupported performance alongside immediate gains, to ensure AI fosters genuine self-regulatory competence rather than temporary performance enhancement.
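The abstract reports all effect sizes as Hedges' g, the bias-corrected standardized mean difference conventional in meta-analysis. As a reference point for reading those numbers, here is a minimal sketch of the standard formula (this is the textbook definition, not the paper's actual analysis pipeline; the illustrative inputs below are invented, not drawn from the 35 studies):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)
    between a treatment group (m1, s1, n1) and a control group."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled       # Cohen's d
    j = 1 - 3 / (4 * df - 1)       # small-sample correction factor
    return j * d

# Hypothetical groups: treatment mean 78 vs. control mean 72, SD 10, n = 30 each
print(round(hedges_g(78.0, 10.0, 30, 72.0, 10.0, 30), 3))  # → 0.592
```

By the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the paper's overall g = 0.507 is a medium effect, while the generative-AI performance-phase estimate of g = 0.938 is large.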


Cite this paper

https://doi.org/10.1111/bjet.70058

Or copy a formatted citation

@article{xu2026,
  title        = {{AI support in self-regulated learning: A decade of technological evolution and meta-analysis}},
  author       = {Xu, Jun and others},
  journal      = {British Journal of Educational Technology},
  year         = {2026},
  doi          = {10.1111/bjet.70058},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · weights: F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact:    0.50 × 0.40 = 0.200
M · momentum:           0.50 × 0.15 = 0.075
V · venue signal:       0.50 × 0.05 = 0.025
R · text relevance †:   0.50 × 0.40 = 0.200

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
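From the breakdown above, the evidence weight appears to be a simple weighted sum of the four component scores (the four products total 0.50, matching the displayed weight). A minimal sketch under that assumption; the function name and linear-sum model are mine, not a documented part of the scoring system:

```python
# Assumed "Balanced mode" weights from the breakdown above
BALANCED_WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores, weights=BALANCED_WEIGHTS):
    """Combine per-component scores (each in [0, 1]) into a single
    evidence weight via a weighted sum. Hypothetical reconstruction."""
    return sum(weights[k] * scores[k] for k in weights)

# Every component sits at 0.50 on this detail page:
scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
print(round(evidence_weight(scores), 3))  # → 0.5
```

Because the weights sum to 1.0, uniform component scores pass through unchanged, which is why four 0.50 inputs yield exactly 0.50 here.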