On the Dangers of Large‐Language Model Mediated Learning for Human Capital

Dirk Lindebaum et al.

Human Resource Management Journal (UK) · 2026 · Article
https://doi.org/10.1111/1748-8583.70036
AJG 4* · ABDC A*
Weight
0.50

Abstract

Against the dominant view in HRM concerning the value‐creating use of large language models (LLMs) in relation to Human Capital, our provocation asks whether LLMs will enhance or compromise Human Capital at work in the long run. We feel compelled to ask this question because Human Capital represents employees' accumulated learning experiences, which provide the knowledge and skills needed to perform effectively at work. However, knowledge is a multifaceted rather than monolithic phenomenon, requiring a more differentiated treatment when considering the use of LLMs at work, their effects on different types of knowledge and, eventually, the formation of Human Capital. We mobilise digitally mediated learning—where synthetic inputs replace first‐hand experience—to theorise mechanisms for explaining how LLMs (as one Gen‐AI application producing synthetic content) shape different types of knowledge and the formation of Human Capital. We identify two mechanisms, namely, (i) multiple degrees of abstraction from the concrete real world to the digital world and (ii) the conflation of ‘word form’ and ‘meaning’ in outputs of LLMs. We consider the theoretical and practical ramifications of our provocation for the development of Human Capital in the age of LLMs.


Cite this paper

https://doi.org/10.1111/1748-8583.70036

Or copy a formatted citation

@article{lindebaum2026,
  title        = {{On the Dangers of Large‐Language Model Mediated Learning for Human Capital}},
  author       = {Lindebaum, Dirk and others},
  journal      = {Human Resource Management Journal (UK)},
  year         = {2026},
  doi          = {10.1111/1748-8583.70036},
}




Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.20
M · momentum: 0.50 × 0.15 = 0.07
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
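The evidence weight above appears to be a simple weighted sum of the four component scores. A minimal sketch, assuming that formula (the function name and signature are illustrative, not Arbiter's actual API):

```python
def evidence_weight(f, m, v, r, weights=(0.40, 0.15, 0.05, 0.40)):
    """Combine citation impact (F), momentum (M), venue signal (V),
    and text relevance (R) into a single evidence weight, using the
    'Balanced mode' weights shown in the breakdown above."""
    wf, wm, wv, wr = weights
    return wf * f + wm * m + wv * v + wr * r

# On this detail page all four components sit at 0.50,
# so the weighted sum reproduces the displayed weight of 0.50:
print(round(evidence_weight(0.50, 0.50, 0.50, 0.50), 2))  # 0.5
```

Note that the per-component contributions shown (0.20 + 0.07 + 0.03 + 0.20) are rounded to two decimals; the unrounded sum is exactly 0.50 because the weights sum to 1.00.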