On the Dangers of Large‐Language Model Mediated Learning for Human Capital
Dirk Lindebaum et al.
Abstract
Against the dominant view in HRM concerning the value‐creating use of large language models (LLMs) in relation to Human Capital, our provocation asks whether LLMs will enhance or compromise Human Capital at work in the long run. We feel compelled to ask this question because Human Capital represents employees' accumulated learning experiences, which provide the knowledge and skills needed to perform effectively at work. However, knowledge is a multifaceted rather than monolithic phenomenon, requiring a more differentiated treatment when considering the use of LLMs at work, their effects on different types of knowledge and, eventually, the formation of Human Capital. We mobilise digitally mediated learning—where synthetic inputs replace first‐hand experience—to theorise mechanisms for explaining how LLMs (as one Gen‐AI application producing synthetic content) shape different types of knowledge and the formation of Human Capital. We identify two mechanisms, namely (i) multiple degrees of abstraction from the concrete real world to the digital world, and (ii) the conflation of ‘word form’ and ‘meaning’ in the outputs of LLMs. We consider the theoretical and practical ramifications of our provocation for the development of Human Capital in the age of LLMs.