A Machine Learning Toolkit for Selecting Studies and Topics in Systematic Literature Reviews

Andrea Simonetti et al.

Organizational Research Methods · 2025 · article · https://doi.org/10.1177/10944281251341571
AJG 4 · ABDC A*
Weight
0.41

Abstract

Scholars conduct systematic literature reviews to summarize knowledge and identify gaps in understanding. Machine learning can assist researchers in carrying out these studies. This paper introduces a machine learning toolkit that employs Network Analysis and Natural Language Processing methods to extract textual features and categorize academic papers. The toolkit comprises two algorithms that enable researchers to: (a) select relevant studies for a given theme; and (b) identify the main topics within that theme. We demonstrate the effectiveness of our toolkit by analyzing three streams of literature: cobranding, coopetition, and the psychological resilience of entrepreneurs. By comparing the results obtained through our toolkit with previously published literature reviews, we highlight its advantages in enhancing transparency, coherence, and comprehensiveness in literature reviews. We also provide quantitative evidence about the toolkit's efficacy in addressing the challenges inherent in conducting a literature review, as compared with state-of-the-art Natural Language Processing methods. Finally, we discuss the critical role of researchers in implementing and overseeing a literature review aided by our toolkit.

2 citations


Cite this paper

https://doi.org/10.1177/10944281251341571

Or copy a formatted citation

@article{simonetti2025,
  title        = {{A Machine Learning Toolkit for Selecting Studies and Topics in Systematic Literature Reviews}},
  author       = {Simonetti, Andrea and others},
  journal      = {Organizational Research Methods},
  year         = {2025},
  doi          = {10.1177/10944281251341571},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.41

Balanced mode weights: F 0.40 / M 0.15 / V 0.05 / R 0.40

Component              Score    Weight    Contribution
F · citation impact    0.25   × 0.40   =  0.10
M · momentum           0.55   × 0.15   =  0.08
V · venue signal       0.50   × 0.05   =  0.03
R · text relevance †   0.50   × 0.40   =  0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
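The evidence weight above appears to be the weighted sum of the four component scores, rounded to two decimals. A minimal sketch of that arithmetic, assuming exactly this formula (the component names and weights are taken from the table; the aggregation rule itself is an assumption):

```python
# Assumed computation of the evidence weight: a weighted sum of the four
# component scores (F, M, V, R), rounded to two decimal places.
scores  = {"F": 0.25, "M": 0.55, "V": 0.50, "R": 0.50}   # per-component scores
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}   # balanced-mode weights

# Per-component contributions, e.g. F: 0.25 * 0.40 = 0.10
contributions = {k: scores[k] * weights[k] for k in scores}

# 0.10 + 0.0825 + 0.025 + 0.20 = 0.4075, which rounds to 0.41
evidence_weight = round(sum(contributions.values()), 2)
print(evidence_weight)  # 0.41
```

Note that the per-component lines in the table are themselves rounded (M is 0.0825, shown as 0.08), so the displayed contributions sum to 0.41 only after rounding the exact total.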