Leveraging AI to Capture Textual and Visual Elements: Insights for HRM Research and Practice
Yin Liang et al.
Abstract
This paper advances Human Resource Management (HRM) scholarship by introducing an accessible method for analysing visual and textual social media content in combination. Although HRM studies increasingly mobilise social media data, most approaches remain text‐centric, overlooking HR‐relevant cues embedded in images that can inform micro‐, meso‐ and macro‐level interpretations. We propose a method that classifies latent features from images and texts by leveraging a Large Language Model, namely GPT‐4o‐mini. We illustrate the method with an example that demonstrates the promising performance of GPT‐4o‐mini. We highlight the conceptual potential of our method for theory development through multimodal data, enabling multi‐level analysis of HRM phenomena, and we discuss practical applications for HR practitioners in recruitment and selection, gauging employee engagement, and assessing organisational image, alongside limitations and considerations for responsible use.
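The classification step the abstract describes, sending a post's text and image together to GPT‐4o‐mini and asking for a label, can be sketched with the OpenAI Chat Completions API. The label set, prompt wording, helper names, and image URL below are illustrative assumptions, not the authors' actual coding protocol:

```python
# Minimal sketch: multimodal (text + image) classification with GPT-4o-mini.
# Assumption: the HR label set and prompt are hypothetical, not from the paper.
import os

LABELS = ["recruitment", "employee engagement", "organisational image", "other"]

def build_multimodal_message(post_text: str, image_url: str) -> list:
    """Build one OpenAI chat message combining a post's text and its image."""
    prompt = (
        "Classify this social media post into exactly one of these "
        f"HR-relevant categories: {', '.join(LABELS)}. "
        "Consider both the text and the image. Reply with the label only.\n\n"
        f"Post text: {post_text}"
    )
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

def classify_post(post_text: str, image_url: str) -> str:
    """Send one post to GPT-4o-mini and return the predicted label."""
    from openai import OpenAI  # requires the `openai` package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=build_multimodal_message(post_text, image_url),
        temperature=0,  # deterministic output aids reproducibility
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    print(classify_post(
        "Join our team! We value growth and flexibility.",
        "https://example.com/office-photo.jpg",  # hypothetical image URL
    ))
```

Setting `temperature=0` and requesting a single label from a closed set makes the model's output easier to validate against human coders, which matters for the kind of performance reporting the abstract mentions.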