From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models
Wolfgang Messner et al.
Abstract
Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human–technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. In this study, the authors explore the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE (Global Leadership and Organizational Behavior Effectiveness) project. The findings reveal that LLMs’ cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by economic competitiveness. It is crucial for all members of society to understand how LLMs function and to recognize their potential biases. If left unchecked, the “black-box” nature of AI could reinforce human biases, leading to the inadvertent creation and training of even more biased models.
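The comparison the abstract describes can be sketched in a few lines: average the model's answers to GLOBE-derived value questions into a profile over GLOBE's nine cultural dimensions, then rank countries by distance between that profile and each country's GLOBE value scores. The sketch below is illustrative only: the dimension labels follow the GLOBE project, but the numeric scores are invented placeholders, not data from the paper, and Euclidean distance is one plausible alignment measure among several.

```python
import numpy as np

# GLOBE's nine cultural dimensions (labels per the GLOBE project).
DIMENSIONS = [
    "performance_orientation", "assertiveness", "future_orientation",
    "humane_orientation", "institutional_collectivism", "in_group_collectivism",
    "gender_egalitarianism", "power_distance", "uncertainty_avoidance",
]

# Hypothetical per-dimension profile for the LLM (1-7 Likert means),
# obtained by averaging its answers to GLOBE-derived value questions.
# Placeholder numbers, not results from the study.
llm_profile = np.array([5.8, 4.1, 5.2, 4.5, 4.3, 4.0, 4.9, 3.1, 4.6])

# Hypothetical GLOBE country value scores on the same nine dimensions.
country_scores = {
    "United States": np.array([6.1, 4.3, 5.3, 5.5, 4.2, 5.8, 5.0, 2.9, 4.0]),
    "Germany":       np.array([6.0, 3.1, 4.9, 5.5, 4.8, 5.2, 4.9, 2.5, 3.3]),
    "India":         np.array([6.1, 4.8, 5.6, 5.3, 4.7, 5.3, 4.5, 2.6, 4.7]),
}

# Rank countries by Euclidean distance between the LLM's profile and each
# country's GLOBE profile; a smaller distance means closer cultural alignment.
ranking = sorted(
    country_scores.items(),
    key=lambda kv: float(np.linalg.norm(llm_profile - kv[1])),
)

for country, scores in ranking:
    dist = np.linalg.norm(llm_profile - scores)
    print(f"{country:15s} distance = {dist:.2f}")
```

Under this setup, the country printed first is the one whose GLOBE value profile the model's answers most closely resemble; repeating the procedure across models and prompt phrasings would indicate how stable that alignment is.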