The Human Superiority Effect in Advice Taking: A Multimethod Exploration and Implications for Policy Makers and Governmental Organizations
Manhui Jin et al.
Abstract
Many policy makers and governmental organizations have started using generative artificial intelligence (AI) to provide advice to individuals. However, prior research paints an unclear picture of individuals’ receptiveness to the outputs generated by AI, relative to those from human advisers. While some studies show that individuals prefer outputs generated by humans over AI, others present an opposite pattern. To reconcile these mixed findings, this research differentiates two perspectives where relative preferences have been widely examined: (1) a bystander perspective, where consumers evaluate the content generated by human versus AI agents, and (2) a decision-maker perspective, where consumers accept recommendations made by the agents. The authors find that although there is a general trend of preferring human advice over AI advice in individual decision-making—exhibiting a “human superiority effect”—there is no significant difference between human and AI content preferences during bystander evaluations. Additionally, psychological distance constitutes an important contextual moderator explaining the relative preference for human versus AI recommendations. Specifically, when decision-making circumstances are perceived to be psychologically distant (e.g., low personal relevance), the human superiority effect is attenuated. Theoretical contributions are discussed, along with practical implications for businesses and governmental organizations.