How do AI agents influence customers’ moral choices? A social identity perspective
Yaohua Wang et al.
Abstract
**Purpose** — This study draws on social identity theory to explore the impact of service agent type (human staff vs AI agent) on customers' moral choices.

**Design/methodology/approach** — This study conducts five experiments to validate the proposed hypotheses, involving a total of 900 Chinese participants.

**Findings** — The results suggest that in service settings, AI agents lead to more unethical customer behavior than human agents. Empathy and moral obligation play a serial mediating role in this process. When customers perceive lower power, AI agents (vs human staff) are more likely to lead customers to engage in unethical behavior; when customers perceive higher power, this effect is mitigated.

**Practical implications** — This work offers valuable insights into how to enhance customers' group identification with AI agents and elevate their ethical standards in service-oriented environments.

**Originality/value** — This study advances the explanatory framework of social identity theory in moral decision-making contexts, enriching the psychological mechanism through which service agents influence customers' moral choices.