Service recovery by AI or human agents: Do failure and strategy context matter?
Andreas Fürst et al.
Abstract
**Purpose:** Companies must understand consumer responses to AI-provided services to ensure their effectiveness. This is especially important for critical moments of truth, such as service recovery situations. In this article, we examine consumer preferences for AI versus human service recovery depending on the recovery situation: (1) locus of failure (customer vs company failure); (2) type of symbolic recovery (explanation vs apology); and (3) type of utilitarian recovery (monetary vs functional redress).

**Design/methodology/approach:** Three experimental studies were conducted using video-based scenarios that simulated customer chat conversations in financial services and healthcare contexts.

**Findings:** Customers favor AI over human agents in cases of customer failures, while they prefer human agents in cases of company failures. Moreover, customers favor AI agents when given an explanation of the failure or monetary redress, whereas they prefer human agents when receiving an apology for the failure or functional redress. Differences in the perceived trustworthiness of AI versus human agents in these contexts, encompassing perceived competence, benevolence, and integrity, constitute the underlying psychological mechanism that explains these findings.

**Originality/value:** This article reveals novel insights into the effectiveness of AI versus human service recovery as a function of service failure and strategy context. Our findings demonstrate the need to align the type of service recovery agent with the specific type of service failure and recovery strategy to maximize customer satisfaction and, in turn, loyalty.
16 citations
Evidence weight
Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40
| Component | Score × weight | Contribution |
| --- | --- | --- |
| F · citation impact | 0.64 × 0.40 | 0.26 |
| M · momentum | 0.90 × 0.15 | 0.14 |
| V · venue signal | 0.50 × 0.05 | 0.03 |
| R · text relevance † | 0.50 × 0.40 | 0.20 |
† Text relevance is estimated at 0.50 on this detail page; for your query's actual relevance score, open this paper from a search result.