Insurance chatbot adoption and continued usage in emerging markets: impact of human-agent access
Shweta Pandey et al.
Abstract
Purpose: Insurance companies are progressively adopting artificial intelligence (AI)-powered chatbots to boost operational efficiency and customer interaction. Despite these advances, adoption levels in developing economies remain modest. In the Indian context, where rapid digitalization and growing exposure to chatbot-mediated insurance services are evident, systematic empirical evidence on consumer adoption and continued usage, especially related to the availability of human-agent fallback mechanisms, remains scarce. This study addresses this gap by examining the key determinants influencing both initial adoption and sustained use of insurance chatbots in India, thereby contributing to a deeper understanding of digital transformation within high-growth, emerging-market settings.

Design/methodology/approach: An extended technology acceptance model (TAM) framework incorporating trust, contextual factors (perceived risk, facilitating conditions) and the moderating role of human-agent access is tested using survey data from 245 experienced insurance-chatbot users in India and analyzed through partial least squares structural equation modeling (PLS-SEM).

Findings: Perceived ease of use and usefulness significantly predict adoption intentions. Consumers value convenience and multi-channel accessibility, which support ease of use. Chatbot trust plays a significant role in both initial adoption and continued usage. Although perceived risk negatively affects trust, it does not deter use, indicating a calculated, risk-tolerant mindset.

Originality/value: This study offers one of the first empirical examinations of how design simplicity, multi-channel accessibility, perceived risk and trust mechanisms jointly shape chatbot adoption and continued use in emerging-market insurance contexts. It challenges prevailing service design assumptions by demonstrating that access to human agents can weaken, rather than reinforce, trust-based engagement with AI-enabled chatbots in credence-driven contexts.