Agentic AI in services: orchestrating human–machine synergy for service excellence
Ankur Srivastava & Sanjay Fuloria
Abstract
Agentic artificial intelligence (AI) refers to systems capable of perceiving their environment, making contextually relevant decisions and acting autonomously to pursue defined or evolving objectives. While the notion of AI as an assemblage of rational agents is well established, as reflected in foundational works that conceptualize AI as the study of entities that map percepts to actions to maximize long-term performance (Russell and Norvig, 2016), its contemporary evolution lies in the fusion of classical agent architectures with modern foundation models. This convergence enables AI systems not only to interpret complex environments but also to formulate subgoals, design multistep plans, access external tools and application programming interfaces and collaborate with other autonomous agents with limited human supervision (Wang et al., 2025a).

The emergence of agentic AI represents a profound epistemological and managerial shift in the domain of service management. Traditionally, services have been conceptualized as human-centric, relational and experiential in nature. In contrast, contemporary service ecosystems are increasingly populated by autonomous digital entities capable of sensing, reasoning, learning and acting with purpose. Earlier generations of information systems functioned primarily as deterministic tools, executing human-commanded logic. By comparison, agentic AI embodies features of agency, including context awareness, goal formation, deliberation among alternatives and ethically bounded action (Baird and Maruping, 2021). This shift necessitates a departure from longstanding managerial assumptions related to control, intentionality and accountability, redefining the role of managers from direct supervisors of technology to orchestrators of human–machine collaboration.

More fundamentally, the rise of agentic AI challenges the anthropocentric premise that intentional action is uniquely human.
Algorithmic agents can now learn from experience, generalize across contexts and contribute actively to value co-creation. Agency thus becomes a distributed property, shared between human and machine intelligences coexisting within sociotechnical networks. Consequently, the meaning of “service” evolves: from a transactional exchange to an interactive process, from efficiency-driven delivery to ethical engagement and from automation to coagency.

Within this broader transformation, customer service, particularly service recovery, has become a critical site where the influence of technology and agentic AI is evident. Service recovery refers to the strategies and behaviors used by service providers to address failures or disruptions in customer experience. In such contexts, customer satisfaction is shaped not only by the resolution of the problem but also by the perceived empathy, fairness and responsiveness of the service agent. Recent scholarship by Lajante and Dohm (2024) suggests that while robotic service agents deliver consistent, standardized and efficient recovery responses, human agents remain superior in their ability to convey authenticity, empathy and emotional understanding. These qualities become especially salient in emotionally charged service failures, where procedural accuracy alone fails to satisfy customers’ relational expectations. In such settings, the emotional labor performed by human agents creates a more profound sense of trust and relational repair.

Mandal et al. (2025) extended this argument by exploring the responses of Generation Z consumers to service robots. Their research demonstrates that younger customers are receptive to service robots when efficiency, speed and convenience are prioritized; however, they continue to prefer human agents in complex or emotionally sensitive service failures.
This suggests that a blended or hybrid service model, where human judgment and machine efficiency are deployed according to situational demands, could produce optimal outcomes. In parallel, the anthropomorphism of chatbots has emerged as a significant factor in determining customer acceptance and satisfaction. Sfar et al. (2025) demonstrated that anthropomorphic design elements, such as human-like language styles, emotional expressions and conversational tone, enhance perceived trustworthiness, social presence and customer engagement. Nonetheless, they cautioned that excessive or poorly executed anthropomorphism can generate feelings of discomfort or perceived manipulation, resulting in a decline in user trust. Therefore, achieving a suitable balance between functional competence and emotional realism is crucial (Bai and Liu, 2025; Jha et al., 2026).

These technological shifts unfold within the broader framework of Quality 4.0 – the digital transformation of quality management systems. Malikah et al. (2025) identified multiple pathways through which Quality 4.0 is adopted in organizations: technology-centric, data-driven and human-oriented models. They argue that the future of service-oriented quality lies in synergistically integrating AI, customer analytics and human expertise to create adaptive and resilient service ecosystems. Collectively, these developments affirm that while AI and automation significantly enhance scalability, consistency and personalization in customer service, the human capacity for empathy, ethical judgment and nuanced communication remains indispensable in shaping meaningful customer experiences.

The transition from automation to autonomous agency is not confined to customer-facing functions; it is unfolding across diverse industries, signaling a broader paradigmatic reorientation.
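The blended service model discussed above, deploying human judgment or machine efficiency according to situational demands, can be sketched as a simple triage rule. This is a minimal illustrative sketch, not a method from the cited studies: the `RecoveryCase` fields and the two thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class RecoveryCase:
    """A service-failure case awaiting recovery handling (illustrative)."""
    description: str
    emotional_intensity: float  # 0.0 (neutral) .. 1.0 (highly charged), e.g. from sentiment analysis
    complexity: float           # 0.0 (routine) .. 1.0 (novel or multi-issue)

def route_case(case: RecoveryCase,
               emotion_threshold: float = 0.6,
               complexity_threshold: float = 0.7) -> str:
    """Route emotionally charged or complex failures to human agents;
    let the automated agent handle routine, low-stakes recovery."""
    if case.emotional_intensity >= emotion_threshold:
        return "human"  # empathy and relational repair take priority
    if case.complexity >= complexity_threshold:
        return "human"  # contextual judgment required
    return "bot"        # speed and consistency are sufficient

# Routine refund request goes to the automated agent;
# an emotionally charged complaint goes to a human.
print(route_case(RecoveryCase("late delivery refund", 0.2, 0.1)))  # bot
print(route_case(RecoveryCase("ruined wedding cake", 0.9, 0.5)))   # human
```

The thresholds would in practice be tuned empirically; the point of the sketch is only that the routing decision is explicit and auditable rather than implicit in a single monolithic bot.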
Evidence from sectors such as digital finance, manufacturing and retail demonstrates that agency, once considered an exclusively human capability, is now increasingly embedded within technological systems. In the domain of digital finance, algorithmic advisory systems have evolved beyond merely recommending financial options. Contemporary AI-powered financial advisors continuously monitor user behavior, learn from interaction histories and provide personalized guidance that actively shapes decision-making trajectories (Chang et al., 2026). Their role has shifted from informational support to participatory influence, thereby redefining how financial value is co-created between institutions and consumers. A similar transition is evident in the manufacturing sector. Intelligent production systems, empowered by machine learning, computer vision and Industrial Internet of Things infrastructures, autonomously coordinate multistep workflows. These systems dynamically adjust the balance between machine decision-making and human supervision in response to contextual uncertainty (Qin et al., 2025; Qiu et al., 2025). They do not simply execute predefined rules; instead, they interpret variable conditions, negotiate resource allocation and escalate anomalies for human intervention only when necessary.

Retail ecosystems provide another compelling illustration. Generative AI, emotion-aware virtual assistants and predictive analytics allow retailers to anticipate consumer intent with increasing accuracy (Mangiò et al., 2025). These systems design hyperpersonalized service pathways, modulating recommendations, pricing and engagement strategies in real time. Such technologies blur the boundary between service provision and co-creation, as consumers unconsciously collaborate with algorithms in shaping the retail experience.
Taken together, these developments underscore a pivotal insight: agency has become a distributed phenomenon operating across human, algorithmic and organizational layers of service ecosystems (Bartelheimer et al., 2025). This redistribution transforms the strategic role of technology. Whereas earlier service architectures pursued efficiency primarily through standardization and tight control mechanisms, agentic AI thrives on adaptability, probabilistic reasoning and contextual learning.

Consequently, managerial orientation must also change. The task is no longer to eliminate variability through rigid standard operating procedures. Instead, managers must architect conditions under which human creativity and machine intelligence co-evolve productively. Scuotto et al. (2025) described this as the “empowerment of AI,” a deliberate transfer of selective decision rights and cognitive tasks to autonomous systems capable of acting responsibly within predefined ethical and operational boundaries. However, delegating agency to AI is not an ethically neutral act. As Novelli et al. (2023) cautioned, once systems begin to act autonomously, the consequences of their decisions can no longer be divorced from questions of fairness, transparency and accountability. Thus, performance and ethics must be managed simultaneously, not sequentially.

In this sense, agentic AI represents not just a technological upgrade but a rethinking of managerial logic. Authority shifts from centralized control toward distributed cognition, where humans and machines negotiate meaning, responsibility and action. According to Baird and Maruping’s (2021) model of reciprocal delegation, humans assign decision tasks to AI, and AI systems, in turn, refer interpretive dilemmas back to humans when contextual judgment is required.
Managerial work, therefore, becomes less about instruction and more about orchestration, interpretation and ethical stewardship.

Because services are inherently relational, the socioemotional dimension is pivotal. Trust, empathy and presence, once seen as incompatible with automation, are increasingly simulated and, in bounded ways, enhanced through advances in natural language processing, affective computing and reinforcement learning (Pei et al., 2024; Wang et al., 2025b). Service agents can recognize emotions, adjust tone and pacing and emulate empathetic responses, blurring the boundaries between authentic emotional labor and synthetic compassion. As Hur and Shin (2025) demonstrated, robot-mediated encounters also alter stress, risk perception and helping behavior, which necessitates updated service scripts and safeguards to maintain satisfaction and well-being.

Greater emotional fluency, however, heightens ethical responsibilities. When an AI appears to understand or care, the obligation to disclose its nature and limits becomes a moral requirement, not just a regulatory checkbox. Transparency underpins trust. The risks of moral abdication are not theoretical: in a virtual replication of Milgram’s obedience study, participants deferred judgment when authority was externalized (González-Franco et al., 2018; Zhang et al., 2025).

Given their relational intensity and ethical sensitivity, service organizations are an ideal laboratory for this transition. Services are lived rather than merely delivered: they emerge in conversation, mutual give-and-take and emotionally charged encounters, making plain how human–machine collaboration shapes value, meaning and social life. Absent participatory design and clear disclosure, AI-mediated workplaces risk reinscribing domination through opaque outputs that are difficult to contest (Young et al., 2021).
When stewarded with care, by contrast, agentic systems can widen inclusion, catalyze creativity and affirm dignity at work, aligning technological advancement with human flourishing.

This Viewpoint adopts a deliberately cross-disciplinary stance, drawing on service science, information systems, organizational psychology and ethics, to clarify how agentic AI is remaking managerial practice. It advances three connected claims. First, agency in services is inherently relational and distributed, worked out among intelligent systems and social contexts rather than held exclusively by humans. Second, trust and transparency have become the bedrock of durable service relationships for customers and employees who increasingly interact through algorithmically mediated touchpoints. Third, ethical governance and empathetic design must anchor managerial practice to avoid dehumanization and to protect legitimacy. The aim is not simply to catalog technology but to interpret its human consequences. Properly understood, agentic AI is less an instrument of automation than a catalyst for transformation, reshaping the moral, emotional and cognitive foundations of service systems. The managerial challenge is to orchestrate this transformation deliberately, so that human judgment and machine intelligence can converge toward service excellence that is ethical, adaptive and inclusive.

The emergence of agentic AI has transformed service systems into distributed cognitive ecosystems in which humans and algorithms collaboratively share perception, reasoning, judgment and execution. This shift demands a reconceptualization of agency, not as an attribute possessed solely by individuals or machines but as a relational process negotiated between them. Baird and Maruping (2021) conceptualized this through the lens of reciprocal delegation. In practice, humans establish objectives, constraints and ethical boundaries, while AI systems manage complexity, execute tasks and offer adaptive solutions.
Crucially, AI systems do not retain unilateral control; they “delegate back” by surfacing anomalies, uncertainties or contextual ambiguities that require human interpretation. This recursive flow of interpretive work makes agency a dynamic and interactive phenomenon rather than a unidirectional transfer of control.

Bartelheimer et al. (2025) extended this idea to describe hybrid intelligence ecosystems, where human intuition and algorithmic precision coalesce to produce adaptive service value. In such contexts, agency is relational and co-created. It flows across a network of actors, including employees, customers, managers and machines, rather than being hierarchically imposed.

For such ecosystems to function effectively, trust becomes the foundational currency. As AI systems increasingly assume advisory or decision-making roles, trust decisions shift from human-to-human interactions toward human-to-algorithm interactions (Chang et al., 2026). Trust is shaped by perceptions of competence (the accuracy and reliability of outputs), benevolence (alignment with user interests) and integrity (the ethical use of data and transparency of reasoning). Hur and Shin (2025) noted that users’ trust in service robots varies according to perceived risk, emotional safety and cultural context. Similarly, Safarov (2021) found that digital trust among older migrants is mediated by emotional security, familiarity and value alignment rather than technical utility alone (Casino, 2025).

Explainability is essential for fostering trust. Transparent AI systems that provide comprehensible reasoning restore perceptions of fairness, legitimacy and user control (Liao and Vaughan, 2024). However, explainability alone is insufficient. Emerging research on algorithmic empathy suggests that emotionally attuned responses, when contextually appropriate and clearly bounded, strengthen engagement and trust. In this sense, empathy becomes both a design principle and a moral commitment.
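The reciprocal-delegation pattern described above, in which humans delegate a task to an agent and the agent “delegates back” interpretive dilemmas, can be sketched as a confidence-gated escalation loop. All names here (`reciprocal_delegation`, the stub agent and reviewer, the 0.8 confidence floor) are illustrative assumptions, not part of Baird and Maruping’s (2021) model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float  # agent's self-assessed confidence, 0..1
    rationale: str     # reasoning trace, supporting transparency

def reciprocal_delegation(task: str,
                          agent: Callable[[str], Decision],
                          human_review: Callable[[str, Decision], str],
                          confidence_floor: float = 0.8) -> str:
    """Humans delegate the task to the agent; the agent 'delegates back'
    by escalating low-confidence (ambiguous) decisions for human judgment."""
    decision = agent(task)
    if decision.confidence < confidence_floor:
        # Delegation-back: interpretive dilemma referred to the human
        return human_review(task, decision)
    return decision.action

# Purely illustrative stubs for the agent and the human reviewer.
def stub_agent(task: str) -> Decision:
    if "ambiguous" in task:
        return Decision("offer voucher", 0.4, "conflicting signals in customer history")
    return Decision("issue refund", 0.95, "refund policy rule matched")

def stub_reviewer(task: str, d: Decision) -> str:
    return f"human override after reviewing: {d.rationale}"

print(reciprocal_delegation("standard refund", stub_agent, stub_reviewer))
print(reciprocal_delegation("ambiguous complaint", stub_agent, stub_reviewer))
```

The design choice worth noting is that escalation is the agent’s default under uncertainty, so human judgment is consumed only where contextual interpretation is actually required.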
As authority shifts from human actors to AI systems, governance must ensure that accountability is not diluted in the process. Humans have a well-documented tendency to defer moral responsibility to external authorities (González-Franco et al., 2018). When such authority is embedded in AI, there is a risk that responsibility could dissipate across human–machine interfaces.

Thus, ethical governance requires structural accountability (a clear designation of who remains ultimately responsible for AI-enabled outcomes), procedural transparency (visibility into how AI systems reason, update and act) and normative alignment (assurance that AI behavior reflects organizational values and broader ethical standards) (Ye et al., 2021). When embedded as practice rather than symbolic compliance, governance becomes a strategic asset rather than a constraint.

Agentic AI also reshapes organizational creativity. Evidence from gamified learning environments suggests that striking a balance between challenge, feedback and autonomy can stimulate ideation (Karthik and Sheik Manzoor, 2020). Similarly, human–AI collaboration, when built on iterative feedback loops, enhances problem-solving and innovation resilience (Verganti et al., 2020). Still, organizational narratives surrounding AI remain unsettled, oscillating between visions of control, displacement, empowerment and collaboration. Mangiò et al. (2025) emphasized the role of reflexive negotiation, where leaders frame AI not as a threat or inevitability, but as a co-evolving partner that invites critical questioning, learning and adaptation.

The integration of agentic AI within service ecosystems fundamentally reconfigures managerial roles, organizational structures and leadership imperatives. As decision-making authority becomes distributed between human and machine actors, leadership shifts from enforcing compliance to cultivating coherence across a network of interacting intelligences.
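The three governance requirements named above (structural accountability, procedural transparency and normative alignment) could be operationalized as an audit record attached to every AI-enabled decision. This is a hypothetical sketch under stated assumptions: the `DecisionRecord` fields and the `is_governed` rule are illustrative, not an implementation from the cited work.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit entry pairing each AI-enabled outcome with a named accountable
    owner (structural accountability), a visible reasoning trace (procedural
    transparency) and explicit value checks (normative alignment)."""
    decision: str
    accountable_owner: str                      # who remains ultimately responsible
    reasoning_trace: list = field(default_factory=list)   # how the system reasoned
    values_checked: dict = field(default_factory=dict)    # value name -> passed?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def is_governed(self) -> bool:
        """A decision is releasable only if it has a named owner, a visible
        rationale, and every declared value check has passed."""
        return (bool(self.accountable_owner)
                and bool(self.reasoning_trace)
                and bool(self.values_checked)
                and all(self.values_checked.values()))

record = DecisionRecord(
    decision="deny expedited refund",
    accountable_owner="service-ops-lead",
    reasoning_trace=["refund policy rule matched", "fraud score below threshold"],
    values_checked={"fairness": True, "non-discrimination": True},
)
print(record.is_governed())  # True
```

The point of the sketch is that governance becomes checkable in-line: a decision lacking an owner, a rationale or a passing values check is blocked before release rather than audited after the fact.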
In traditional service management, leadership involved directing human labor through predefined processes. Agentic AI upends this logic. When AI systems participate in decision-making, managers are no longer primarily supervisors of technology; instead, they become orchestrators of hybrid intelligence, responsible for aligning human creativity with machine capability (Baird and Maruping, 2021). Managers must determine when and where AI autonomy is appropriate, where human judgment must prevail, and how to ensure that technological efficiency does not crowd out ethical or relational value. Thus, the role of the manager shifts from issuing instruction to orchestration, interpretation and ethical stewardship.

As AI systems deliver increasing accuracy and speed, organizations risk automation bias, the tendency to defer uncritically to AI outputs. Leaders must therefore cultivate ethical vigilance: the capacity to question algorithmic recommendations and align them with human values and purpose (González-Franco et al., 2018; Bai and Liu, 2025). Trust, both internal and external, becomes a strategic asset in AI-mediated service ecosystems. Building and sustaining this trust requires transparency that allows stakeholders to understand how AI systems reason, together with clear communication about the real or simulated nature of machine emotional intelligence. Trust is thus not a property of technology but an outcome of empathy, honesty and governance.

The rise of agentic AI also has profound implications for employees. When AI is imposed without consultation, it risks being perceived as an instrument of surveillance or control, eroding trust; participatory design and transparent communication enhance acceptance and well-being (Young et al., 2021). Leadership in this era is therefore defined by involving employees in design decisions, protecting autonomy and dignity at work, and preserving space for human judgment, especially in ambiguous or emotionally complex encounters. Beyond organizational boundaries, managers increasingly assume roles as stewards of sociotechnical ecosystems, and agentic AI can enhance risk governance through data and predictive analytics (Ye et al., 2021).

Designing for inclusion, especially for individuals with limited digital access or literacy, remains essential for social legitimacy and moral responsibility. As service systems become more autonomous, the human role shifts from execution to interpretation and oversight: managers do not merely monitor AI; they engage critically with its outputs and model responsible use. This requires ethical, emotional and cultural intelligence, and the capacity to generate meaning under uncertainty. Agentic AI is thus both a technological and a moral phenomenon. It invites organizations to build service systems in which humans and machines act not as substitutes but as partners in value creation. Trust, empathy, transparency and governance are not constraints on innovation; they are the foundations of service excellence. The service organization of the future will be neither solely human nor exclusively machine: it will be a collaboration between the two, an ecosystem of shared intelligence guided by empathy, accountability and purpose.

Agentic AI also opens a significant research agenda for scholars in service science, information systems, organizational behavior and ethics. While conceptual foundations for human–AI collaboration are evolving (Baird and Maruping, 2021), our understanding of how agency is negotiated and distributed remains incomplete. Future research must examine the mechanisms through which decision rights, accountability and interpretive authority are distributed across customers, employees, algorithms and managerial structures. Longitudinal designs would be particularly valuable in exploring how relationships of trust develop, especially as AI systems learn and participate more actively in service delivery.

Trust in agentic AI is not solely a rational evaluation of performance; it is also socially and emotionally constructed. While competence and integrity are established antecedents of trust in AI (Chang et al., 2026), future research should examine how cultural, emotional and situational context shape human responses to machine agency. Building on Safarov (2021), comparative studies across user groups and cultures could clarify how people assign epistemic and relational trust to AI. A related stream concerns algorithmic empathy: the extent to which AI can convey or approximate emotional understanding. Early evidence suggests that emotionally attuned AI can strengthen user engagement when used judiciously. However, questions remain: can AI empathy be experienced as authentic, or only as simulation, and under what conditions does it enhance trust rather than create discomfort or perceived manipulation? Ethical safeguards against emotional manipulation are needed, especially given the human tendency, emphasized by González-Franco et al. (2018) and Zhang et al. (2025), to defer moral judgment to authoritative systems.

When authority is algorithmic rather than human, how should responsibility be assigned? Future research should develop models of shared accountability that strike a balance between innovation and control, and work at the intersection of ethics and organizational design is crucial for regulatory frameworks governing hybrid intelligence. Agentic AI also reshapes the work experience. Some research suggests that task autonomy and AI support can enhance employee well-being and agency (Qin et al., 2025), while other studies warn of opaque control, intensified monitoring and dehumanization (Young et al., 2021). To address this tension, future research should use ethnographic and participatory methods to understand how employees negotiate meaning, dignity and creativity in AI-mediated work. Studying these complex dynamics calls for a plurality of research approaches, including interpretive research on transparency and organizational narratives (Mangiò et al., 2025) and experimental work on trust and behavior within human–machine teams (Ye et al., 2021).

Finally, scholars should examine the societal implications of agentic AI, particularly in relation to equity and inclusion. While AI has improved efficiency and risk governance through data analytics (Ye et al., 2021), it has also raised ethical dilemmas, and the deployment of autonomous AI systems carries both profound promise and moral risk. Comparative studies across industries and institutions can clarify how organizations govern AI responsibly. Ultimately, scholarship must move beyond a focus on technological performance toward a deeper account of human consequences: how agency is negotiated, ethical responsibility is assigned and human dignity is preserved in hybrid service ecosystems. The challenge is not merely deploying AI, but orchestrating the human–machine relationship in a way that advances both innovation and human flourishing.