In artificial intelligence (AI) we (dis)trust? Navigating institutional pressures for automation and augmentation in the implementation of AI in organizations

Dimitris Giannitsas et al.

Information and Organization · 2026 · article
https://doi.org/10.1016/j.infoandorg.2026.100609
AJG 3 · ABDC A*
Weight
0.50

Abstract

AI introduces competing demands in organizations, creating pressures to balance efficiency and standardization with contextual responsiveness and ethical judgment. Tensions between these competing demands become particularly salient when some areas of organizations push for automation while others push for augmentation, two distinct paradigms of AI implementation. Drawing on a nested case study of a European airline, we follow three AI implementations to explore how higher-order properties of the institutional environment shape how actors configure trust and distrust in AI systems in response to two coexisting institutional logics: instrumental–analytic and contextual–normative. We show how these two logics stimulate different trust–distrust configurations, which in turn guide how AI is implemented and adopted within organizations. We identify two reconciliation practices that help organizational actors manage the inherent tensions between these competing institutional pressures: mindful evaluation and proactive safeguarding. The research reveals how AI implementation and adoption reflect conflicts between dominant institutional logics, and it contributes a novel perspective on the role of institutional logics and trust in AI implementation projects.

Highlights

• AI brings competing demands to organizations for either efficiency and standardization or contextual responsiveness and ethical judgment.
• Some areas of organizations push for automation while others push for augmentation.
• Actors configure trust and distrust in AI systems in response to two coexisting institutional logics: instrumental–analytic and contextual–normative.
• These two logics stimulate different trust–distrust configurations, which then guide how AI is implemented and adopted within organizations.
• Two reconciliation practices help organizational actors manage inherent tensions between these competing institutional pressures: mindful evaluation and proactive safeguarding.

Cite this paper

https://doi.org/10.1016/j.infoandorg.2026.100609

Or copy a formatted citation

@article{giannitsas2026,
  title        = {{In artificial intelligence (AI) we (dis)trust? Navigating institutional pressures for automation and augmentation in the implementation of AI in organizations}},
  author       = {Giannitsas, Dimitris and others},
  journal      = {Information and Organization},
  year         = {2026},
  doi          = {10.1016/j.infoandorg.2026.100609},
}

Paste directly into BibTeX, Zotero, or your reference manager.


Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact     0.50 × 0.40 = 0.20
M · momentum            0.50 × 0.15 ≈ 0.07
V · venue signal        0.50 × 0.05 ≈ 0.03
R · text relevance †    0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
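For reference, the overall evidence weight is just the weighted sum of the four component scores under the "Balanced mode" weights. Below is a minimal Python sketch of that computation; the dictionary keys and function name are illustrative assumptions, not Arbiter's actual implementation:

# Minimal sketch (assumed, not Arbiter's actual code) of the evidence-weight
# formula shown above: weight = wF·F + wM·M + wV·V + wR·R.

BALANCED_MODE = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # weights sum to 1.0

def evidence_weight(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores in [0, 1] into a single evidence weight."""
    return sum(weights[k] * scores[k] for k in weights)

# This paper scores 0.50 on all four dimensions on the detail page.
paper = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
print(round(evidence_weight(paper, BALANCED_MODE), 2))  # -> 0.5

Note that the per-component products for M and V above are displayed rounded to two decimals (0.075 and 0.025 exactly), which is why they appear as 0.07 and 0.03; the exact parts still sum to the displayed total of 0.50.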