How to Demonstrate Trustworthy Use of AI in Public Services: A Case Study

Natalie Smith et al.

Information Systems Journal · 2026 · article · https://doi.org/10.1111/isj.70025

AJG 4 · ABDC A*

Abstract

Government leaders across the globe are grappling with how to harness and integrate artificial intelligence (AI) to enhance public service delivery and efficiency. Yet a key challenge is how to build and maintain the trust of stakeholders. Trust is critical for the acceptance and sustained adoption of AI technologies, as well as for gaining the requisite funding, resourcing and authorisation to implement AI solutions. However, inherent features of AI—its autonomous capabilities, dynamic learning, and inscrutable operating logic—create challenges for trust, particularly in public services that are subject to high expectations of accountability, transparency, and fairness. We present an in‐depth case analysis of how an Australian government department was able to deploy a solution that was widely accepted, and identified as an exemplar of trustworthy AI use. We identify six trust‐supporting approaches: benevolent customer‐centricity, radical honesty, diverse input, rigorous development and testing, human discretion in decision‐making, and aligning the authorising environment. For each approach, we explain how and why it supports trust, and then contrast that approach with a prominent, but widely distrusted application in the Australian government. We conclude with implications for public sector leaders seeking to engender trust in their use of AI.


Cite this paper

https://doi.org/10.1111/isj.70025

Or copy a formatted citation

@article{natalie2026,
  title        = {{How to Demonstrate Trustworthy Use of AI in Public Services: A Case Study}},
  author       = {Smith, Natalie and others},
  journal      = {Information Systems Journal},
  year         = {2026},
  doi          = {10.1111/isj.70025},
}

Paste directly into BibTeX, Zotero, or your reference manager.


Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.20
M · momentum: 0.50 × 0.15 = 0.07
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
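The displayed weight follows from a simple weighted sum of the four component scores. A minimal sketch, assuming only the mode weights and per-component scores shown on this page (the function name and structure are illustrative, not Arbiter's actual implementation):

```python
# Balanced-mode weights, as shown in the breakdown above.
WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores: dict[str, float]) -> float:
    """Weighted sum of component scores, rounded to two decimals."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

# On this detail page all four components sit at the 0.50 placeholder,
# so the total equals 0.50 (0.20 + 0.075 + 0.025 + 0.20).
print(evidence_weight({"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}))
```

Note that the per-line contributions shown above (0.07 and 0.03) are rounded for display; the exact products are 0.075 and 0.025, which is why the lines still sum to the 0.50 total.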