Trust the Explanation or my Expectation? Effects of Output Accuracy and Explanations on Expectation Violations and Trust in AI-Supported Decisions

Tim Hunsicker et al.

International Journal of Human-Computer Studies · 2026 · Article
https://doi.org/10.1016/j.ijhcs.2026.103775

AJG 2 · ABDC B
Weight
0.50

Highlights

• Inaccurate AI outputs led to expectation violations.
• Expectation violations mediated the effects of AI output accuracy on trust.
• Explanations did not moderate the link between accuracy and expectation violations.
• For inaccurate AI outputs, explanations led to more trusting behavior.

Abstract

Systems based on Artificial Intelligence (AI) increasingly support decision-making, but their outputs may be inaccurate. Prior research has suggested that explanations might help detect inaccuracies, aiding successful human-AI interaction. This study investigates how the accuracy of system outputs influences users’ trust, trusting behavior, and trustworthiness perceptions, the role of expectation violations in this process, and how explanations for the system outputs influence these effects. In an online study with a 2 (explanation vs. no explanation) × 2 (accurate vs. inaccurate outputs) between-within design, 218 participants evaluated six job applicants. They received CVs and algorithmic evaluations of applicants’ suitability. For three applicants, outputs were accurate; for the other three, outputs reflected a suitability 40% lower than the applicants’ true suitability. Half of the participants received explanations. Accurate outputs led to higher trustworthiness, trust, and trusting behavior than inaccurate outputs. Expectation violation fully mediated how accuracy affected trust and trustworthiness, and partially mediated how accuracy influenced trusting behavior. Moreover, there was a significant interaction between explanations and output accuracy concerning trusting behavior: when outputs were accurate, explanations had little effect on trusting behavior; however, when outputs were inaccurate, explanations led to stronger trusting behavior, as participants deviated less strongly from the inaccurate outputs. We conclude that users are able to deviate from inaccurate outputs, and we highlight the importance of expectation violations in this regard. However, our findings also show possible detrimental effects of explanations, as they can increase the decisional weight of inaccurate outputs instead of facilitating the detection of inaccuracies.
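For readers less familiar with the mediation terminology above, the conventional simple-mediation decomposition reads as follows (a sketch of the standard model with X = output accuracy, M = expectation violation, Y = trust; not necessarily the authors’ exact specification):

  M = i_M + a·X + e_M
  Y = i_Y + c′·X + b·M + e_Y
  c = c′ + a·b    (total effect = direct effect c′ + indirect effect a·b)

“Full mediation” (trust, trustworthiness) corresponds to a non-significant direct effect c′ alongside a significant indirect effect a·b; “partial mediation” (trusting behavior) means a significant c′ remains.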


Cite this paper

https://doi.org/10.1016/j.ijhcs.2026.103775

Or copy a formatted citation

@article{hunsicker2026,
  title        = {{Trust the Explanation or my Expectation? Effects of Output Accuracy and Explanations on Expectation Violations and Trust in AI-Supported Decisions}},
  author       = {Hunsicker, Tim and others},
  journal      = {International Journal of Human-Computer Studies},
  year         = {2026},
  doi          = {10.1016/j.ijhcs.2026.103775},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact     0.50 × 0.40 = 0.20
M · momentum            0.50 × 0.15 = 0.07
V · venue signal        0.50 × 0.05 = 0.03
R · text relevance †    0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page. For your query’s actual relevance score, open this paper from a search result.
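How the displayed weight is assembled can be reproduced in a few lines of Python (a minimal sketch assuming the weight is the simple weighted sum the breakdown above suggests; the names and function here are illustrative, not Arbiter’s actual code or API):

WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # "Balanced mode" weights

def evidence_weight(scores):
    """Weighted sum of component scores, each assumed to lie in [0, 1]."""
    return sum(scores[key] * weight for key, weight in WEIGHTS.items())

# All four components sit at 0.50 on this detail page:
print(round(evidence_weight({"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}), 2))  # 0.5

Note that the per-component contributions above are rounded for display (0.50 × 0.15 = 0.075 and 0.50 × 0.05 = 0.025); the exact contributions still sum to 0.50.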