Humans incorrectly reject confident accusatory AI judgments

Riccardo Loconte et al.

Computers in Human Behavior · 2026 · article
https://doi.org/10.1016/j.chb.2026.109019

AJG 2 · ABDC A
Weight: 0.50

Abstract

Automated verbal deception detection using methods from Artificial Intelligence (AI) has been shown to outperform humans in disentangling lies from truths. Research suggests that transparency and interpretability of computational methods tend to increase human acceptance of using AI to support decisions. However, the extent to which humans accept AI judgments for deception detection remains unclear. We experimentally examined how an AI model's accuracy (i.e., its overall performance in deception detection) and confidence (i.e., the model's uncertainty in single-statement predictions) influence human adoption of the model's judgments. Participants (n = 373) were presented with veracity judgments of an AI model with high or low overall accuracy and various degrees of prediction confidence. The results showed that humans followed predictions from a highly accurate model more than from a less accurate one. Interestingly, the more confident the model, the more people deviated from it, especially if the model predicted deception. We also found that human interaction with algorithmic predictions either worsened the machine's performance or was ineffective. While this human aversion to accepting highly confident algorithmic predictions was partly explained by participants' tendency to overestimate humans' deception detection abilities, we also discuss how truth-default theory and the social costs of accusing someone of lying help explain the findings.

Highlights

• Accuracy of AI models increases human trust in the model's predictions
• Humans reject highly confident AI predictions of deception
• Human–AI interaction either worsens the machine's performance or is ineffective


Cite this paper

https://doi.org/10.1016/j.chb.2026.109019

Or copy a formatted citation

@article{loconte2026,
  title        = {{Humans incorrectly reject confident accusatory AI judgments}},
  author       = {Loconte, Riccardo and others},
  journal      = {Computers in Human Behavior},
  year         = {2026},
  doi          = {10.1016/j.chb.2026.109019},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.20
M · momentum: 0.50 × 0.15 = 0.075
V · venue signal: 0.50 × 0.05 = 0.025
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
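The blended score above is a weighted sum of the four component scores under the "Balanced mode" mix. A minimal sketch of that arithmetic, assuming each component is already normalized to [0, 1] (the component names and values are taken from the breakdown above; the function itself is hypothetical, not Arbiter's actual implementation):

```python
# Hypothetical sketch of the "Balanced mode" evidence-weight blend.
# Mix weights from the breakdown: F 0.40 / M 0.15 / V 0.05 / R 0.40.
MIX = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(components: dict[str, float]) -> float:
    """Weighted sum of component scores (each assumed in [0, 1])."""
    return sum(components[k] * w for k, w in MIX.items())

# All four components sit at 0.50 on this detail page:
weight = evidence_weight({"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50})
# 0.20 + 0.075 + 0.025 + 0.20 = 0.50
```

Because the mix weights sum to 1.0, identical component scores pass through unchanged, which is why four 0.50 components yield an overall weight of exactly 0.50.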