Knowing Enough to Be Dangerous: The Problem of “Artificial Certainty” for Expert Authority When Using AI for Decision Making and Planning

Paul M. Leonardi & Virginia Leavell

Organization Science · 2026 · Article
https://doi.org/10.1287/orsc.2023.18224

FT50 · UTD24 · AJG 4* · ABDC A*
Weight
0.37

Abstract

This study examines how experts who use advanced artificial intelligence (AI) technologies that generate highly detailed and realistic representations can create what we term “artificial certainty,” which we define as the illusion that complex future outcomes are definitively knowable, even though they are inherently uncertain. Through a comparative study of two urban planning organizations using the same AI simulation tool, we show how this artificial certainty emerges from the ways process experts create and deploy AI-generated representations. The findings reveal three interconnected representational practices that shape how laypeople perceive the certainty of a representation: controlling the level of detail, shaping stakeholder engagement, and constructing the model’s meaning. We find that when process experts emphasize enhancement—amplifying technological capabilities within these practices—stakeholders mistake representations for reality, undermining expert authority. Conversely, when process experts engage in modulation—tempering how AI outputs are presented and integrated into decision making—they preserve the authority necessary to keep uncertainty alive. These findings reconceptualize process expertise as a distinct form of interpretive work that helps maintain useful levels of uncertainty in the face of growing pressures toward artificial certainty. Based on these insights, we develop a critical distinction between representations of the future versus representations for the future, offering new ways to theorize decision making under uncertainty as organizations increasingly deploy sophisticated AI systems. Funding: This work was supported by the National Science Foundation [Grants SES-1057148 and SES-2051896].

1 citation


Cite this paper

https://doi.org/10.1287/orsc.2023.18224

Or copy a formatted citation

@article{leonardi2026,
  title        = {{Knowing Enough to Be Dangerous: The Problem of “Artificial Certainty” for Expert Authority When Using AI for Decision Making and Planning}},
  author       = {Leonardi, Paul M. and Leavell, Virginia},
  journal      = {Organization Science},
  year         = {2026},
  doi          = {10.1287/orsc.2023.18224},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.37

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.16 × 0.40 = 0.06
M · momentum: 0.53 × 0.15 = 0.08
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is a default estimate (0.50) on this detail page; for your query's actual relevance score, open this paper from a search result.
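
For readers checking the arithmetic: the evidence weight is the weighted sum of the four component scores under the balanced-mode weights shown above. A minimal sketch in Python, using the numbers from this page (the function name and dictionaries are illustrative, not Arbiter's actual implementation):

# Component scores and balanced-mode weights as displayed above.
# Names and structure are illustrative; this is not Arbiter's code.
SCORES  = {"F": 0.16, "M": 0.53, "V": 0.50, "R": 0.50}
WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores, weights):
    """Weighted sum of the component scores."""
    return sum(scores[k] * weights[k] for k in weights)

total = evidence_weight(SCORES, WEIGHTS)  # raw sum: 0.3685
print(round(total, 2))                    # 0.37, matching the displayed weight

Summing the per-component contributions as displayed (0.06 + 0.08 + 0.03 + 0.20) gives the same 0.37 shown under Evidence weight; the page rounds each contribution to two decimals.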