Knowing Enough to Be Dangerous: The Problem of “Artificial Certainty” for Expert Authority When Using AI for Decision Making and Planning
Leonardi & Virginia Leavell
Abstract
This study examines how experts who use advanced artificial intelligence (AI) technologies that generate highly detailed and realistic representations can create what we term “artificial certainty,” which we define as the illusion that complex future outcomes are definitively knowable, even though they are inherently uncertain. Through a comparative study of two urban planning organizations using the same AI simulation tool, we show how this artificial certainty emerges from the ways process experts create and deploy AI-generated representations. The findings reveal three interconnected representational practices that shape how laypeople perceive the certainty of a representation: controlling the level of detail, shaping stakeholder engagement, and constructing the model’s meaning. We find that when process experts emphasize enhancement—amplifying technological capabilities within these practices—stakeholders mistake representations for reality, undermining expert authority. Conversely, when process experts engage in modulation—tempering how AI outputs are presented and integrated into decision making—they preserve the authority necessary to keep uncertainty alive. These findings reconceptualize process expertise as a distinct form of interpretive work that helps maintain useful levels of uncertainty in the face of growing pressures toward artificial certainty. Based on these insights, we develop a critical distinction between representations of the future versus representations for the future, offering new ways to theorize decision making under uncertainty as organizations increasingly deploy sophisticated AI systems. Funding: This work was supported by the National Science Foundation [Grants SES-1057148 and SES-2051896].