How does existentially dangerous technology become adopted and then locked in? The case of the atomic bomb offers a cautionary tale. In the long run, reliance on nuclear weapons is a recipe for catastrophe, yet their perceived ability to reduce the frequency of war in the short term inhibits efforts to reform the international status quo. Drawing on the pioneering work of David Collingridge and Nathan Sears, this paper argues that nuclear deterrence became locked in for several reasons: initial disagreement about the threat it posed, the threat’s declining salience as time wore on, and serial procrastination in addressing it. Unfortunately, the same dynamic is likely with any technology that involves low-frequency, high-impact risks, including solar geoengineering and possibly artificial intelligence. At worst, such lock-in can convert catastrophic risks into existential ones while rendering them politically intractable.