The limits of AI for authoritarian control
Eddie Yang
Abstract
An emerging literature suggests that artificial intelligence (AI) can greatly enhance autocrats' repressive capabilities. This paper argues that while AI presents a powerful new tool for authoritarian control, its effectiveness is constrained by the very repressive institutions it is designed to serve. This constraint stems from what I term the “authoritarian data problem”: citizens' strategic behavior under repression diminishes the amount of useful information in the data for training AI. The more repression there is, the less information exists in AI's training data, and the worse the AI performs. I illustrate this argument using an AI experiment and censorship data in China. I show that AI's accuracy in censorship decreases with increasing repression, especially during times of political crisis. I further show that this problem cannot be easily fixed with more data. Ironically, international data—especially data from less repressive settings—can help improve AI's ability to censor.