Product Safety in the Age of AI: Autonomy, R&D, and Liability
Y Chen & Xinyu Hua
Abstract
We study optimal liability for AI-powered products. Like human users, artificial intelligence (AI) can cause product failures that harm third parties. Additionally, it may introduce extreme risks of large-scale harm that render full liability impractical. Raising AI liability for ordinary loss above actual harm can reduce excessive autonomy and increase social welfare, even when it dampens R&D effort. A well-designed liability rule implements efficient levels of autonomy and of balanced R&D that reduces AI's general risk. However, under targeted R&D aimed at reducing AI's extreme risk, full efficiency cannot be achieved through liability alone, and regulations that cap such risk can perform better.