Rethinking Artificial Intelligence and Ethics: Uncovering Unexamined Assumptions and Assessing Their Implications
Chenxi Ma et al.
Abstract
Although the rapid advancement of artificial intelligence (AI) has introduced significant ethical challenges, many studies have assumed a consensus on moral actions and outcomes, often overlooking the normative uncertainty inherent in AI development and use. Because the debate over which actions are morally appropriate in AI development and use, and in human–AI interaction, has intensified with AI's increasing autonomy, inscrutability, and learning capacity, this normative uncertainty warrants careful examination. Existing literature reviews have tended to reinforce the assumption of low normative uncertainty by emphasizing the similarity of the studies they examine. In this study, we conduct a problematizing review of empirical research on the ethics of AI published from 2010 to 2025. Drawing on two metaethical debates—the debate over the development of moral judgements (rationalist vs. non-rationalist views) and the debate over their dynamics (absolutist vs. contextualist views)—we analyse how current research constructs moral judgements when studying AI and its interactions with humans. We identify two dominant field-level ethical assumptions: that moral judgements about AI rely primarily on rational deliberation and that such judgements remain static across contexts. These assumptions are shared across three research domains: ethical AI development and governance, ethical evaluation of AI, and joint human–AI agency. Moreover, the assumptions shape the framing of research questions, the selection of research methods, and the conceptualization and measurement of constructs and relationships. By making these assumptions explicit, our study creates opportunities for theoretical inquiry into normative uncertainty. We propose five research approaches for examining rational, non-rational, and context-sensitive moral processes, offering scholars in the Information Systems discipline new pathways for theorizing ethical AI.