Rethinking Artificial Intelligence and Ethics: Uncovering Unexamined Assumptions and Assessing Their Implications

Chenxi Ma et al.

Information Systems Journal · 2026 · article · https://doi.org/10.1111/isj.70042

AJG 4 · ABDC A*
Weight
0.50

Abstract

Although the rapid advancement of artificial intelligence (AI) has introduced significant ethical challenges, many studies have assumed a consensus on moral actions and outcomes, often overlooking the normative uncertainty inherent in AI development and use. Because the debate over which actions are morally appropriate in AI development and use, and in human–AI interaction, has intensified with AI's increasing autonomy, inscrutability, and learning capacity, this normative uncertainty warrants careful examination. Existing literature reviews have tended to reinforce the assumption of low normative uncertainty by emphasizing the similarity of the studies they examine. In this study, we conduct a problematizing review of empirical research on the ethics of AI published from 2010 to 2025. Drawing on two metaethical debates—the debate over the development of moral judgements (rationalist vs. non‐rationalist views) and the debate over their dynamics (absolutist vs. contextualist views)—we analyse how current research constructs moral judgements when studying AI and its interactions with humans. We identify two dominant field‐level ethical assumptions: that moral judgements about AI rely primarily on rational deliberation and that such judgements remain static across contexts. These assumptions are shared across three research domains: ethical AI development and governance, ethical evaluation of AI, and joint human–AI agency. Moreover, the assumptions shape the framing of research questions, the selection of research methods, and the conceptualization and measurement of constructs and relationships. By making these assumptions explicit, our study creates opportunities for theoretical inquiry into normative uncertainty. We propose five research approaches for examining rational, nonrational and context‐sensitive moral processes, offering scholars in the Information Systems discipline new pathways for theorizing ethical AI.


Cite this paper

https://doi.org/10.1111/isj.70042

Or copy a formatted citation

@article{chenxi2026,
  title        = {{Rethinking Artificial Intelligence and Ethics: Uncovering Unexamined Assumptions and Assessing Their Implications}},
  author       = {Chenxi Ma et al.},
  journal      = {Information Systems Journal},
  year         = {2026},
  doi          = {10.1111/isj.70042},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.20
M · momentum: 0.50 × 0.15 = 0.07
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
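The breakdown above suggests the evidence weight is a linear combination of the four component scores under the Balanced-mode weights (F 0.40, M 0.15, V 0.05, R 0.40). A minimal sketch of that computation, assuming a simple weighted sum (the exact combination rule is not documented on this page):

```python
# Hypothetical reconstruction of the evidence-weight formula shown above.
# Each component score is 0.50 on this detail page; weights are Balanced mode.
scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # sums to 1.00

# Weighted sum of component scores gives the overall evidence weight.
weight = sum(scores[k] * weights[k] for k in scores)
print(round(weight, 2))  # → 0.5
```

Note that the per-component contributions shown on the page (0.20, 0.07, 0.03, 0.20) appear to be truncated to two decimals (e.g. 0.50 × 0.15 = 0.075); they still sum to the displayed total of 0.50.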