An empirical study on AI acceptance and ethical/philosophical orientations

Tsukasa Tanihara & Taichi Murayama

Journal of Information Communication and Ethics in Society · 2026 · article
https://doi.org/10.1108/jices-07-2025-0192
ABDC ranking: B
Weight: 0.50

Abstract

Purpose
The purpose of this study is to empirically identify the philosophical drivers behind public acceptance of fully automated artificial intelligence (AI) decisions. Specifically, this study tests how the strength of one’s ethical commitment to utilitarianism or deontology and the degree of mind attribution influence the acceptability of AI judgments.

Design/methodology/approach
In September 2024, an online survey of 3,241 Japanese adults was administered. Respondents rated their acceptance of AI decisions in several scenarios. Ethical orientation was measured via a trolley-problem item; mind attribution (belief that AI could possess a mind) was measured on a four-point scale. ANCOVA models treated AI acceptance as the dependent variable, ethical orientation or mind attribution as a fixed factor, and demographic variables as covariates. Post hoc Tukey’s HSD tests compared adjusted means.

Findings
The mere direction of ethical orientation did not predict acceptance; instead, respondents with either a strong utilitarian or a strong deontological orientation showed the highest acceptance. Acceptance increased monotonically with belief that AI could possess a mind; the gap between “strongly agree” and “strongly disagree” reached 0.74 points.

Research limitations/implications
This study modeled mind attribution with a single numeric indicator, even though philosophical discussions of AI’s moral agency and responsibility portray it as a much more complex construct. The study also stops short of incorporating the substantial scholarship on where responsibility for AI decisions should be located.

Originality/value
Prior studies emphasize demographic or psychosocial factors; this research introduces two philosophical variables – ethical orientation and mind attribution – and quantifies their independent effects. The results reveal that orientation strength and presumed AI mind attribution shape acceptance, offering a fresh lens on debates over AI moral agency within governance frameworks.


Cite this paper

https://doi.org/10.1108/jices-07-2025-0192


@article{tanihara2026,
  title        = {{An empirical study on AI acceptance and ethical/philosophical orientations}},
  author       = {Tanihara, Tsukasa and Murayama, Taichi},
  journal      = {Journal of Information Communication and Ethics in Society},
  year         = {2026},
  doi          = {10.1108/jices-07-2025-0192},
}




Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact     0.50 × 0.40 = 0.20
M · momentum            0.50 × 0.15 = 0.07
V · venue signal        0.50 × 0.05 = 0.03
R · text relevance †    0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
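The breakdown above is a plain weighted sum: each component score (here all 0.50) multiplied by its Balanced-mode weight, summed to the overall evidence weight. A minimal sketch of that arithmetic; the function name and dictionary layout are illustrative assumptions, not Arbiter’s actual implementation:

```python
# Hypothetical reconstruction of the evidence-weight arithmetic shown above.
# Components mirror the "Balanced mode" breakdown: F (citation impact),
# M (momentum), V (venue signal), R (text relevance).

BALANCED_MODE_WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores: dict[str, float],
                    weights: dict[str, float] = BALANCED_MODE_WEIGHTS) -> float:
    """Weighted sum of per-component scores, each assumed to lie in [0, 1]."""
    return sum(scores[k] * weights[k] for k in weights)

# For this paper, every component is estimated at 0.50 on the detail page:
paper_scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
print(round(evidence_weight(paper_scores), 2))  # 0.5
```

Because the four weights sum to 1.0, uniform component scores of 0.50 reproduce the displayed overall weight of 0.50; the per-row products (0.20, 0.07, 0.03, 0.20) reflect the page’s rounding.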