An empirical study on AI acceptance and ethical/philosophical orientations
Tsukasa Tanihara & Taichi Murayama
Abstract
Purpose The purpose of this study is to empirically identify the philosophical drivers behind public acceptance of fully automated artificial intelligence (AI) decisions. Specifically, it tests how the strength of one's ethical commitment to utilitarianism or deontology and the degree of mind attribution to AI influence the acceptability of AI judgments.
Design/methodology/approach In September 2024, an online survey of 3,241 Japanese adults was administered. Respondents rated their acceptance of AI decisions in several scenarios. Ethical orientation was measured via a trolley-problem item, and mind attribution to AI on a four-point scale. ANCOVA models treated AI acceptance as the dependent variable, ethical orientation or mind attribution as fixed factors, and demographic variables as covariates. Post hoc Tukey's HSD tests compared adjusted means.
Findings The mere direction of ethical orientation did not predict acceptance; instead, respondents with either strong utilitarian or strong deontological orientations showed the highest acceptance. Acceptance increased monotonically with belief that AI could possess a mind; the gap between "strongly agree" and "strongly disagree" reached 0.74 points.
Research limitations/implications This study modeled mind attribution with a single numeric indicator, even though philosophical discussions of AI's moral agency and responsibility portray it as a much more complex construct. The study also stops short of incorporating the substantial scholarship on where responsibility for AI decisions should be located.
Originality/value Prior studies emphasize demographic or psychosocial factors; this research introduces two philosophical variables – ethical orientation and mind attribution – and quantifies their independent effects. The results reveal that "orientation strength" and presumed mind attribution to AI shape acceptance, offering a fresh lens on debates over AI moral agency within governance frameworks.
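The analysis pipeline described in the abstract (ANCOVA with a fixed factor plus demographic covariates, followed by Tukey's HSD on the factor levels) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual code or dataset; the variable names (`mind_attribution`, `acceptance`, `age`, `gender`) and the simulated effect size are assumptions.

```python
# Hedged sketch of an ANCOVA + Tukey HSD workflow like the one described
# in the abstract, run on synthetic data. Names and effect sizes are
# illustrative assumptions, not the study's actual variables or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    # Four-point mind-attribution scale, treated as a fixed factor
    "mind_attribution": rng.integers(1, 5, n),
    # Demographic covariates (placeholders)
    "age": rng.integers(18, 80, n),
    "gender": rng.integers(0, 2, n),
})
# Simulate acceptance rising with mind attribution, plus noise
df["acceptance"] = 2.0 + 0.25 * df["mind_attribution"] + rng.normal(0, 0.5, n)

# ANCOVA: acceptance ~ fixed factor + demographic covariates
model = smf.ols("acceptance ~ C(mind_attribution) + age + gender", data=df).fit()
table = anova_lm(model, typ=2)  # Type II sums of squares

# Post hoc Tukey HSD comparing the four mind-attribution levels
tukey = pairwise_tukeyhsd(df["acceptance"], df["mind_attribution"])
print(table)
print(tukey.summary())
```

With four factor levels, `pairwise_tukeyhsd` produces six pairwise comparisons of adjusted group means, mirroring the post hoc contrasts the study reports.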