How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts

Shivaang Sharma & Angela Aristidou

MIS Quarterly Executive · 2025 · article
https://doi.org/10.17705/2msqe.00114
AJG 2 · ABDC A

Weight: 0.50

Abstract

Operationalizing the responsible use of AI in data-sensitive, multi-stakeholder contexts is challenging. We studied how six AI tools were operationalized in a humanitarian crisis context, which involved aid agency decision makers, private technology firms, and vulnerable populations. From the insights gained, we identify five types of “AI responsibility rifts” (AIRRs: differences in stakeholders’ subjective expectations, values, and perceived impacts when operationalizing an AI tool in data-sensitive contexts). We propose the self-assessment SHARE framework to mitigate these rifts and provide recommendations for closing the identified gaps.


Cite this paper

https://doi.org/10.17705/2msqe.00114

Or copy a formatted citation

@article{sharma2025,
  title        = {{How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts}},
  author       = {Sharma, Shivaang and Aristidou, Angela},
  journal      = {MIS Quarterly Executive},
  year         = {2025},
  doi          = {10.17705/2msqe.00114},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight: 0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.20
M · momentum: 0.50 × 0.15 = 0.075
V · venue signal: 0.50 × 0.05 = 0.025
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
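The blended evidence weight above is a weighted sum of four component scores, each in [0, 1], using the Balanced-mode weights (F 0.40 / M 0.15 / V 0.05 / R 0.40). A minimal Python sketch of that arithmetic follows; the function and field names are illustrative assumptions, not the tool's actual API:

```python
# Sketch of the "Balanced mode" evidence weight: a weighted sum of
# four component scores. Weights come from the breakdown above;
# names and structure are assumptions for illustration only.

BALANCED_WEIGHTS = {
    "citation_impact": 0.40,  # F
    "momentum": 0.15,         # M
    "venue_signal": 0.05,     # V
    "text_relevance": 0.40,   # R
}

def evidence_weight(scores: dict, weights: dict = BALANCED_WEIGHTS) -> float:
    """Blend per-component scores (each in [0, 1]) into one evidence weight."""
    return round(sum(weights[k] * scores[k] for k in weights), 2)

# With every component at 0.50, as on this page:
# 0.50 * (0.40 + 0.15 + 0.05 + 0.40) = 0.50
print(evidence_weight({k: 0.50 for k in BALANCED_WEIGHTS}))  # → 0.5
```

Because the four weights sum to 1.0, a paper scoring 0.50 on every component gets a blended weight of exactly 0.50, matching the figure shown above.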