FAIR: A Design Theory for Artificial Intelligence Fairness

Arun Rai et al.

MIS Quarterly · 2026 · article · https://doi.org/10.25300/misq/2026/17971
FT50 · UTD24 · AJG 4* · ABDC A*

Abstract

Artificial intelligence (AI)-automated decision systems encounter persistent, interdependent, and dynamic fairness tensions that traditional one-off interventions cannot resolve. Because these tensions persist due to interdependence and dynamic interaction, organizations require both a theory of the problem to explain their persistence and a theory of the solution to prescribe how they can be managed. Our design theory, FAIR (Fairness Adaptation through AI-augmented Responsiveness), provides a theory of the problem by reframing AI fairness as a sociotechnical paradox constituted within AI artifacts that automate decision tasks, through interdependent organizational, technical, and governance choices and their interaction with regulatory mandates and societal norms. Synthesizing four fairness perspectives (Ethics, Organizational Justice, Economic Fairness, and Rawlsian Justice), we identify three metatheoretical dimensions (principles, goals, foci) and show that the interdependence within and among these dimensions is the root, endogenous source that constitutes paradoxical fairness tensions. Building on this diagnosis, FAIR provides a theory of the solution by specifying an organizational capability grounded in three design foundations. First, the paradox lens motivates iterative adaptive cycles (Surfacing and Resolving) to continually surface and resolve AI fairness tensions. Second, design science in information systems and computer science distinguishes AI artifacts (the “what”) from the actors (the “who”) responsible for adapting them, establishing the basis for complementary human–AI agent collaboration in the adaptive cycles: AI agents execute monitoring to surface and refinement to resolve tensions, whereas human agents specify objectives, adjudicate trade-offs, and exercise contextual judgment and oversight. Third, the managing-with-AI literature informs how this human–AI agent collaboration should be governed. 
These foundations yield two reinforcing mechanisms: (i) artifact-level adaptation, achieved through structured human–AI agent collaboration, within and across the layers of the AI decision pipeline—Representation (data), Learning (model), and Calibration (decision); and (ii) portfolio-level, risk-tiered federated governance that structures how human–AI agent collaboration scales across tasks and artifacts, balancing process standardization with configuration choices and human control with AI autonomy based on task risk. Enabled by organizational “fairness complements”—namely, human skills to work with AI agents and structured stakeholder feedback—this sociotechnical design provides organizations with a sustained capability to harmonize global coherence and local flexibility in the responsive adaptation of AI fairness.


Cite this paper

https://doi.org/10.25300/misq/2026/17971

Or copy a formatted citation

@article{rai2026fair,
  title        = {{FAIR: A Design Theory for Artificial Intelligence Fairness}},
  author       = {Rai, Arun and others},
  journal      = {MIS Quarterly},
  year         = {2026},
  doi          = {10.25300/misq/2026/17971},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact    0.50 × 0.40 = 0.20
M · momentum           0.50 × 0.15 = 0.07
V · venue signal       0.50 × 0.05 = 0.03
R · text relevance †   0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
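The evidence weight above appears to be a simple weighted sum of the four signal scores under the "Balanced mode" weights. A minimal sketch of that composition, assuming per-signal scores of 0.50 (as shown on the detail page) and linear weighting with no further normalization:

```python
# Balanced-mode weights, taken from the breakdown line above.
WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

# Per-signal scores as displayed on the detail page (all 0.50 here;
# R is a placeholder until computed against an actual query).
scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}

# Each signal's contribution is score x weight; the evidence weight
# is their sum.
contributions = {k: scores[k] * WEIGHTS[k] for k in WEIGHTS}
total = sum(contributions.values())

for k in WEIGHTS:
    print(f"{k}: {scores[k]:.2f} x {WEIGHTS[k]:.2f} = {contributions[k]:.2f}")
print(f"Evidence weight: {total:.2f}")
```

With uniform 0.50 scores the total comes out to exactly half the weight mass, i.e. 0.50, matching the displayed evidence weight; the small printed contributions (e.g. M and V) differ from their exact values only by two-decimal display rounding.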