Wide reflective equilibrium in LLM alignment: bridging moral epistemology and AI safety

Matthew E. Brophy

Ethics and Information Technology · 2026 · article
https://doi.org/10.1007/s10676-026-09897-y
AJG 1 · ABDC B
Weight
0.50

Abstract

As large language models (LLMs) become more powerful and pervasive across society, ensuring these systems are beneficial, safe, and aligned with human values is crucial. Current alignment techniques, like Constitutional AI (CAI), involve complex iterative processes. This paper argues that the Methodology of Wide Reflective Equilibrium (MWRE) – a well-established coherentist moral methodology – offers a uniquely apt framework for understanding current LLM alignment efforts. In addition, this methodology can substantively augment these processes by offering pathways for improving their dynamic revisability, procedural legitimacy, and overall ethical grounding. Together, these enhancements can help produce more robust and ethically defensible outcomes. MWRE, emphasizing the achievement of coherence between our considered moral judgments, guiding moral principles, and relevant background theories, arguably better represents the intricate reality of LLM alignment and offers a more robust path to justification than prevailing foundationalist models or simplistic input-output evaluations. While current methods like CAI bear a structural resemblance to MWRE, they often lack its crucial emphasis on dynamic, bi-directional revision of principles and the procedural legitimacy derived from such a process. While acknowledging various disanalogies (e.g., consciousness, genuine understanding in LLMs), the paper demonstrates that MWRE serves as a valuable heuristic for critically analyzing current alignment efforts and for guiding the future development of more ethically sound and justifiably aligned AI systems.


Cite this paper

https://doi.org/10.1007/s10676-026-09897-y

Or copy a formatted citation

@article{brophy2026,
  title        = {{Wide reflective equilibrium in LLM alignment: bridging moral epistemology and AI safety}},
  author       = {Brophy, Matthew E.},
  journal      = {Ethics and Information Technology},
  year         = {2026},
  doi          = {10.1007/s10676-026-09897-y},
}

Paste directly into BibTeX, Zotero, or your reference manager.


Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact   0.50 × 0.40 = 0.20
M · momentum          0.50 × 0.15 = 0.07
V · venue signal      0.50 × 0.05 = 0.03
R · text relevance †  0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
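The breakdown above can be read as a simple weighted sum. The following is a minimal sketch, assuming the overall evidence weight is the dot product of the four Balanced-mode weights with the per-component scores (all shown here at the detail page's placeholder value of 0.50); the function and variable names are illustrative, not part of any Arbiter API.

```python
# Balanced-mode component weights (F + M + V + R = 1.00), as shown on the page.
WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}

def evidence_weight(scores: dict[str, float]) -> float:
    """Combine per-component scores into one overall weight
    via the Balanced-mode weighted sum."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Detail-page placeholder: every component scored at 0.50.
scores = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
print(round(evidence_weight(scores), 2))  # 0.5
```

Because the weights sum to 1.00, uniform component scores of 0.50 reproduce the displayed overall weight of 0.50 exactly; the per-row contributions (0.20, 0.07, 0.03, 0.20) are the individual products after rounding.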