Guardrails for Human-AI Ecologies: Norm-Based Coordination and Design for Predictability
Thomas Grisold et al.
Abstract
Human-AI ecologies involve human and AI-based agents that coordinate their interactions in part by following social norms. Social norms are therefore important for establishing guardrails that ensure desirable interactions consistent with essential values, such as human safety. Managing human-AI ecologies requires specifying norms to enable coordination in known situations, while also allowing norms to emerge so that coordination remains possible in unspecified, unstructured situations. We integrate predictive processing theory and social norm theory to explain how existing norms are enacted and reinforced through agents’ predictive models, and how new norms emerge as agents update their predictive models in response to prediction errors in uncertain coordination scenarios. Rooted in this perspective, we develop a design theory that emphasizes design for predictability and propose a set of design principles that help managers and developers encode norms, allow them to evolve in human-AI ecologies, monitor outcomes, and intervene when necessary.