Like Human, Like Algorithm: Responses to Algorithmic Discrimination Among Individuals From Protected Classes
Gülen Sarial‐Abi & Verdiana Giannetti
Abstract
Algorithms, commonly used in business practice, often discriminate against members of protected classes (e.g., racial minorities). Some previous findings suggest that individuals, including members of protected classes, may under certain circumstances not respond negatively to discriminatory algorithms; other evidence suggests the opposite. Given this conflicting evidence, there is an opportunity to understand how and when protected class members respond to businesses whose algorithms make predictions or decisions that result in discrimination. Drawing on an empirical package comprising one secondary-data study and four experiments, this research demonstrates that when algorithms are perceived to engage in human‐like social categorization, they elicit more negative responses from members of protected classes. The effect holds across several algorithm features, including nonrepresentative training data, proxy classification rules, and non‐statistical classification rules. The findings extend the literature on algorithmic discrimination and business ethics and offer suggestions for mitigating algorithmic discrimination and improving societal well‐being.