Like Human, Like Algorithm: Responses to Algorithmic Discrimination Among Individuals From Protected Classes

Gülen Sarial‐Abi & Verdiana Giannetti

British Journal of Management · 2026 · Article · https://doi.org/10.1111/1467-8551.70045
AJG 4 · ABDC A*

Weight: 0.50

Abstract

Algorithms, commonly used in business practice, often discriminate against members of protected classes (e.g. racial minorities). Previous research findings suggest that individuals, including those from protected classes, under some circumstances, may not respond negatively to discriminatory algorithms. Other evidence suggests the opposite. Given the conflicting evidence, there is an opportunity to understand how and when protected class members respond to businesses that employ algorithms when these algorithms make predictions or decisions resulting in discrimination. Drawing on an empirical package comprising one secondary data study and four experiments, our research demonstrates that when algorithms are perceived to engage in human‐like social categorization, they elicit more negative responses from members of protected classes. This effect is observed across various algorithm features, including nonrepresentative training data, proxy classification rules and non‐statistical classification rules. The research's findings extend the literature on algorithmic discrimination and business ethics, providing suggestions to mitigate algorithmic discrimination and improve societal well‐being.


Cite this paper

https://doi.org/10.1111/1467-8551.70045

Or copy a formatted citation

@article{sarialabi2026,
  title        = {{Like Human, Like Algorithm: Responses to Algorithmic Discrimination Among Individuals From Protected Classes}},
  author       = {Sarial‐Abi, Gülen and Giannetti, Verdiana},
  journal      = {British Journal of Management},
  year         = {2026},
  doi          = {10.1111/1467-8551.70045},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.50 × 0.40 = 0.200
M · momentum: 0.50 × 0.15 = 0.075
V · venue signal: 0.50 × 0.05 = 0.025
R · text relevance †: 0.50 × 0.40 = 0.200

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
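The evidence weight above is a weighted sum of the four component scores under the Balanced-mode weights. A minimal sketch of that arithmetic, assuming the page's layout reflects a simple weighted sum (the function name and dict shapes are illustrative, not a documented Arbiter API):

```python
# Hypothetical reconstruction of the evidence-weight calculation.
# Component names (F, M, V, R) and the weighted-sum formula are
# assumptions inferred from the detail-page breakdown above.

def evidence_weight(scores: dict, weights: dict) -> float:
    """Weighted sum of per-component scores."""
    assert set(scores) == set(weights), "components must match"
    return sum(scores[k] * weights[k] for k in scores)

balanced = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # weights sum to 1.0
components = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}

# 0.200 + 0.075 + 0.025 + 0.200 = 0.50
print(round(evidence_weight(components, balanced), 2))
```

Because every component score here is 0.50 and the weights sum to 1.0, the result collapses to 0.50, matching the displayed weight.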