Decision Threshold Setting in Binary Classification Problems—A Behavioral Lens
Patrick Moder et al.
Abstract
When binary classification models are wrong, managers face misclassification costs: false positive outcomes imply unnecessary mitigation efforts, whereas false negative outcomes imply overlooking the class of interest. Humans calibrate the AI models supporting operational systems by adjusting the decision threshold that translates a prediction probability into either class. Results of our controlled laboratory experiment show that, despite all relevant information being available, decision makers systematically deviate from the optimal cost‐efficient threshold. We observe a significant interaction effect of class and cost imbalance on this deviation, which increases in high‐stakes settings where more extreme thresholds are optimal. When unit costs differ, we find that participants anchor on the threshold where expected misclassification costs for false alarms and missed hits are equal, whereas mean anchoring cannot sufficiently explain the pull‐to‐center behavior. Surprisingly, we confirm that this impulse balance equilibrium also serves as an attractive anchor in our setting, where decisions are made ex ante and without loss aversion. Simulated responses show that behavior‐aware costs nudge subjects toward choices closer to the optimum, offering a way to debias decision makers. Managers should be aware of this boundedly rational behavior and of complementary debiasing techniques, as sub‐optimal threshold setting results in 53% higher misclassification costs, on average.
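To make the threshold-setting problem concrete, the following minimal sketch (our own illustration, not the paper's method; function names and cost parameters are assumed) derives the cost-optimal threshold for a calibrated classifier by comparing the expected cost of flagging an instance against the expected cost of not flagging it:

```python
def optimal_threshold(cost_fp: float, cost_fn: float) -> float:
    """Cost-optimal cutoff t* for a calibrated probability p.

    Expected cost of flagging (acting):       (1 - p) * cost_fp
    Expected cost of not flagging (waiting):  p * cost_fn
    Flagging is cheaper whenever p * cost_fn > (1 - p) * cost_fp,
    i.e. whenever p exceeds t* = cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)


# Symmetric unit costs give the familiar 0.5 cutoff.
print(optimal_threshold(1.0, 1.0))  # → 0.5

# A missed hit costing 9x a false alarm pushes the optimal
# threshold toward an extreme value (0.1) — the high-stakes
# regime where the abstract reports the largest deviations.
print(optimal_threshold(1.0, 9.0))  # → 0.1
```

Under this framing, decision makers who set the threshold above 0.1 in the asymmetric example incur avoidable expected costs from missed hits, which is the kind of pull-to-center deviation the experiment documents.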