Nudges affect the perceived trustworthiness of algorithmic recommendations in public services: an explanation via learning costs
Yuan Sun et al.
Abstract
Purpose: This study aims to identify the most effective explanatory strategies for building public trust in government-use AI-based algorithmic recommendations.

Design/methodology/approach: By comparing salient explanations and norm-based explanations across different age groups, we analyzed how these explanatory strategies reduce learning costs and enhance cognitive trust in the algorithm.

Findings: The study finds that both salient and norm-based explanations can reduce learning costs and enhance users' cognitive trust in algorithms; norm-based explanations are particularly effective for younger users. The study finds no significant interaction between the two types of explanations. Importantly, effective explanations can enhance both cognitive trust in the algorithm and affective trust in the government.

Originality/value: This research suggests that "nudges" in explanations can enhance citizens' trust in algorithmic public services, which is significant for increasing acceptance of these algorithms.