Knowledge workers’ trust and reception of generative AI’s advice in complex tasks
Alireza Amrollahi et al.
Abstract
Building on prior literature suggesting that knowledge workers are generally averse to algorithmic advice, this study explores differences in the reception of and trust in generative AI (GAI) advice compared to human advice, particularly among various reception groups engaged in complex professional tasks such as software development. Studies 1 and 2 explore preferences between human and GAI advice sources and assess the impact of users’ reception of GAI. The findings reveal that programmers appreciate GAI advice more than equivalent advice from human experts. Furthermore, reception type significantly influences advice-taking behaviour: programmers with a dominant reception of GAI exhibit greater acceptance, while those with an oppositional reception show less acceptance. In Study 3, we develop a nomological model from survey data to verify the complex relationships among technological innovativeness, various forms of trust in GAI, and advice-taking behaviour, noting variations among the different reception groups. We also conduct a complementary configurational analysis to examine how users’ trust in GAI is influenced by factors outside the main domain of study, such as task complexity, perceived security risks, and past exposure to GAI. Our research challenges the widely held belief of algorithm aversion among knowledge workers and contributes to the information systems literature by highlighting the impact of critical factors such as individual reception, past exposure, and innovativeness on knowledge workers’ advice-taking from GAI. Practically, it offers insights for organisations to develop human-centric GAI implementation strategies that embrace individual differences.
Highlights
• The paper explores differences in advice-taking behaviour in complex tasks when the advice comes from a human or generative AI (GAI), and across various reception groups.
• The study challenges the prevailing notion of algorithm aversion by showing that programmers exhibit greater appreciation for GAI advice than for advice from human experts.
• The study further clarifies the mechanism through which various forms of trust in GAI can impact advice-taking behaviour.
• Using a configurational approach, the study identifies key factors, such as past exposure to GAI and perceived security risks, that significantly influence trust in GAI.