Actual Self-disclosure to Anthropomorphic AI Chatbots: A Contextual Privacy Calculus Approach
M Y Zhang & Hou Zhu
Abstract
Designing AI chatbots with human-like features is a key way to promote user engagement, including self-disclosure. Prior research has shown that anthropomorphism can foster self-disclosure intentions by enhancing trust and reducing privacy concerns, a mental process captured by the privacy calculus lens. Building on this work, we put forth a contextual privacy calculus approach to actual disclosure behavior. We identify two salient context factors in human-chatbot interactions, psychological social distance and information sensitivity, and theorize their distinct roles in shaping the privacy calculus. In an online experiment with 222 participants, we manipulated the design to induce anthropomorphism and observed participants’ actual disclosure behavior. An ANOVA test together with Hayes’s PROCESS macro analysis showed that: 1) anthropomorphism can reduce psychological social distance but may trigger the “uncanny valley” effect, 2) privacy concerns can reduce actual disclosure, but this tendency weakens under high-sensitivity conditions, and 3) trust in AI chatbots does not necessarily lead to actual disclosure. These findings highlight the need for careful anthropomorphic design that avoids its downsides. We also show that actual sharing behavior follows different mechanisms than sharing intentions. We encourage future research to explore the interplay between anthropomorphic design, context factors, and actual behavior in human-chatbot interactions.