Multimodal user-generated content analysis for digital cultural tourism: An explainable machine learning approach to tourist satisfaction
Hongyu Zhang et al.
Abstract
Digital cultural tourism is rapidly expanding, yet research on tourist satisfaction still relies predominantly on questionnaire surveys or unimodal textual analysis, failing to capture the comprehensive, multidimensional evidence embedded in multimodal user-generated content (UGC). To leverage multimodal data effectively, this study proposes a multimodal analytical framework for tourist satisfaction, applied to four digital cultural tourism projects in Changsha. By integrating topic modeling, sentiment analysis, image recognition, and interpretable machine learning, the analysis reveals the complex, asymmetric effects of textual drivers and the distinct impacts of visual features on satisfaction. The study deepens understanding of the mechanisms of tourist satisfaction in digital tourism contexts and offers data-driven insights for destination managers.
Highlights
• Reveals drivers of satisfaction in digital cultural tourism using multimodal UGC.
• Multimodal features show complex, asymmetric effects on tourist satisfaction.
• The quality of the core experience affects satisfaction more than content richness.
• Deepens understanding of tourist satisfaction's internal logic in digital cultural tourism.
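The pipeline described in the abstract — deriving text features (topics, sentiment) and image features from UGC, then fitting an interpretable model of satisfaction — can be illustrated with a minimal sketch. All feature names, data, and weights below are invented for illustration; the paper's actual topics, image labels, and model are not reproduced here. Gradient boosting with permutation importance stands in for the unspecified interpretable machine learning technique.

```python
# Hypothetical sketch of a multimodal satisfaction analysis.
# Feature names, data, and effect sizes are assumptions, not the paper's results.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Text-derived features: topic proportions and a review sentiment score.
topic_core_experience = rng.uniform(0, 1, n)   # e.g. quality of the main show
topic_content_richness = rng.uniform(0, 1, n)  # e.g. variety of exhibits
sentiment = rng.uniform(-1, 1, n)

# Image-derived feature: share of a visitor's photos tagged as night scenes.
visual_nightscene = rng.uniform(0, 1, n)

# Synthetic satisfaction score: core-experience quality weighted more heavily
# than content richness, mirroring the abstract's headline finding (illustrative).
satisfaction = (
    2.0 * topic_core_experience
    + 0.5 * topic_content_richness
    + 1.0 * sentiment
    + 0.8 * visual_nightscene
    + rng.normal(0, 0.3, n)
)

X = np.column_stack([topic_core_experience, topic_content_richness,
                     sentiment, visual_nightscene])
names = ["core_experience", "content_richness", "sentiment", "visual_nightscene"]

model = GradientBoostingRegressor(random_state=0).fit(X, satisfaction)
imp = permutation_importance(model, X, satisfaction, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18}: {score:.3f}")
```

On this synthetic data the core-experience topic dominates the importance ranking; in a real study, partial-dependence or SHAP-style plots on the same fitted model would be the natural way to probe the asymmetric (non-linear) effects the abstract mentions.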