A multimodal graph-based music auto-tagging framework: integrating social and content intelligence

Yang Huang et al.

Information Technology and Management · 2026 · https://doi.org/10.1007/s10799-026-00496-3 · article
AJG 1 · ABDC B
Weight
0.50

Abstract

With the rapid growth of music streaming platforms, effective music auto-tagging has become crucial for Music Information Retrieval (MIR) and recommendation. However, existing approaches face significant limitations: single-modality methods, which use only audio or text features, fail to capture the rich semantic diversity of music tags, while current multimodal approaches overlook critical interactive relationships beyond music content. Moreover, most studies ignore the co-occurrence dependencies among tags, which are essential for multi-label prediction. To address these challenges, we propose MuCoGraph, a novel multimodal graph-based hybrid learning framework for music auto-tagging. Our approach integrates multiple data modalities—lyrics, user comments, and audio spectrograms—with three heterogeneous graph neural networks: preference graphs that capture artist-listener interactions, group graphs that model content similarities, and tag co-occurrence graphs that learn label dependencies. The framework employs hierarchical co-attention mechanisms that enable cross-modal feature enhancement, allowing graph-based features to strengthen textual and audio representations through mutual learning. Experiments were conducted on a real-world dataset, which we integrated from multiple online platforms, demonstrating that MuCoGraph outperforms all the compared baseline methods in music auto-tagging. Notably, MuCoGraph achieves the most substantial improvements in top-12 recommendations, with relative gains of 12% in F1-score, 11.6% in NDCG, and 11.8% in MAP compared to the best baseline. This demonstrates progressively greater performance advantages as the recommendation scope increases, highlighting its enhanced ability to maintain quality across extended tag lists. These performance gains provide practical benefits for music platforms, including enhanced user engagement and more efficient content management processes. Furthermore, ablation studies demonstrate the critical contribution of each model component, particularly showing that graph-based features effectively improve both textual and audio representations through cross-modal enhancement.
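To make the abstract's fusion step more concrete, the sketch below shows one way the described hierarchical co-attention could work: pooled features from the preference, group, and tag co-occurrence graphs act as queries that enhance the lyric/comment and spectrogram representations before multi-label tag prediction. This is not the authors' code; the module names, dimensions, single-query pooling, and the sigmoid tag head are illustrative assumptions written in PyTorch.

import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    # Toy cross-modal co-attention: graph features query text and audio embeddings.
    def __init__(self, dim=256, heads=4, num_tags=50):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(3 * dim, num_tags)  # multi-label tag head

    def forward(self, graph_feat, text_feat, audio_feat):
        # graph_feat: (B, 1, dim)  pooled embeddings from the heterogeneous graphs
        # text_feat:  (B, T, dim)  lyric and user-comment token embeddings
        # audio_feat: (B, S, dim)  audio spectrogram frame embeddings
        text_enh, _ = self.text_attn(graph_feat, text_feat, text_feat)      # graph -> text enhancement
        audio_enh, _ = self.audio_attn(graph_feat, audio_feat, audio_feat)  # graph -> audio enhancement
        fused = torch.cat([graph_feat, text_enh, audio_enh], dim=-1).squeeze(1)
        return torch.sigmoid(self.classifier(fused))  # per-tag probabilities for multi-label output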


Cite this paper

https://doi.org/10.1007/s10799-026-00496-3

Or copy a formatted citation

@article{yang2026,
  title        = {{A multimodal graph-based music auto-tagging framework: integrating social and content intelligence}},
  author       = {Yang Huang and others},
  journal      = {Information Technology and Management},
  year         = {2026},
  doi          = {10.1007/s10799-026-00496-3},
}

Paste directly into BibTeX, Zotero, or your reference manager.


Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact      0.50 × 0.40 = 0.20
M · momentum             0.50 × 0.15 = 0.07
V · venue signal         0.50 × 0.05 = 0.03
R · text relevance †     0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
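For reference, the displayed weight is the Balanced-mode mix applied to the four component scores of 0.50 each, as listed above. The short recomputation below is only illustrative and is not part of the Arbiter tool; the values are taken from this page.

components = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}
mix = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}
weight = sum(components[k] * mix[k] for k in mix)  # 0.20 + 0.075 + 0.025 + 0.20
print(round(weight, 2))  # 0.50, matching the evidence weight shown above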