Folksonomies in Crowdsourcing: A Cross-Project Comparison

Alexander O. Smith et al.

Computer Supported Cooperative Work · 2026 · article
https://doi.org/10.1007/s10606-026-09537-5
AJG 2 · ABDC B

Abstract

Members of cooperative groups can work together more effectively if they develop a shared classification schema, but distributed groups face barriers to doing so. To better understand how classifications and classification practices can emerge and support the work of distributed groups, we review the literature on folksonomies (a kind of shared classification schema) in crowdsourcing projects (one type of distributed work). The review yields three potentially productive tensions associated with the development of folksonomies in crowdsourcing projects. First, projects must establish who has the authority to decide on adopted terminology and with what consequences. Second, there can be tension if people who tag objects have different interests in tagging than those who use the tags to search for content. Finally, projects must decide when to intervene to maintain a balance between a stable vocabulary and the ability of the project to accommodate ongoing changes. We illustrate these tensions by comparing how they are handled in the photo-sharing site Flickr, the story-sharing site Archive of Our Own (AO3), the internet culture classification site Know Your Meme, and the citizen science project Gravity Spy. The comparison suggests guidelines for project managers regarding the identified tensions.


Cite this paper

https://doi.org/10.1007/s10606-026-09537-5

Or copy a formatted citation

@article{smith2026,
  title        = {{Folksonomies in Crowdsourcing: A Cross-Project Comparison}},
  author       = {Smith, Alexander O. and others},
  journal      = {Computer Supported Cooperative Work},
  year         = {2026},
  doi          = {10.1007/s10606-026-09537-5},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact     0.50 × 0.40 = 0.20
M · momentum            0.50 × 0.15 = 0.07
V · venue signal        0.50 × 0.05 = 0.03
R · text relevance †    0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
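
A minimal sketch of how the components above could combine into the overall evidence weight. It assumes the weight is a simple weighted sum of the four component scores (each 0.50 for this paper) under the Balanced-mode weights; the function name evidence_weight and this exact formula are illustrative assumptions, not Arbiter's published method.

# Sketch only: assumes evidence weight = sum of (mode weight x component score).
# The real Arbiter aggregation may round or normalize differently.

COMPONENT_WEIGHTS = {  # Balanced mode
    "F": 0.40,  # citation impact
    "M": 0.15,  # momentum
    "V": 0.05,  # venue signal
    "R": 0.40,  # text relevance (placeholder 0.50 on the detail page)
}

def evidence_weight(scores: dict[str, float]) -> float:
    """Combine per-component scores into a single evidence weight."""
    return sum(COMPONENT_WEIGHTS[k] * scores[k] for k in COMPONENT_WEIGHTS)

# All four component scores are 0.50 here, so:
# 0.40*0.50 + 0.15*0.50 + 0.05*0.50 + 0.40*0.50 = 0.50
print(evidence_weight({"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}))  # 0.5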