Folksonomies in Crowdsourcing: A Cross-Project Comparison
Alexander O. Smith et al.
Abstract
Members of cooperative groups can work together more effectively if they develop a shared classification schema, but distributed groups face barriers to doing so. To better understand how classifications and classification practices can emerge and support the work of distributed groups, we review the literature on folksonomies (a kind of shared classification schema) in crowdsourcing projects (one type of distributed work). The review yields three potentially productive tensions associated with the development of folksonomies in crowdsourcing projects. First, projects must establish who has the authority to decide on adopted terminology and with what consequences. Second, there can be tension if people who tag objects have different interests in tagging than those who use the tags to search for content. Finally, projects must decide when to intervene to maintain a balance between a stable vocabulary and the ability of the project to accommodate ongoing changes. We illustrate these tensions by comparing how they are handled in the photo-sharing site Flickr, the story-sharing site Archive of Our Own (AO3), the internet culture classification site Know Your Meme, and the citizen science project Gravity Spy. The comparison suggests guidelines for project managers regarding the identified tensions.