Algorithmic security vision: Diagrams of computer vision politics
Ruben van de Ven et al.
Abstract
More than ever before, security systems are using machine learning algorithms to process images and video feeds, in applications as diverse as facial recognition at the border, movement recognition in urban security settings, and emotion recognition in judicial proceedings. What is at stake in the technical and political transformations brought about by these sociotechnical developments? This article charts the development of a novel set of practices which we term ‘algorithmic security vision’, using diagramming-interviews as an exploratory method. Based on encounters with activists, computer scientists and security professionals, it identifies five interrelated shifts in security politics: the transition from a ‘photographic’ to a ‘cinematic vision’ in security; the emergence of synthetic data; the prominence of error – not as a defect, but as a central characteristic of algorithmic systems; the displacement of responsibility through reconfigurations of the human-in-the-loop; and finally, the fragmentation of accountability through the use of institutionalised benchmarks. None of these shifts can be easily disentangled from the others; the study of algorithmic security vision thus unveils a rhizome of interrelated processes. As a diagram of research, algorithmic security vision invites security studies to go beyond a singular understanding of algorithmic politics and to think instead in terms of trajectories and pathways through situated algorithmic practices.