Algorithmic Security Vision: Diagrams of Computer Vision Politics

Ruben van de Ven, Ildikó Zonga Plájás, Cyan Bae, Francesco Ragazzi

December 2023


3. Managing error: from the sublime to the risky algorithm

Our third emerging figuration concerns the place of the error. A large body of literature examines actual and speculative cases of algorithmic prediction based on self-learning systems (Azar et al., 2021). Central to these analyses is the boundary-drawing performed by such algorithmic devices, enacting (in)security by rendering their subjects as more- or less-risky others (Amicelle et al., 2015: 300; Amoore and De Goede, 2005; Aradau et al., 2008; Aradau and Blanke, 2018) based on a spectrum of individual and environmental features (Calhoun, 2023). In other words, these predictive devices conceptualize risk as something that is detected by, and thus external to, security technologies.

In this critical literature on algorithmic practices, practitioners working with algorithmic technologies are often critiqued for understanding software as “sublime” (e.g. Wilcox, 2017: 3). However, in our diagrams, algorithmic vision appears as a practice of managing error. The practitioners we interviewed are aware of the error-prone nature of their systems: they know these will never be perfect, and treat error as a key metric that needs to be acted upon.

The most prominent way in which error figures in the diagrams is in its quantified form of the true positive and false positive rates, TPR and FPR. The significance and definition of these metrics are stressed by CTO Gerwin van der Lugt (Diagram 6). In camera surveillance, the false positive rate could be described as the number of false positive classifications relative to the number of video frames being analyzed. While writing them down, van der Lugt corrected his initial formulations, as these definitions determine the work of his development team, the ways in which his clients — security operators — engage with the technology, and whether they perceive the output of the system as trustworthy.
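For reference, a minimal sketch of the metrics at stake, written in the conventional confusion-matrix notation (TP, FP, TN, FN); the per-frame variant on the right follows van der Lugt’s description of counting false positives against the number of analyzed video frames rather than against all non-violent instances:

\[
\mathrm{TPR} = \frac{TP}{TP + FN}, \qquad
\mathrm{FPR} = \frac{FP}{FP + TN}, \qquad
\mathrm{FPR}_{\text{per frame}} = \frac{FP}{\text{frames analyzed}}
\]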

Diagram 6. Gerwin van der Lugt corrects his initial definitions of the true positive and false positive rates, and stresses the importance of their precise definition.

The figuration of algorithmic security vision as inherently imprecise affects the operationalization of security practices. Van der Lugt’s example concerns whether the violence detection algorithm developed by Oddity.ai should be trained to categorize friendly fighting (stoeien) between friends as “violence” or not. In this context, van der Lugt finds it important to differentiate what counts as false positive in the algorithm’s evaluation metric from an error in the algorithm’s operationalization of a security question.

He gives two reasons to do so. First, he anticipates that the exclusion of stoeien from the category of violence would negatively impact the TPR. In the iterative development of self-learning systems, the TPR and FPR, together with the true and false negative rates, must perform a balancing act. Van der Lugt outlines that with their technology they aim for fewer than 100 false positives per 100 million frames per week. The FPR becomes indicative of the algorithm’s quality, as too many faulty predictions will desensitize the human operator to system alerts.
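To give a rough sense of scale for this target, the following back-of-the-envelope sketch translates it into false alerts per camera. The 25 frames-per-second rate is our assumption for illustration; the interview does not specify camera frame rates.

```python
# Back-of-the-envelope translation of "fewer than 100 false positives
# per 100 million frames per week" into per-camera figures.
# The frame rate is an assumed value for illustration only.

FPS = 25                                   # assumed frames per second per camera
frames_per_week = FPS * 60 * 60 * 24 * 7   # ~15.1 million frames per camera per week

target_fp_per_frame = 100 / 100_000_000    # i.e. 1 false positive per million frames

false_alerts = target_fp_per_frame * frames_per_week
print(f"~{false_alerts:.0f} false alerts per camera per week")
# At the assumed frame rate, the target allows roughly 15 false alerts
# per camera per week before operators risk being desensitized.
```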

This leads to van der Lugt’s second point: he fears that the exclusion of stoeien from the violence category might cause unexpected biases in the system. For example, instead of distinguishing violence from stoeien based on people’s body movements, the algorithm might make the distinction based on their age. For van der Lugt, this would be an undesirable and hard-to-notice form of discrimination. In developing algorithmic (in)security, error is figured not merely as a mathematical concept but (as shown in Diagram 6) as a notion that invites pre-emption — a mitigation of probable failure — for which the developer is responsible. The algorithmic condition of security vision is figured as the pre-emption of error.

Diagram 7. By drawing errors on a timeline, van Rest calls attention to the pre-emptive nature of error in the development process of computer vision technologies.

According to critical AI scholar Matteo Pasquinelli, “machine learning is technically based on formulas for error correction” (2019: 2). Therefore, any critical engagement with such algorithmic processes needs to go beyond citing errors, “for it is precisely through these variations that the algorithm learns what to do” (Amoore, 2019: 164), pushing us to reconsider any argument based on the inaccuracy of the systems.

The example of stoeien suggests that it is not so much a question of whether, or how much, these algorithms err, but of how these errors are anticipated and negotiated. Thus, taking error as a hallmark of machine learning, we can see how practices of (in)security become shaped by the notion of mathematical error well beyond their development stages. Error figures centrally in the development, acquisition and deployment of such devices. As one respondent indicated, predictive devices are inherently erroneous, but the quantification of their error makes them amenable to “risk management.”

While much has been written about security technologies as devices for risk management, little is known about how security technologies are themselves conceptualized as objects of risk management. What, then, happens in this double relation of risk? The figure of the error enters the diagrams as a mathematical concept; throughout the conversations we see it permeate the discourse around algorithmic security vision. By figuring algorithmic security vision through the notion of error, risk is placed at the heart of the security apparatus.

Con-figurations of algorithmic security vision: fragmenting accountability and expertise

In the previous section we explored the changing figurations of key dimensions of algorithmic security vision; in this section we examine how these figurations configure. For Suchman, working with configurations highlights “the histories and encounters through which things are figured into meaningful existence, fixing them through reiteration but also always engaged in ‘the perpetuity of coming to be’ that characterizes the biographies of objects as well as subjects” (Suchman, 2012: 50, emphasis ours). In other words, we are interested in the practices and tensions that emerge as figurations become embedded in material practices. We focus on two con-figurations that emerged in the interviews: the delegation of accountability to externally managed benchmarks, and the displacement of responsibility through the reconfiguration of the human-in-the-loop.

Delegating accountability to benchmarks

The first configuration is related to the evaluation of the error rate in the training of algorithmic vision systems: it involves datasets, benchmark institutions, and the idea of fairness as equal representation among different social groups. Literature on the ethical and political effects of algorithmic vision has notably focused on the distribution of errors, raising questions of ethnic and racial bias (e.g. Buolamwini and Gebru, 2018). Our interviews reflect the concerns of much of this literature, as the pre-emption of error figured repeatedly in relation to the uneven distribution of error across minorities or groups. In Diagram 8, Ádám Remport draws how different visual traits have often led to different error rates. While the overall error metric of an algorithmic system might seem “acceptable,” it can privilege particular groups, a disparity that remains invisible when only the aggregate is considered. Jeroen van Rest distinguishes such errors, as systemic biases, from the inherent imprecision of deep machine learning models (Diagram 7), since they perpetuate inequalities in the society in which the product is being developed.

Diagram 8. Ádám Remport describes how facial recognition technologies are often most accurate with white male adult faces, reflecting the datasets on which they are trained. The FPR is higher for people with darker skin, children, or women, which may result in false flagging and false arrests.
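Remport’s point, that an aggregate error rate can look acceptable while particular groups face much higher error rates, can be made concrete with a small, entirely hypothetical calculation; the numbers below are invented for illustration and do not come from the interviews or from any benchmark.

```python
# Hypothetical illustration of how an aggregate false positive rate (FPR)
# can hide disparities between demographic groups. All numbers are invented.

groups = {
    # group: (false positives, true negatives)
    "majority group": (10, 99_990),   # FPR = 0.01%
    "minority group": (10, 9_990),    # FPR = 0.10%
}

def fpr(fp: int, tn: int) -> float:
    return fp / (fp + tn)

for name, (fp, tn) in groups.items():
    print(f"{name}: FPR = {fpr(fp, tn):.3%}")

total_fp = sum(fp for fp, _ in groups.values())
total_tn = sum(tn for _, tn in groups.values())
print(f"aggregate FPR = {fpr(total_fp, total_tn):.3%}")
# The aggregate (~0.018%) sits close to the majority group's 0.01%,
# while the minority group faces a tenfold higher false positive rate.
```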

To mitigate these concerns and manage their risk, many of our interviewees who develop and implement these technologies externalize the reference against which the error is measured. They turn to a benchmark run by the American National Institute of Standards and Technology (NIST), which ranks facial recognition technologies from different companies by their error metrics across groups. John Riemen, who is responsible for the use of forensic facial recognition technology at the Center for Biometrics of the Dutch police, describes how their choice of software is driven by a public tender that demands a “top-10” score on the NIST benchmark. The mitigation of bias is thus outsourced to an external, and in this case foreign, institution.

We see in this outsourcing of error metrics a form of delegation that brings about a specific regime of (in)visibility. While a particular kind of algorithmic bias is rendered central to the NIST benchmark, the mobilization of this reference obfuscates questions on how that metric was achieved. That is to say, questions about training data are invisibilized, even though that data is a known site of contestation. For example, the NIST benchmark datasets are known to include faces of wounded people (Keyes, 2019). The Clearview company is known to use images scraped illegally from social media, and IBM uses a dataset that is likely in violation of European GDPR legislation (Bommasani et al., 2022: 154). Pasquinelli (2019) argued that machine learning models ultimately act as data compressors: enfolding and operationalizing imagery of which the terms of acquisition are invisibilized.

Attention to this invisibilization reveals a discrepancy between the developers and the implementers of these technologies. On the one hand, the developers we interviewed expressed concerns about how their training data is constituted so as to achieve the best possible balance between false positive and true positive rates (FPR/TPR), as well as about the legality of the data they use to train their algorithms. On the other hand, questions about the constitution of the dataset have been virtually non-existent in our conversations with those who implement software that relies on models trained with such data. Occasionally this knowledge was considered part of the developers’ intellectual property that had to be kept as a trade secret. A high score on the benchmark is enough to settle questions of fairness, legitimizing the use of the algorithmic model. Thus, while an algorithm indirectly relies on the source data, that data is no longer deemed relevant in its assessment. This illustrates how the invisibilization of the dataset, “compressed” (in Pasquinelli’s terms) into a model, together with the formalization of guiding metrics into a benchmark, permits a bracketing of accountability. One does not need to know how outcomes are produced, as long as the benchmarks are in order.

The configuration of algorithmic vision’s bias across a complex network of fragmented locations and actors, from the dataset, to the algorithm, to the benchmark institution, reveals the selective processes of (in)visibilization. This opens up fruitful avenues for new empirical research: What are the politics of the benchmark as a mechanism of legitimization? How does the outsourcing of the assessment of error distribution affect attention to bias? How has the critique of bias been institutionalized by the security industry, resulting in the externalization of accountability through dis-location and fragmentation?

Reconfiguring the human-in-the-loop

A second central question linked to the delegation of accountability is the configuration in which the security operator is located. The effects of delegation and fragmentation, in which the mitigation of algorithmic errors is outsourced to an external party, become visible in the ways in which the role of the security operator is configured in relation to the institution they work for, the software’s assessment, and the affected publics.

The public critique of algorithms has often construed the human-in-the-loop as one of the last lines of defense in the resistance to automated systems, able to filter and correct erroneous outcomes (Markoff, 2020). The literature in critical security studies has, however, problematized the representation of the security operator in algorithmic assemblages by discussing how algorithmic predictions appear on their screen (Aradau and Blanke, 2018), and how the embodied decision making of the operator is entangled with the algorithmic assemblage (Wilcox, 2017). Moreover, the operator is often left guessing at the workings of the device that provides them with the information on which to base their decisions (Møhl, 2021).

What our participants’ diagrams emphasized is how a whole spectrum of system designs emerges in response to similar questions, for example the issue of algorithmic bias. A primary difference can be found in the degree of understanding of the systems that is expected of security operators, as well as in their perceived autonomy. Sometimes the human operator is central to the system’s operation, forming the interface between the algorithmic systems and surveillance practices. Gerwin van der Lugt, developer of software at Oddity.ai that detects criminal behavior, argues that “the responsibility for how to deal with the violent incidents is always [on a] human, not the algorithm. The algorithm just detects violence—that’s it—but the human needs to deal with it.”

Dirk Herzbach, chief of police at the Police Headquarters Mannheim, adds that when alerted to an incident by the system, the operator decides whether to deploy a police car. Both Herzbach and Van der Lugt figure the human-in-the-loop as having full agency and responsibility in operating the (in)security assemblage (cf. Hoijtink and Leese, 2019).

Some interviewees drew a diagram in which the operator is supposed to be aware of the ways in which the technology errs, so they can address them. Several other interviewees considered the technical expertise of the human-in-the-loop to be unimportant, even a hindrance.

Chief of police Herzbach prefers an operator to have patrol experience, so that they can assess which situations require intervention. He is concerned that knowledge about algorithmic biases might interfere with such decisions. In the case of the Moscow metro, where a facial recognition system has been deployed for ticket purchases and for opening access gates, the human-in-the-loop is reconfigured as an end user who needs to be shielded from the algorithm’s operation (cf. Lorusso, 2021). On these occasions, expertise on the technological creation of the suspect becomes fragmented.

These different figurations of the security operator are held together by the idea that the human operator is the expert on the subject of security, and is expected to make decisions independently of the information that the algorithmic system provides.

Diagram 9. Riemen explains the process of information filtering that is involved in querying the facial recognition database of the Dutch police.

Other drivers exist, however, to shield the operator from the algorithm’s functioning, challenging individual expertise and acknowledging the fallibility of human decision making. In Diagram 9, John Riemen outlines the use of facial recognition by the Dutch police. He describes how information about the police case and about the algorithmic assessment is filtered out as much as possible from what is provided to the operator. This, Riemen suggests, might reduce bias in the final decision. He adds that there should be no fewer than three humans-in-the-loop who operate independently, to increase the accuracy of the algorithmic security vision.
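A stylized way to see why independent operators could increase accuracy (our illustration; Riemen does not quantify this): if each of three reviewers were to err on a given case with probability p, and their judgments were truly independent, the probability that all three err would be

\[
P(\text{all three err}) = p^{3}, \qquad \text{e.g. } p = 0.1 \;\Rightarrow\; p^{3} = 0.001.
\]

The gain depends on the independence assumption, which is precisely what the filtering of case and assessment information is meant to support.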

Instead of increasing their number, there is another configuration of the human-in-the-loop that responds to the fallibility of the operator. For the Burglary-Free Neighborhood project in Rotterdam, project manager Guido Delver draws surveillance as operated by neighborhood residents, through a system that they own themselves. By involving different stakeholders, Delver hopes to counter government hegemony over the surveillance apparatus. However, residents are untrained in assessing algorithmic predictions, which raises new challenges. Delver illustrates a scenario in which the algorithmic signaling of a potential burglary may have dangerous consequences: “Does it invoke the wrong behavior from the citizen? [They could] go out with a bat and look for the guy who has done nothing [because] it was a false positive.” In this case, the worry is that the erroneous predictions will not be questioned. Therefore, in Delver’s project the goal was to actualize an autonomous system, “with as little interference as possible.” Human participation, or “interference,” in the operation is seen as potentially harmful. Thus, figuring the operator — whether police officer or neighborhood resident — as risky can lead to the relegation of direct human intervention.

By looking at the figurations of the operator that appear in the diagrams, we see multiple and heterogeneous configurations of regulations, security companies, and professionals. In each configuration, the human-in-the-loop appears in a different form. The operator often holds the final responsibility for the ethical functioning of the system. At times they are configured as experts in sophisticated but error-prone systems; at others they are figured as end users who are activated by the alerts generated by the system, and who need not understand how the software works and errs, or who can be left out altogether.

These configurations remind us that there cannot be any theorization of “algorithmic security vision,” whether of its empirical workings or of its ethical and political consequences, without close attention to the empirical contexts in which these configurations are arranged. Each organization of datasets, algorithms, benchmarks, hardware and operators poses specific problems. And each contains a specific politics of visibilization, invisibilization, responsibility and accountability.

A diagram of research

In this conclusion, we reflect upon a final dimension of the method of diagramming in the context of figurations and configurations: its potential as an alternative to the conventional research program.

Indeed, while writing this text, the search for a coherent structure through which we could map the problems that emerged from analyzing the diagrams into a straightforward narrative proved elusive. We considered various organizational frameworks, but consistently encountered resistance from one or two sections. It became evident that our interviews yielded a rhizome of interrelated problems, creating a multitude of possible inquiries and overlapping trajectories. Some dimensions of these problems are related to one another, but not to every other problem.

If we take, for example, the understanding of algorithmic security vision as a practice of error management as a starting point, we see how the actors we interviewed have incorporated the societal critique of algorithmic bias. This critique serves as a catalyst for novel strategies aimed at mitigating the repercussions of imperfect systems. It has driven the development of synthetic datasets, which promise equitable representation across diverse demographic groups. It has also motivated the reliance on institutionalized benchmarks to assess the impartiality of algorithms. Moreover, different configurations of the human-in-the-loop emerge, all promising to rectify algorithmic fallibility. Here we see a causal chain.

But how does the question of algorithmic error relate to the shift from photographic to cinematic vision that algorithmic security vision brings about? Certainly, there are reverberations. The relegation of stable identity that we outlined could be seen as a way to mitigate the impact of those errors. But it would be a leap to identify these questions of error as the central driver for the increased incorporation of moving images in algorithmic security vision.

However, if we take as our starting point the formidable strides in computing power and the advancements in camera technologies, we face similar problems. These developments make the analysis of movement possible and help to elucidate the advances in real-time analysis that are required to remove the human-in-the-loop, as trialed in the Burglary-Free Neighborhood. They also account for the feasibility of synthetic data generation, a compute-intensive process which opens a vast horizon of possibilities for developers to detect objects or actions. Such an account, however, does not address the need for such synthetic datasets. A focus on the computation of movement, by contrast, highlights how a lack of training data necessitates many of the practices described. Synthetic data is necessitated by the glaring absence of pre-existing security datasets that contain moving bodies. While facial recognition algorithms could be trained and operated on quickly repurposed photographic datasets of national identity cards or drivers’ license registries, no comparable dataset of moving bodies has been available for states or corporations to repurpose. This absence of training data requires programmers to stage scenes for the camera. Thus, while one issue contains echoes of the other, the network of interrelated problematizations cannot be flattened into a single narrative.

The constraints imposed by the linear structure of an academic article certainly necessitate a specific ordering of sections. Yet the different research directions we highlight form something else. The multiple figurations analyzed here generate fresh tensions when put in relation with security and political practices. What appears from the diagrams is a network of figurations in various configurations. Instead of a research program, our interviews point toward a larger research diagram of interrelated questions, which invites us to think in terms of pathways through this dynamic and evolving network of relations.

References

Ajana B (2013) Governing Through Biometrics. London: Palgrave Macmillan UK. DOI: 10.1057/9781137290755.

Amicelle A, Aradau C and Jeandesboz J (2015) Questioning security devices: Performativity, resistance, politics. Security Dialogue 46(4): 293–306. DOI: 10.1177/0967010615586964.

Amoore L (2014) Security and the incalculable. Security Dialogue 45(5). SAGE Publications Ltd: 423–439. DOI: 10.1177/0967010614539719.

Amoore L (2019) Doubt and the algorithm: On the partial accounts of machine learning. Theory, Culture & Society 36(6). SAGE Publications Ltd: 147–169. DOI: 10.1177/0263276419851846.

Amoore L (2021) The deep border. Political Geography. Elsevier: 102547.

Amoore L and De Goede M (2005) Governance, risk and dataveillance in the war on terror. Crime, Law and Social Change 43(2): 149–173. DOI: 10.1007/s10611-005-1717-8.

Andersen RS (2015) Remediating Security. 1st edition. Ph.d.-serien. Copenhagen: Københavns Universitet, Institut for Statskundskab.

Andersen RS (2018) The art of questioning lethal vision: Mosse’s infra and militarized machine vision. In: Proceedings of EVA Copenhagen 2018, 2018.

Andrejevic M and Burdon M (2015) Defining the sensor society. Television & New Media 16(1): 19–36. DOI: 10.1177/1527476414541552.

Aradau C and Blanke T (2015) The (big) data-security assemblage: Knowledge and critique. Big Data & Society 2(2): 205395171560906. DOI: 10.1177/2053951715609066.

Aradau C and Blanke T (2018) Governing others: Anomaly and the algorithmic subject of security. European Journal of International Security 3(1). Cambridge University Press: 1–21. DOI: 10.1017/eis.2017.14.

Aradau C, Lobo-Guerrero L and Van Munster R (2008) Security, technologies of risk, and the political: Guest editors’ introduction. Security Dialogue 39(2-3): 147–154. DOI: 10.1177/0967010608089159.

Azar M, Cox G and Impett L (2021) Introduction: Ways of machine seeing. AI & SOCIETY. DOI: 10.1007/s00146-020-01124-6.

Bae G, de La Gorce M, Baltrušaitis T, et al. (2023) DigiFace-1M: 1 million digital face images for face recognition. In: 2023 IEEE Winter Conference on Applications of Computer Vision (WACV), 2023. IEEE.

Barad KM (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press.

Bellanova R, Irion K, Lindskov Jacobsen K, et al. (2021) Toward a critique of algorithmic violence. International Political Sociology 15(1): 121–150. DOI: 10.1093/ips/olab003.

Bigo D (2002) Security and immigration: Toward a critique of the governmentality of unease. Alternatives 27. SAGE Publications Inc: 63–92. DOI: 10.1177/03043754020270S105.

Bigo D and Guild E (2005) Policing at a distance: Schengen visa policies. In: Controlling Frontiers. Free Movement into and Within Europe. Routledge, pp. 233–263.

Bommasani R, Hudson DA, Adeli E, et al. (2022) On the opportunities and risks of foundation models. Available at: http://arxiv.org/abs/2108.07258 (accessed 2 June 2023).

Bousquet AJ (2018) The Eye of War. Minneapolis: University of Minnesota Press.

Bucher T (2018) If...Then: Algorithmic Power and Politics. New York: Oxford University Press.

Buolamwini J and Gebru T (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research 81.

Calhoun L (2023) Latency, uncertainty, contagion: Epistemologies of risk-as-reform in crime forecasting software. Environment and Planning D: Society and Space. SAGE Publications Ltd STM: 02637758231197012. DOI: 10.1177/02637758231197012.

Carraro V (2021) Grounding the digital: A comparison of Waze’s ‘avoid dangerous areas’ feature in Jerusalem, Rio de Janeiro and the US. GeoJournal 86(3): 1121–1139. DOI: 10.1007/s10708-019-10117-y.

Dawson-Howe K (2014) A Practical Introduction to Computer Vision with OpenCV. 1st edition. Chichester, West Sussex, United Kingdon; Hoboken, NJ: Wiley.

Dijstelbloem H, van Reekum R and Schinkel W (2017) Surveillance at sea: The transactional politics of border control in the Aegean. Security Dialogue 48(3): 224–240. DOI: 10.1177/0967010617695714.

Farocki H (2004) Phantom images. Public. Available at: https://public.journals.yorku.ca/index.php/public/article/view/30354 (accessed 6 March 2023).

Fisher DXO (2018) Situating border control: Unpacking Spain’s SIVE border surveillance assemblage. Political Geography 65: 67–76. DOI: 10.1016/j.polgeo.2018.04.005.

Fourcade M and Gordon J (2020) Learning like a state: Statecraft in the digital age.

Fourcade M and Johns F (2020) Loops, ladders and links: The recursivity of social and machine learning. Theory and Society: 1–30. DOI: 10.1007/s11186-020-09409-x.

Fraser A (2019) Curating digital geographies in an era of data colonialism. Geoforum 104. Elsevier: 193–200.

Galton F (1879) Composite portraits, made by combining those of many different persons into a single resultant figure. The Journal of the Anthropological Institute of Great Britain and Ireland 8. [Royal Anthropological Institute of Great Britain; Ireland, Wiley]: 132–144. DOI: 10.2307/2841021.

Gandy OH (2021) The Panoptic Sort: A Political Economy of Personal Information. Oxford University Press. Available at: https://books.google.com?id=JOEsEAAAQBAJ.

Gillespie T (2018) Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Illustrated edition. Yale University Press.

Goodwin C (1994) Professional vision. American Anthropologist 96(3).

Graham S (1998) Spaces of surveillant simulation: New technologies, digital representations, and material geographies. Environment and Planning D: Society and Space 16(4). SAGE Publications Sage UK: London, England: 483–504.

Graham SD (2005) Software-sorted geographies. Progress in human geography 29(5). Sage Publications Sage CA: Thousand Oaks, CA: 562–580.

Grasseni C (2004) Skilled vision. An apprenticeship in breeding aesthetics. Social Anthropology 12(1): 41–55. DOI: 10.1017/S0964028204000035.

Grasseni C (2018) Skilled vision. In: Callan H (ed.) The International Encyclopedia of Anthropology. 1st ed. Wiley, pp. 1–7. DOI: 10.1002/9781118924396.wbiea1657.

Haraway D (1988) Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies 14(3). Feminist Studies, Inc.: 575–599. DOI: 10.2307/3178066.

Hoijtink M and Leese M (2019) How (not) to talk about technology: International relations and the question of agency. In: Hoijtink M and Leese M (eds) Technology and Agency in International Relations. Emerging technologies, ethics and international affairs. London ; New York: Routledge, pp. 1–24.

Hopman R and M’charek A (2020) Facing the unknown suspect: Forensic DNA phenotyping and the oscillation between the individual and the collective. BioSocieties 15(3): 438–462. DOI: 10.1057/s41292-020-00190-9.

Hunger F (2023) Unhype artificial ’intelligence’! A proposal to replace the deceiving terminology of AI. 12 April. Zenodo. DOI: 10.5281/zenodo.7524493.

Huysmans J (2022) Motioning the politics of security: The primacy of movement and the subject of security. Security Dialogue 53(3): 238–255. DOI: 10.1177/09670106211044015.

Isin E and Ruppert E (2020) The birth of sensory power: How a pandemic made it visible? Big Data & Society 7(2). SAGE Publications Ltd: 2053951720969208. DOI: 10.1177/2053951720969208.

Jasanoff S (2004) States of Knowledge: The Co-Production of Science and Social Order. Routledge Taylor & Francis Group.

Ji Z, Lee N, Frieske R, et al. (2023) Survey of hallucination in natural language generation. ACM Computing Surveys 55(12): 1–38. DOI: 10.1145/3571730.

Keyes O (2019) The gardener’s vision of data: Data science reduces people to subjects that can be mined for truth. Real Life Mag. Available at: https://reallifemag.com/the-gardeners-vision-of-data/.

Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Clarendon Lectures in Management Studies. Oxford; New York: Oxford University Press.

Leese M (2015) ‘We were taken by surprise’: Body scanners, technology adjustment, and the eradication of failure. Critical Studies on Security 3(3). Routledge: 269–282. DOI: 10.1080/21624887.2015.1124743.

Leese M (2019) Configuring warfare: Automation, control, agency. In: Hoijtink M and Leese M (eds) Technology and Agency in International Relations. Emerging technologies, ethics and international affairs. London ; New York: Routledge, pp. 42–65.

Lorusso S (2021) The user condition. Available at: https://theusercondition.computer/ (accessed 18 February 2021).

Lyon D (2003) Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination. Psychology Press. Available at: https://books.google.com?id=yCLFBfZwl08C.

Mackenzie A (2017) Machine Learners: Archaeology of a Data Practice. The MIT Press. DOI: 10.7551/mitpress/10302.001.0001.

Maguire M, Frois C and Zurawski N (eds) (2014) The Anthropology of Security: Perspectives from the Frontline of Policing, Counter-Terrorism and Border Control. Anthropology, culture and society. London: Pluto Press.

Mahony M (2021) Geographies of science and technology 1: Boundaries and crossings. Progress in Human Geography 45(3): 586–595. DOI: 10.1177/0309132520969824.

Markoff J (2020) Robots will need humans in future. The New York Times: Section B, 22 May. New York. Available at: https://www.nytimes.com/2020/05/21/technology/ben-shneiderman-automation-humans.html (accessed 31 October 2023).

McCosker A and Wilken R (2020) Automating Vision: The Social Impact of the New Camera Consciousness. 1st edition. Routledge.

Møhl P (2021) Seeing threats, sensing flesh: Human–machine ensembles at work. AI & SOCIETY 36(4): 1243–1252. DOI: 10.1007/s00146-020-01064-1.

Muller B (2010) Security, Risk and the Biometric State. Routledge. DOI: 10.4324/9780203858042.

O’Sullivan S (2016) On the diagram (and a practice of diagrammatics). In: Schneider K, Yasar B, and Lévy D (eds) Situational Diagram. New York: Dominique Lévy, pp. 13–25.

Olwig KF, Grünenberg K, Møhl P, et al. (2019) The Biometric Border World: Technologies, Bodies and Identities on the Move. 1st ed. Routledge. DOI: 10.4324/9780367808464.

Pasquinelli M (2015) Anomaly detection: The mathematization of the abnormal in the metadata society. Panel presentation.

Pasquinelli M (2019) How a machine learns and fails – a grammar of error for artificial intelligence. Available at: https://spheres-journal.org/contribution/how-a-machine-learns-and-fails-a-grammar-of-error-for-artificial-intelligence/ (accessed 13 January 2021).

Pugliese J (2010) Biometrics: Bodies, Technologies, Biopolitics. New York: Routledge. DOI: 10.4324/9780203849415.

Schurr C, Marquardt N and Militz E (2023) Intimate technologies: Towards a feminist perspective on geographies of technoscience. Progress in Human Geography. SAGE Publications Ltd: 03091325231151673. DOI: 10.1177/03091325231151673.

Soon W and Cox G (2021) Aesthetic Programming: A Handbook of Software Studies. London: Open Humanities Press. Available at: http://www.openhumanitiespress.org/books/titles/aesthetic-programming/ (accessed 9 March 2021).

Srnicek N and De Sutter L (2017) Platform Capitalism. Theory redux. Cambridge, UK ; Malden, MA: Polity.

Stevens N and Keyes O (2021) Seeing infrastructure: Race, facial recognition and the politics of data. Cultural Studies 35(4-5): 833–853. DOI: 10.1080/09502386.2021.1895252.

Suchman L (2006) Human-Machine Reconfigurations: Plans and Situated Actions. 2nd edition. Cambridge University Press.

Suchman L (2012) Configuration. In: Inventive Methods. Routledge, pp. 48–60.

Suchman L (2020) Algorithmic warfare and the reinvention of accuracy. Critical Studies on Security 8(2). Routledge: 175–187. DOI: 10.1080/21624887.2020.1760587.

Sudmann A (2021) Artificial neural networks, postdigital infrastructures and the politics of temporality. In: Volmar A and Stine K (eds) Media Infrastructures and the Politics of Digital Time. Amsterdam University Press, pp. 279–294. DOI: 10.1515/9789048550753-017.

Tazzioli M (2018) Spy, track and archive: The temporality of visibility in Eurosur and Jora. Security Dialogue 49(4): 272–288. DOI: 10.1177/0967010618769812.

Thatcher J, O’Sullivan D and Mahmoudi D (2016) Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment and Planning D: Society and Space 34(6). SAGE Publications Ltd STM: 990–1006. DOI: 10.1177/0263775816633195.

Uliasz R (2020) Seeing like an algorithm: Operative images and emergent subjects. AI & SOCIETY. DOI: 10.1007/s00146-020-01067-y.

van de Ven R and Plájás IZ (2022) Inconsistent projections: Con-figuring security vision through diagramming. A Peer-Reviewed Journal About 11(1): 50–65. DOI: 10.7146/aprja.v11i1.134306.

Wilcox L (2017) Embodying algorithmic war: Gender, race, and the posthuman in drone warfare. Security Dialogue 48(1): 11–28. DOI: 10.1177/0967010616657947.

Zuboff S (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First edition. New York: Public Affairs.


  1. The interface software and code are available at https://git.rubenvandeven.com/security_vision/svganim and https://gitlab.com/security-vision/chronodiagram

  2. The interviews were conducted in several European countries: the majority in the Netherlands, but also in Belgium, Hungary and Poland. Based on an initial survey of algorithmic security vision practices in Europe we identified various roles that are involved in such practices. Being a rather small group of people, these interviewees do not serve as “illustrative representatives” (Mol & Law 2002, 16-17) of the field they work in. However, as the interviewees have different cultural and institutional affiliations, and hold different positions in working with algorithms, vision and security, they cover a wide spectrum of engagements with our research object.

  3. The interviews were conducted by the first two authors, and at a later stage by Clemens Baier. The conversations were largely unstructured, but began with two basic questions. First, we asked the interviewees if they use diagrams in their daily practice. We then asked: “when we speak of ‘security vision’ we speak of the use of computer vision in a security context. Can you explain from your perspective what these concepts mean and how they come together?” After the first few interviews, we identified some recurrent themes, which we then specifically asked later interviewees to discuss.

  4. The use of anthropomorphizing terms such as “neural networks,” “learning” and “training” to denote algorithmic configurations and processes has been argued to hype “artificial intelligence.” While we support the need for an alternative terminology as proposed by Hunger (2023), here we preserve the language of our interviewees.
