Authors: Luke Heemsbergen, Shiri Krebs, Radhika Gorur, Alexia Maddox
Pages: 73–87
Abstract: This paper maps the emergence and consequences of automated Algorithmic Performance Management (APM) in the context of higher education. After reviewing the evolution of productivity management in academia, it argues that surveillance via APM shifts expectations not just about effectiveness at work but also about how work, and the good worker, come to be defined. In our paradigmatic case study of Office 365, we specify how the automated surveillance of workforce practices is deployed to redefine productivity in higher education: productive workers become good data subjects as well as producers of papers, grants, and other traditional outputs of success. Our analysis suggests that performing well at work is managed in and by the platform via logics of the surveillance of wellness, time-regulation, and social connectivity to influence, manage, and control workers. We critique these automated performance measures in terms of platform capitalism, noting that Office 365’s Viva Insights functions as a telematic device of surveillance. The final section of the paper places these trends in Australia’s socio-legal context by showing how Viva is insufficient for considering performance given the range of practices that constitute “academic work,” including but not limited to the need for unmonitored activity. Yet, we observe that currently little can be done about Office 365’s surveillant presence, given a regulatory regime that by and large excludes productivity surveillance from the scope of regulated surveillance activities.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.15776
Issue No: Vol. 22, No. 2 (2024)

Authors: Neil Selwyn, Bronwyn Cumbo
Pages: 88–103
Abstract: Students’ laptops are subject to growing digital forms of surveillance by their schools. Following the theoretical tradition of media “domestication,” this paper examines the incorporation of Student Activity Monitoring Software (SAMS) into the everyday routines of one Australian secondary school. Drawing on two years of fieldwork, the paper details how SAMS was broadly accepted by school staff and students as complementing—rather than challenging—the moral economy of the school. With school leaders keen to increase student surveillance without being seen to diminish teacher professionalism, the paper shows how SAMS was appropriated and objectified in ways that teachers and students perceived as preserving established classroom power relations. At the same time, school leaders could maintain core school values while also projecting an appearance of innovation and being “data-driven.” However, rather than this surveillance system being wholly “tamed,” we also show how SAMS was leading to subtle shifts within the school—not least the surrendering of governance and accountability to the software company, alongside the further entrenchment of “soft surveillance” logics into classrooms.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.15791
Issue No: Vol. 22, No. 2 (2024)

Authors: Luke Munn
Pages: 104–119
Abstract: “Bossware” is software that monitors workers, tracking their activity and productivity in often hidden ways. This type of software has seen a surge of interest since the start of the pandemic, as managers attempt to retain oversight of workers in remote or distributed conditions. However, “bossware” is not monolithic but highly differentiated, with each product created by specific companies, with specific affordances, for specific purposes. This article thus builds a more articulated portrait of bossware by mapping the landscape. It first defines a schema based on the “expansiveness” and “invasiveness” of this software and maps key products along these two axes. It then develops a bossware typology ranging from spyware through to soft-bossware and productivity-ware, highlighting their differences in terms of data captured, userbases, perceived legitimacy, and existing safeguards. The article concludes by offering several approaches to investigating these technical regimes and stressing bossware as a site of both power and counterpower.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.16179
Issue No: Vol. 22, No. 2 (2024)

Authors: Elle Pearson, Rikke Bjerg Jensen, Peter Adey
Pages: 120–137
Abstract: Predictive and data-driven policing systems continue to proliferate around the world, enticing police forces with promises of improvements in efficiency and the ability to offer various ways of addressing the future to pre-empt, predict, or prevent crime. As more of these systems become operationalised in England and Wales, this paper takes up Duarte’s (2021) observation that there is a lack of description as to what such systems actually are. It adapts a social network methodology to explore what a data-driven policing system is. Using a police force in England, UK, as a case study, we provide a visualisation of a data-driven policing system based on the data flows it requires to operate. The paper shows how a disparate network of affiliate organisations act as collators of specific data types that are then used in a range of policing applications. We make visible how data travels from its source through various nodes and the potential points of translation that occur along the way. We show, as others have argued before us, that the data points used are proxies for poverty, making certain groups and sections of society highly visible to the digital system whilst other groups and crimes become less visible—and sometimes even hidden.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.15826
Issue No: Vol. 22, No. 2 (2024)

Authors: Beáta Paragi
Pages: 138–159
Abstract: Surveillance in the context of aid work refers to control over procedures, supplies, goods, and people that is deemed necessary for providing care. It is widely considered an inalienable, albeit criticized, part of care provision. International non-governmental organizations implementing aid projects in the Global South (hereafter INGOs or aid organizations), however, also screen individuals, either on the basis of conditional clauses in funding agreements in the context of counterterrorism or in pursuit of other organizational interests. While this opaque practice has raised increasing concerns in both humanitarian and development circles, much less is known about how screening is implemented and whether it can be construed as (harmful) surveillance. Therefore, qualitative methods were used to explore a screening tool, the description of which is the core empirical part of this study, and to map INGO experiences and dilemmas with screening. As the findings indicate, vendors delivering surveillance technology can help INGOs to navigate the complexity of sanctions and enforcement lists, ensure legal compliance, and demonstrate accountability towards donors, while the transparency obligation prescribed in data protection laws poses huge challenges. Furthermore, the right to be recognized, supported, assisted, and employed—either in the humanitarian or the development context—depends on how INGOs categorize individuals before screening and how they make decisions based on the results. The article contributes to earlier research by including screening in the conceptualization of (counter)surveillance in aid work.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.15634
Issue No: Vol. 22, No. 2 (2024)

Authors: Gabriele Jacobs, Friso Van Houdt, ginger coons
Pages: 160–178
Abstract: Technological surveillance for the sake of safeguarding public safety (e.g., cameras, sensors, mobile phones, OSINT) pervades the lives of individuals on many levels. In this article, we advance the idea that the addition of AI changes the way surveillance ecologies function and thus deserves to spawn its own concept: the surveillance AI-cology. Surveillance AI-cologies are made up of interconnected collections of disparate actors (technological, human, more-than-human, organisational, etc.), all implicated in AI-aided surveillance tasks. They contain not only the usual complexities of any technological ecosystem but also the added complexity of AI, with emergent characteristics, both technically and socially. We argue for the utility of multi-faceted perspectives in doing work within AI-cologies, and we describe an (anthropologically inspired) methodology for understanding and unpacking AI surveillance ecosystems. The development of democratically controlled AI surveillance requires the systematic consideration of ethical, legal, and social aspects (ELSA) within the quintuple helix (public, private, civil society, academia, nature). We stress the relevance of clearly defining which perspectives of the quintuple helix are considered in AI surveillance, and which are not, to achieve a transparent set of (ELSA) values that guide AI surveillance development and implementation. We provide an example of the way we have developed and applied (some of) these methodologies in the context of a test site for the development and application of smart city technology, a so-called “Living Lab.” Here we take the stance that academics should be actively involved as “critical friends” in complex innovation and assessment processes. Together with our conversation partners in the field, we tease out and reflect upon the (public safety) values embedded in the setup of the Living Lab we explore. We end with a call to understand surveillance AI-cologies not as a problem to be solved, but as a continuing process to be discussed among highly diverse stakeholders.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.16104
Issue No: Vol. 22, No. 2 (2024)

Authors: Tamara Shepherd
Pages: 179–191
Abstract: In 2020, the Office of the Privacy Commissioner of Canada (OPCC) led a joint federal-provincial investigation into privacy violations stemming from the use of facial recognition technologies. The investigation was prompted specifically by the mobilization of Clearview AI’s facial recognition software in law enforcement, including by regional police services as well as the Royal Canadian Mounted Police. Clearview AI’s technology is based on scraping social media images, which, as the investigation found, constitutes a privacy law violation according to provincial and federal private sector legislation. In response to the investigation, Clearview AI claimed that consent for scraping social media images was not required from users because the information is already public. This common fallacy of social media privacy serves as a pivot point for the integration of digital policy literacy into the OPCC’s digital literacy materials in order to consider the regulatory environment around digital media, alongside their political-economic and infrastructural components. Digital policy literacy is a model that expands what is typically an individual- or organization-level responsibility for privacy protection by considering the wider socio-technical context in which a company like Clearview can emerge.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.16300
Issue No: Vol. 22, No. 2 (2024)

Authors: Jeehyun Jenny Lee, Chloe Jae-Kyung Ahn
Pages: 192–204
Abstract: Through a case study of South Korean citizens’ YouTube quarantine vlogs, this study examines the cultural narratives and practices surrounding pandemic surveillance, namely the government-mandated quarantine monitored via the quarantine mobile app. Moving beyond the dichotomous understanding of surveillance as an act of control either to be resisted or accepted, we draw on the framework of playful surveillance and surveillance imaginaries and examine how Korean citizens creatively vlog their experience in quarantine. Through a critical visual analysis of forty quarantine YouTube vlogs, we illustrate how Korean citizens build playful surveillance imaginaries, which are imaginaries about surveillance constructed through playful frames that perceive participation in surveillance as agentive, pleasurable, and relational. Their playful surveillance imaginaries introduce novel ways of perceiving the self, surveillance technologies, and others in surveillance cultures and the relations that bring them together into a mutually beneficial and caring network. However, the subversive potential of this empowering and relational mode of surveillance may be limited by Korean society’s normative understanding of care.
PubDate: 2024-06-16
DOI: 10.24908/ss.v22i2.15809
Issue No: Vol. 22, No. 2 (2024)