Authors: Matthew Finch, Malka Older, Carissa Véliz, Annina Lux
Pages: 236–245
Abstract: In this dialogue, we explore the use of scenarios to inform thinking about the surveillant dimensions of AI systems. The aim is neither to predict times to come nor to express a desired state, but to manufacture contrasting future visions that challenge assumptions existing in the present. To consider these issues, we convened four researcher-practitioners—Carissa Véliz, Malka Older, Annina Lux, and Matthew Finch—whose work encompasses AI and privacy ethics, strategic foresight, philosophy, social science, and the writing of science fiction.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16101
Issue No: Vol. 21, No. 3 (2023)

Authors: Mike Zajko
Pages: 246–258
Abstract: This article examines the “digital welfare state” historically, presently, and into the future, with a focus on what artificial intelligence means for welfare surveillance. Drawing on scholarship about the development of bureaucracy, the welfare state, and automation, as well as specific examples from the Netherlands, I argue that problems posed by artificial intelligence in public administration are often misplaced or misattributed and that the societal challenges we can expect to encounter in welfare surveillance are more likely to be historically familiar than technologically novel. New technologies do provide some new capabilities, which explains the uptake of algorithmic tools in welfare fraud investigation and the use of chatbots in assisting with welfare applications. Algorithmic systems are also increasingly subject to “audits” and regulations that mandate accountability. However, many of the key issues in the automation of the welfare state are the same as those identified in scholarship that long precedes the current hype around artificial intelligence. These issues include a persistent suspicion of welfare recipients used to justify surveillance as a form of fraud identification, opaque decision-making, and punitive measures directed against marginalized groups, enacting harm and reproducing inequalities.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16107
Issue No: Vol. 21, No. 3 (2023)

Authors: Aaron Gluck-Thaler
Pages: 259–268
Abstract: This research note considers how scholars of surveillance might approach the historical legacies that surveillance through artificial intelligence (AI) is implicated in. Engaging with the relative lack of historical studies within the pages of Surveillance & Society, the note argues that in the context of surveillant AI the stakes of an ahistorical analysis are especially high. Bridging scholarship within the history of science with surveillance studies, the note explores how AI techniques today reanimate a longer history of how scientific knowledge production on classification has been coextensive with the maintenance and production of racial, gender, and social hierarchies. The note briefly examines one genealogy—the history of the field of pattern recognition, its relationship to state surveillance, and its understanding of identification as a problem of classification—to consider how surveillance and AI contingently converged. The note concludes by showing how such histories can help scholars of surveillance critically reassess common understandings of the consequences of AI and AI-adjacent surveillance practices used today.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16109
Issue No: Vol. 21, No. 3 (2023)

Authors: Elise Racine
Pages: 269–275
Abstract: The COVID-19 pandemic has vastly accelerated the digitalization of public health practices worldwide. In doing so, it has fostered a new class of pandemic-related technological solutions, a subset of which utilize artificial intelligence for contact tracing purposes. The People’s Republic of China has not been immune from this rush to implement these novel tools. But there is a darker element to the country’s Alipay Health Code mobile application that extends beyond pandemic preparedness. With ambitions to further incorporate the app into their already vast surveillance apparatus, China is on the precipice of setting a dangerous precedent for pervasive, state-sponsored automated social control. In such a world, we may see health tools co-opted into systems that score individuals on their political fealty. As such, they have the potential to severely undercut democratic ideals by restricting the freedom to dissent necessary to uphold such values. They would do all this under the guise of promoting collective wellbeing.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16111
Issue No: Vol. 21, No. 3 (2023)

Authors: Mark Swartz, Kelly McElroy
Pages: 276–281
Abstract: This paper examines the use of AI-driven surveillance technologies in higher education, with a focus on the academic surveillance of students. We begin with an introduction highlighting and exploring the issues related to these tools as used in academia, and then we walk readers through a hypothetical week in the life of a university student, highlighting applications of AI-driven surveillance technology that are increasingly widespread in higher education in North America. We finish with a reflection on the narrative and suggest some considerations for institutions adopting these types of technologies.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16105
Issue No: Vol. 21, No. 3 (2023)

Authors: Janet Chan
Pages: 287–287
Abstract: The use of artificial intelligence in facial recognition systems has been controversial. Among the issues of concern is the accuracy of such systems in recognising the faces of non-white people. This work turns the debate on its head by showing six images of AI-generated faces produced using identical prompts that include the words “Asian woman” and “facial recognition biometrics person technology” via Text 2 Dream in Deep Dream Generator. Rather than investigating the level of accuracy in facial recognition systems, it demonstrates how a particular AI software creates visual representations of “Asian women.” The experiment explores the interaction between text (prompt) and a particular generative algorithm. It raises questions about the data on which the algorithm is trained, how images are labelled/interpreted in training data, and the underlying power AI algorithms have in reproducing/changing stereotypes. Not transparent to the viewers is the role of the artist in selecting/framing prompts and “starter” images.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16102
Issue No: Vol. 21, No. 3 (2023)

Authors: Kirk Jalbert, Matthew Cutler, Teal Guetschow, Noa Bruhis
Pages: 288–303
Abstract: Amendment 23 (A23) to the Northeast Multispecies Fisheries Management Plan will remake monitoring systems for the Northeast US commercial groundfish fishery. In addition to substantially increasing monitoring coverage, A23 will provide fishers with the option to utilize electronic monitoring (EM) technologies in place of human at-sea observers. Based on twenty-six interviews with representatives of the fishing industry, nongovernmental organizations, regulatory agencies, EM service providers, and other stakeholder groups, this paper examines how the fishery is planning for the adoption of EM. We focus on the differing perspectives on the value of EM as an appropriate tool for protecting the fishery, and as a tool of surveillance that may transform the lives of fishers. We find that while most stakeholders support the use of EM in the future, mistrust within the industry—based on historical regulatory failures, perceived lack of information on technical feasibility, privacy and data ownership issues, and the unknown long-term costs to vessel owners—poses significant barriers to successful adoption of these technologies. We conclude that these barriers can be overcome by investing in co-management driven EM implementations that draw on the expertise of fishers and increase their autonomy over their vessels and their use of data. This study offers critical insights into the conflicting sociotechnical imaginaries that co-produce spaces of surveillance for natural resource management, as well as providing important findings for the fishery as A23 moves into implementation phases.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.15790
Issue No: Vol. 21, No. 3 (2023)

Authors: Anna Wilson, Jen Ross
Pages: 304–316
Abstract: Surveillance practices have become increasingly widespread in Higher Education. Students and staff are monitored both physically and digitally, using a range of technologies and for a variety of purposes. Many technologies and systems introduced for other reasons (e.g., for resource sharing, communication, or collaborative work) offer additional surveillance capacities, either as designed-in or incidental features. These surveillance practices, whether already realised or present as possibilities, have the potential to profoundly change Higher Education both as a sector and as a process. There is thus a need for those working (and studying) in the sector to recognise and thus have the opportunity to question or resist these changes. This paper describes an attempt to use participatory speculative fiction to enable this recognition and articulation. It illustrates the power of the surveillance imaginaries that emerge from this approach to reveal deep and complex connections between surveillance, anonymity, knowledge, and power.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.16025
Issue No: Vol. 21, No. 3 (2023)

Authors: Holly Blackmore, Sarah Logan, Janet Chan, Lyria Bennett Moses
Pages: 317–333
Abstract: The widespread availability of personal data on the internet has given rise to significant concerns about the power and reach of state and corporate surveillance of the population. Researchers have suggested that ordinary citizens generally lack knowledge of and control over online personal data, and that this has led to a sense of resignation in relation to such surveillance. This paper conceptualises public attitudes towards state surveillance within Jasanoff’s (2015) “sociotechnical imaginaries” framework and draws on an Australian survey to examine the complexity and contradictory nature of these attitudes in response to hypothetical use cases. Our study provides estimates of the prevalence of competing sociotechnical imaginaries, ranging from sizeable support for the dominant vision that surveillance can prevent/pre-empt crime/terrorism, to smaller but not insignificant support for either a dystopian or an ambivalent vision recognising the risks of such surveillance. Our results also demonstrate how sociotechnical imaginaries vary by demographics, political orientation, and perception of both citizen-state relations and the effectiveness of state surveillance practices.
PubDate: 2023-09-24
DOI: 10.24908/ss.v21i3.14894
Issue No: Vol. 21, No. 3 (2023)
