Abstract: The following paper offers a political and philosophical reading of ethically informed technological design practices to critically tackle the implicit regulative ideal in the design of social media as a means to digitally represent the liberal public sphere. The paper proposes that, in the case of social media platforms, understood together with the machine learning algorithms embedded in them as algorithmic technologies, ethically informed design carries an implicit conception of democracy that parallels Jürgen Habermas’ procedural democracy (Habermas, J. (1994). Three Normative Models of Democracy. Constellations, 1(1)). That is, that democratic practices are encodable as procedures that produce valid discussion forums. In opposition to this, the paper suggests a turn to philosopher Jacques Rancière’s conception of politics as a guiding attitude towards technological design. This is done, on the one hand, by using Rancière’s notions of “disagreement” and “distribution of the sensible” as the political starting point for the practice of designing algorithmic technologies and, on the other, inspired by Karen Barad’s theory of agential realism (Barad, K. (2003). Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831. https://doi.org/10.1086/345321), by putting forward a political ontology of algorithmic technologies that reconceptualizes them in terms of how they assemble open-ended practices between human bodies and technological devices. PubDate: 2024-08-06
Abstract: This article critically addresses current debates on the digital transformation of the public sphere. It engages two contrasting responses to this transformation: the school of destruction, which expresses pessimism about the design of social media, and the school of restoration, which advocates for the redesign of social media to align with normative conceptions of the public sphere. So far, however, these responses have omitted an explicit philosophical reflection on the relationship between politics, technology and design. After tracing the current discourse on politics and technology back to the Platonic tradition of political thought, I propose to re-arrange the relationship between poiesis, praxis and theoria assumed in this tradition. By connecting Arendt’s phenomenology of the political to postphenomenology and Derrida’s notion of ‘artifactuality’, this article proposes a renewed approach for thinking through the political implications of technological change, consistent with the ‘empirical turn’ in the philosophy of technology. This approach unfolds in two moves: first, it examines how the design (poiesis) of new technological conditions makes space and time for certain kinds of events to become public; second, it takes the praxis emerging in response to these new conditions as a starting point for re-theorizing the political in specific mediated contexts. The article concludes by advocating for a ‘practical turn’ in the political theory of technology, emphasizing the importance of engaging with design and artistic practices to refine foundational concepts in political theory. PubDate: 2024-08-05
Abstract: More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services’ Terms and Conditions. Current advances give AI the ability to evolve and change its own modus operandi over time, in such a way that we cannot know, at the moment of consent, what it is that we are agreeing to in the future. Informed consent is therefore impossible regarding certain kinds of AI. Call this the problem of radical ignorance. Interestingly, radical ignorance exists in consent contexts other than AI, where it seems that individuals can provide informed consent. The article argues that radical ignorance can undermine informed consent in some contexts but not others because, under certain institutional, autonomy-protecting conditions, consent can be valid without being (perfectly) informed. By understanding these institutional conditions, we can formulate practical solutions to foster valid, albeit imperfectly informed, consent across various decision contexts and within different institutions. PubDate: 2024-08-05
Abstract: Machine Learning (ML) is revolutionizing the functioning of our societies and reshaping much of the economic tissue underlying them. The deep integration of ML into the fabric of our lives has changed the way we work and communicate and how we relate to each other. In this Topical Collection we reflect on the reach and impact of this (ML-driven) AI revolution in our society, critically analyzing some of the most important ethical, epistemological, scientific, and sociological issues underlying it. PubDate: 2024-07-31
Abstract: In “Datafeudalism: The Domination of Modern Societies by Big Tech Companies” (Saura García in Phil Technol 37(3):1–18, 2024a), I analysed the concept of datafeudalism and its implications for the proper functioning of democracy. In that article, I put forward the hypothesis that big digital companies are exercising domination over the current social context and its different functional spheres, such as politics and democracy, and critiqued the negative implications that datafeudalism has for the proper functioning of modern democratic systems. Wörsdörfer (Phil Technol 37(3):1–5, 2024) has offered a critical commentary on this work, focusing in particular on the concept of the digital panopticon and its link with datafeudalism. The following reply presents a reasoned response to these questions. PubDate: 2024-07-29
Abstract: In this article, we use the account of the “inhuman” that is developed in the work of the French philosopher Jean-François Lyotard to develop a critique of human-centred AI. We argue that Lyotard’s philosophy not only provides resources for a negative critique of human-centred AI discourse, but also contains inspiration for a more constructive account of how the discourse around human-centred AI can take a broader view of the human that includes key dimensions of Lyotard’s inhuman, namely performativity, vulnerability, and novelty. We link these dimensions to questions of epistemic injustice and social recognition. In this way, we hope to develop a proposal for what a richer and more meaningful understanding of human-centred AI might look like. PubDate: 2024-07-27
Abstract: This paper replies to “Can AI Believe?”, an insightful commentary on “Can AI Know?”. Addressing the substantive concern, this paper argues that AI systems can possess knowledge-conducive dispositional beliefs, rather than mere dispositions to believe, akin to certain human perceptual and manipulated beliefs. In response to the dialectical concern, it defends the appropriateness of the original dilemmatic reasoning. The commentators’ claim that proponents of innate knowledge deny knowledge’s need for experience is critically examined. PubDate: 2024-07-20
Abstract: With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets have enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society. In this paper, we explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization of socio-technical systems as we currently understand them. We dub this new generation of socio-technical systems synthetic to signal the increased interactions between human and artificial agents, and, in the footsteps of philosophers of information, we cash out agency in terms of ‘poiêsis’. We close the paper with a discussion of the potential policy implications of synthetic socio-technical systems. PubDate: 2024-07-16
Abstract: Despite growing interest in automated (or algorithmic) decision-making (ADM), little work has been done to conceptually clarify the term. This article aims to tackle this issue by developing a conceptualization of ADM specifically tailored to organizational contexts. It has two main goals: (1) to meaningfully demarcate ADM from similar, yet distinct algorithm-supported practices; and (2) to draw internal distinctions such that different ADM types can be meaningfully distinguished. The proposed conceptualization builds on three arguments: First, ADM primarily refers to the automation of practical decisions (decisions to φ) as opposed to cognitive decisions (decisions that p). Second, rather than referring to algorithms as literally making decisions, ADM refers to the use of algorithms to solve decision problems at an organizational level. Third, since algorithmic tools by nature primarily settle cognitive decision problems, their classification as ADM depends on whether and to what extent an algorithmically generated output p has an action-triggering effect, i.e., translates into a consequential action φ. The examination of precisely this p–φ relationship allows us to pinpoint different ADM types (suggesting, offloading, superseding). Taking these three arguments into account, we arrive at the following definition: ADM refers to the practice of using algorithms to solve decision problems, where these algorithms can play a suggesting, offloading, or superseding role relative to humans, and decisions are defined as action-triggering choices. PubDate: 2024-07-16
Abstract: We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put into perspective – many unreliable technologies are unlikely to be used transparently precisely because they are unreliable. Second, even an agent who transparently employs a resource may reflect on its reliability. Finally, agents can rely on a process transparently and be yanked out of their transparent use when it turns problematic. When an agent is responsive to the reliability of their process in this way, they have epistemically integrated it, and the beliefs they form with it are formed responsibly. This prevents the agent from automatically incorporating problematic beliefs. Responsible (and transparent) use of AI resources – and consequently responsible AI extension – is hence possible. We end the paper with several design and policy recommendations that encourage epistemic integration of AI-involving belief-forming processes. Keywords: phenomenal transparency, artificial intelligence, cognitive extension, adversarial attack, cognitive integration. PubDate: 2024-07-15
Abstract: This article critically examines the domination exerted by big digital companies on the current social, economic, and political context of modern societies, with a particular focus on the implications for the proper functioning of democracy. The objective of this article is to introduce and develop the concept of datafeudalism, to expose its consequences for the proper functioning of modern societies and democracy, and to propose courses of action to reverse this situation. To achieve this purpose, firstly, the evolution from surveillance capitalism to datafeudalism will be discussed. Secondly, the structures and operating logic of datafeudalism will be analyzed. Thirdly, the harmful impacts of datafeudalism on the proper functioning of the democratic systems of the European Union will be examined. Finally, an attempt will be made to outline courses of action that will make it possible to reverse the situation of economic, social and political tyranny exercised by big digital companies through datafeudalism. PubDate: 2024-07-15
Abstract: One of the fundamental research goals for explanation-based Natural Language Inference (NLI) is to build models that can reason in complex domains through the generation of natural language explanations. However, the methodologies to design and evaluate explanation-based inference models are still poorly informed by theoretical accounts of the nature of explanation. As an attempt to provide an epistemologically grounded characterisation for NLI, this paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation. Specifically, the paper combines a detailed survey of the modern accounts of scientific explanation in the Philosophy of Science with a systematic analysis of corpora of natural language explanations, clarifying the nature and function of explanatory arguments from both a top-down (categorical) and a bottom-up (corpus-based) perspective. Through a mixture of quantitative and qualitative methodologies, the study supports the following main conclusions: (1) Explanations cannot be entirely characterised in terms of inductive or deductive arguments, as their main function is to perform unification; (2) An explanation typically cites causes and mechanisms that are responsible for the occurrence of the event to be explained; (3) While natural language explanations possess an intrinsic causal-mechanistic nature, they are not limited to causes and mechanisms, also accounting for pragmatic elements such as definitions, properties and taxonomic relations; (4) Patterns of unification naturally emerge in corpora of explanations even if not intentionally modelled; (5) Unification is realised through a process of abstraction, whose function is to provide the inference mechanism for subsuming the event to be explained under recurring patterns and high-level regularities. The paper contributes to addressing a fundamental gap in classical theoretical accounts of the nature of scientific explanations and their materialisation as linguistic artefacts. This characterisation can support a more principled design and evaluation of explanation-based AI systems that can better interpret, process, and generate natural language explanations. PubDate: 2024-07-15
Abstract: Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls’s theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature and its falsity explains many complications encountered. PubDate: 2024-07-13
Abstract: This article criticises David Chalmers’ ‘Reality+’ by interrogating its distinction between virtual reality (VR) and 2D, non-VR video games, a distinction drawn on the grounds that VR is immersive and these types of video games are not, because immersion is taken to be a distinct characteristic of 3D perceptually represented VR. Building on the Balcerak Jacksons’ account of ‘representational immersion’, which they acknowledge has ‘perceptual’ and ‘non-perceptual’ elements, I develop an account of ‘non-perceptual representational immersion’ and use it to critique Chalmers’ treatment of VR, immersion, and video games, indirectly problematizing his central ontological claim that ‘virtual reality is genuine reality’ and the reflections he makes about future society which rest upon it. I argue that video games are rich in non-perceptual representational immersion, which includes the representation of non-perceptual information about the structure and behaviour of a virtual world and its systems, the non-perceptual cognitive states that are generated in the player by the virtual world, and the immersion which comes from the constitutively cognitive nature of the player’s engagement with the virtual world, a cognitive engagement construed in an enactive sense. These are just as crucial to the phenomenon of digital immersion as perception. Immersion in general is a much more complicated phenomenon than Chalmers allows for, and it is better conceived as taking place on a spectrum which includes both 3D and 2D experiences of virtual worlds, not as hinging on 3D perceptual representation. I use ‘immersive simulation’ games as a case study. PubDate: 2024-07-08
Abstract: A promising paradigm in the discourse on Technology Ethics and the Ethics of Artificial Intelligence has manifested in scholarly literature and practical application, entitled “Ethics by Design for A.I.”. The significance of this approach lies in its divergence from conventional assessments focused on the utilization or subsequent societal impact of technological artifacts. Instead, it directs attention to the technological design processes of digital artifacts. While the advocates of this theory acknowledge certain of its limitations, the complexity of the framework requires a nuanced examination of additional philosophical, political and economic challenges that remain unresolved. This paper endeavors to clarify the crucial dimensions and conceptual foundations of Ethics by Design for A.I. and provides a comprehensive exploration of the contemporary challenges this paradigm encounters. The principal aim is to delineate the risks and challenges inherent in this approach and to evaluate whether it can resolve the challenges posed, or whether it must be surpassed by a different kind of Ethics by Design for A.I., one founded on a whole new set of presuppositions and socio-political insights. PubDate: 2024-07-04
Abstract: In this reply, I respond to Joshua C. Gellers’ commentary on my article “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics” (Puzio, 2024a), in which I present a deeply relational, “eco-relational approach”. This approach asserts that it is necessary to consider the relationality with non-human entities such as animals and technology on a deeper level than has been done in robot ethics so far. This disrupts traditional ethical concepts. In his commentary “Not Ecological Enough: A Commentary on an Eco-Relational Approach in Robot Ethics” (2024), Gellers raises criticisms of my method and of the consequences of the eco-relational approach. In this reply, I address these criticisms and focus especially on the ecological responsibility of the eco-relational approach. PubDate: 2024-07-02