Abstract: This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit-seeking. This political reform democratically contains corporate power in two ways: first, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests; second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies. PubDate: 2025-02-20 DOI: 10.1007/s13347-025-00855-y
Abstract: Creating a legacy is often seen as a way to circumvent mortality. At the very least, it is a way to ensure that someone’s ideas live on and their influence on others extends beyond their own lifetime. Common examples of such legacies are cultural products, such as books, music, or art, that one creates and leaves behind. In light of rapid advances in artificial intelligence research, it is conceivable that it will soon become possible – and, to some extent, it already is – to create a new type of legacy in the form of a virtual entity, such as a chatbot or an avatar, that acts as a digital replica of someone. In this paper, we investigate in what sense, if any, one could say that the original person continues to exist in their digital replica. Taking Derek Parfit’s work on the nature of personal identity as our inspiration and starting point, we suggest under what circumstances one could (and should) perceive such replicas as genuine extensions of one's personhood. PubDate: 2025-02-17 DOI: 10.1007/s13347-025-00854-z
Abstract: This paper asks three fundamental questions on the nature of trust: What is trust? What is trustworthiness? When is trust warranted? These discussions are then applied to the context of Human-Robot Interaction (HRI), asking whether we can trust social robots, whether they can be trustworthy, and, lastly, whether we should trust them. After reviewing the literature on the nature of trust and reliance, on the one hand, and on trust in social robots, considering both properties-based and non-properties-based views, on the other, this paper argues that, given the current state of technology, we can find ourselves in a paradoxical scenario in which there is trust without trustworthiness, i.e., human users who interact with social robots can develop something resembling interpersonal trust towards an artificial entity that cannot be trustworthy. This occurs because we perceive and treat social robots as trustworthy entities, while they seem to lack certain properties that would make them capable of being trustworthy (as well as untrustworthy). Understanding our psychology in HRI and trying to discern what social robots are (and are not) is crucial when confronted with ethical issues. Some of the ethical issues that arise in the context of trust without trustworthiness will be considered to address the debate about whether we should trust social robots. This paper concludes that we should, at least for now, not trust social robots, given the potential harms that can be done and the responsibility gaps that might appear when these harms are to be repaired. PubDate: 2025-02-15 DOI: 10.1007/s13347-025-00850-3
Abstract: This study adopts a hermeneutic, practice-based approach to Responsible Innovation to explore how a reflective and proactive attitude can be implemented in a start-up context. We hypothesised that a moral hermeneutics framework - rooted in post-phenomenology and theories on technology-induced value change - could provide a way to understand how practitioners in a start-up make sense of the different kinds of responsibilities in their work, balancing professional demands and standards of excellence with broader ecological and social commitments. Using in-depth interviews with the team members of a start-up R&D laboratory, we explored how they interpret their responsibilities-as-(moral)-obligations. Our findings suggest that the syntactical ways team members make sense of the relationship between these responsibilities can be useful for understanding how reflexivity can surface in this environment. We conclude by proposing that less conciliatory interpretations of conflicting responsibilities may lead to a collective search for practical solutions addressing these tensions, as long as it is embedded in a collective dialogue involving the other members’ moral perspectives and technical expertise. PubDate: 2025-02-13 DOI: 10.1007/s13347-025-00848-x
Abstract: Digital platforms allow wildlife enthusiasts to share information with larger audiences than ever before. However, by heightening awareness and human interaction, they can threaten the well-being of non-human animals (henceforth “animals”). One example of this is increased stress to Snowy Owls (Bubo scandiacus) from close observation by large numbers of people made possible by accurate, timely, and widely-distributed location information. In this paper, we examine the ethics of animal privacy, wildlife observation, and information sharing on digital platforms, using Snowy Owls as a case study. We begin by reviewing Snowy Owl migration and population statistics, trends in information sharing about and observation of Snowy Owls, and resulting harms to Snowy Owls. We then argue that animals have morally weighty privacy interests that ground human obligations to protect their privacy. These obligations are strengthened by additional concerns about species protection and ecological conservation. Our arguments support an array of privacy protections, including obligations on the part of wildlife observers, researchers, and other groups. However, because the catalyst for this paper is novel information technologies, we make a focused set of recommendations applicable to digital platforms. Specifically, we argue that platforms ought to restrict public information sharing when disclosure is likely to endanger individual animals, species, or ecosystems. PubDate: 2025-02-12 DOI: 10.1007/s13347-025-00841-4
Abstract: This article defends the claim that diversions, which are actions that cause distraction, are a unique way to modify someone’s behavior and that they are morally salient. While this article focuses on understanding the moral features of attention and diversion, it is crucial to keep in mind that the moral evaluation of these concepts is most pressing within a technological society deeply intertwined with an attention economy. We are inundated with distraction technologies, which are technologies whose function partly or wholly depends on capturing the attention of their targets. Distraction technologies are widely used to capture the attention of billions of people. Once we come to treat diversions as unique moral actions, it will be clear that the most serious violations of our right to attention can be committed by those who control these technologies. PubDate: 2025-02-12 DOI: 10.1007/s13347-025-00849-w
Abstract: Computational Reliabilism (CR) has emerged as a promising framework for assessing the trustworthiness of AI systems, particularly in domains where complete transparency is infeasible. However, the rise of sophisticated adversarial attacks poses a significant challenge to CR’s key reliability indicators. This paper critically examines the robustness of CR in the face of evolving adversarial threats, revealing the limitations of verification and validation methods, robustness analysis, implementation history, and expert knowledge when confronted with malicious actors. Our analysis suggests that CR, in its current form, is inadequate to address the dynamic nature of adversarial attacks. We argue that while CR’s core principles remain valuable, the framework must be extended to incorporate adversarial resilience, adaptive reliability criteria, and context-specific reliability thresholds. By embracing these modifications, CR can evolve to provide a more comprehensive and resilient approach to assessing AI reliability in an increasingly adversarial landscape. PubDate: 2025-02-12 DOI: 10.1007/s13347-025-00851-2
Abstract: This essay provides a framework for characterizing changes to social phenomena that accompany the digitalization of society. It begins by discharging two preliminary tasks: presenting the social ontology used in the analysis—a version of practice theory—and surveying extant general accounts of sociodigital phenomena to give readers a sense of the accounts on offer and to indicate through contrast how broad my account is. The starting point for the essay’s own account is the great number and variety of sociodigital phenomena and changes today. The essay proposes to base a general account of this profusion on the notion of form, where the form of something is the features it exhibits in key dimensions of variation that characterize a population to which the something belongs. It is claimed that the four central dimensions of variation for social, and thus sociodigital, phenomena are (1) constitution, (2) interactions and associations, (3) spaces and times, and (4) meaning and significance. Following discussion of this claim, sketches are offered of the forms of four pre-digital social phenomena and four sociodigital ones. A final section offers general observations about how sociodigital phenomena are different in these four dimensions from their pre-digital counterparts. PubDate: 2025-02-11 DOI: 10.1007/s13347-025-00853-0
Abstract: This paper explores the limitations of artificial intelligence (AI) in fulfilling the obligations inherent in close personal relationships, particularly in the context of loneliness. While AI technologies may offer some of the goods that we associate with close personal relationships, they lack the capacity for genuine commitment and individualized care that characterize human interactions. The finitude of human existence—our cognitive, emotional, and temporal limitations—and our capacity to make judgments concerning distinct kinds of value imbue human relationships with significance that AI cannot replicate. PubDate: 2025-02-07 DOI: 10.1007/s13347-025-00845-0
Abstract: In a recent article in this journal, Baron (2025) argues that we can appropriately trust unexplainable artificial intelligence (AI) systems, so explainability is not necessary for AI’s trustworthiness. In this commentary, I argue that Baron is wrong. I first offer a positive argument for the claim that explainability is necessary for trustworthiness. Drawing on this argument, I then show that Baron’s argument for thinking otherwise fails. PubDate: 2025-02-05 DOI: 10.1007/s13347-025-00847-y
Abstract: Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and trustworthiness, focusing on interpersonal accounts of trust. This allows us to explain why trust, in its most general terms, is to be understood as reliance plus some “extra factor”. We then turn to the first part of the definition provided, i.e., reliance, and analyze two opposing approaches to establishing AI systems’ reliability. On the one hand, we consider transparency and, on the other, computational reliabilism. Subsequently, we focus on debates revolving around the “extra factor”. To this end, we consider viewpoints that most actively resist the possibility and desirability of trusting AI systems before turning to the analysis of the most prominent advocates of it. Finally, we take up the main conclusions of the previous sections and briefly point to issues that remain open and need further attention. PubDate: 2025-02-04 DOI: 10.1007/s13347-025-00843-2
Abstract: In anticipation of the market introduction of highly and fully automated vehicles, regulations for their behavior in public road traffic are emerging in various countries and regions. Yet, as we show using the example of EU and German regulations, these rules are both incomplete and exceptionally vague. In this paper, we introduce three traffic scenarios highlighting conflicting ethical, legal, and utility-related claims, and perform a legal analysis with regard to the expected behavior of AVs in these scenarios. We show that the existing regulatory framework disregards the realities of algorithmic decision-making in automated vehicles, such as the incomplete and imprecise perception of their environment and the probabilistic nature of their predictions. Importantly, the current regulations are written in abstract language addressing human interpreters rather than precise logical-numerical computer code. We argue that the required interpretation and translation of the abstract legal language into the logical-numerical domain is so ambiguous that the regulations as they stand fail to guide or limit automated vehicle behavior in any meaningful way. This comes with significant ethical implications, as the interpretation and translation is unavoidable and, if not provided by regulatory bodies, will have to be performed by manufacturers. We argue that ethical decisions with significant impact on public safety must not be delegated to private companies, and thus, regulatory frameworks need significant improvements. PubDate: 2025-02-03 DOI: 10.1007/s13347-025-00846-z
Abstract: Current Generative Artificial Intelligence models have come remarkably close to the human level of linguistic and artistic excellence, defying our conception of artworks as uniquely human products. This raises an authorship problem: can artificial agents be regarded as genuine authors of their products? This paper provides a definition of institutional authorship to evaluate this possibility, using John Searle’s Speech Act Theory and Theory of Institutions. To apply the definition, we focus on artistic cases, assuming the institutional theory of art. According to the institutional authorship definition proposed, authorship is the product of an agent’s declarative acts in the context of relevant institutions granting the agent deontic powers to perform such acts. These acts depend on sincerity conditions encompassing intentional states, which bind the product to an agent. We conclude that artificial agents cannot satisfy the institutional definition of authorship because they cannot be institutional agents, and therefore they cannot be authors. PubDate: 2025-01-29 DOI: 10.1007/s13347-025-00839-y
Abstract: This paper offers a political analysis of the platform and sharing economy—an economic model in which digital platforms facilitate social and economic interactions. Its two central models, mainstream and cooperative platforms, offer similar applications and services (e.g., home-sharing, food delivery), but fundamentally differ in their ownership and governance structures, economic models, and technical designs. Building on literature from the politics of technology (PoT), the paper develops an approach for the political analysis of platform technologies, combining central components from the works of Winner, Feenberg, and Pfaffenberger. This approach is then applied to analyze the platform and sharing economy, highlighting the political significance of platform technologies. The analysis reveals three key insights. First, when incorporated into particular social arrangements, digital platforms become means for shaping social realities rather than mere tools for specific uses. Second, mainstream platforms perpetuate capitalist conditions in the digital sphere and therefore necessitate platform capitalism to function, whereas cooperative platforms resist and undermine it. Third, the dynamics between the platform models embody a struggle over the question of the good life in the digital economy. Additionally, the paper uncovers a philosophical weakness in Winner’s definition of “inherently political technologies” that warrants further attention in PoT literature. PubDate: 2025-01-28 DOI: 10.1007/s13347-025-00840-5
Abstract: Medical artificial intelligence (AI) systems are value-laden technologies that can simultaneously encourage and discourage conflicting values that may all be relevant for the pursuit of justice. I argue that the predominant theory of healthcare justice, the Rawls-inspired approach of Norman Daniels, neither adequately acknowledges such conflicts nor explains if and how they can be resolved. By juxtaposing Daniels’s theory of healthcare justice with Axel Honneth’s and Nancy Fraser’s respective theories of justice, I draw attention to one such conflict. Medical AI may improve the distribution of opportunity qua health while simultaneously mis-recognizing patients and thereby reducing their self-respect. I argue that justly resolving this conflict will at times require greater inclusion of those mis-recognized in deliberation about medical AI, and consider what such inclusion may entail. PubDate: 2025-01-27 DOI: 10.1007/s13347-025-00842-3
Abstract: In view of the dramatic advancements in the development of artificial intelligence technology in recent years, it has become a commonplace to demand that AI systems be trustworthy. This view presupposes that it is possible to trust AI technology in the first place. The aim of this paper is to challenge this view. In order to do that, it is argued that the philosophy of trust really revolves around the problem of how to square the epistemic and the normative dimensions of trust. Given this double nature of trust, it is possible to extract a threefold challenge to the defenders of the possibility of AI trust without presupposing any particular trust theory. They have to show (1) how trust in AI systems is more than mere reliance; (2) how AI systems can become objects of normative expectations; and (3) how the resulting attitude gives human agents reassurance in their interactions with AI systems. In order to demonstrate how difficult this task is, the threefold challenge is then applied to two recent accounts that defend the possibility of trust in AI systems. By way of conclusion it is suggested that instead of trusting AI systems, we should strive to make them reliable. PubDate: 2025-01-24 DOI: 10.1007/s13347-024-00820-1
Abstract: Aylsworth and Castro’s recent paper, Should I Use ChatGPT to Write My Papers?, argues that students in the humanities have a moral obligation to refrain from using AI tools such as ChatGPT for writing assignments. Their claim is that writing is an autonomy-fostering activity, essential for intellectual growth and critical reflection, and that every agent has the moral duty to respect their own autonomy. While the authors raise significant ethical concerns, the paper does not identify which specific features of writing promote autonomy. Writing is a multifaceted process that involves distinct activities, not all of which contribute equally to fostering autonomy. Furthermore, writing can be a shared endeavour, for example when co-authoring a paper with a colleague. This prompts the question of exactly which features or activities support autonomy. The use of AI in education is a double-edged sword: while some scholars emphasize the limitations of generated texts, others have reported positive outcomes from using AI tools in educational settings. This invites a deeper examination of how technology can be designed or used to support, rather than undermine, students’ autonomy. PubDate: 2025-01-20 DOI: 10.1007/s13347-024-00831-y
Abstract: Digital duplicates reduce the scarcity of individuals and thus may impact their instrumental and intrinsic value. I here expand upon this idea by introducing the notion of collective scarcity, which pertains to the limitations faced by social groups in maintaining their size, cohesion and function. PubDate: 2025-01-16 DOI: 10.1007/s13347-024-00832-x