Abstract: Montefiore and Formosa (Ethics Inf Technol 24:31, 2022) provide a useful way of narrowing the Gamer’s Dilemma to cases where virtual murder seems morally permissible, but not virtual child molestation. They then resist the dilemma by theorising that the intuitions supporting it are not moral. In this paper, I consider this theory to determine whether the dilemma has been successfully resisted. I offer reason to think that, when considering certain variations of the dilemma, Montefiore and Formosa’s theory may not be the most likely theory available to us. PubDate: 2023-05-24
Abstract: Love, sex, and physical intimacy are some of the most desired goods in life, and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people’s attention, almost all of these apps now offer the option of paying a fee to boost one’s visibility for a certain amount of time, which may range from 30 minutes to a few hours. In this article, I argue that there are strong moral grounds, and, in countries with laws against unconscionable contracts, legal ones, for thinking that the sale of such visibility boosts should be regulated, if not banned altogether. To do so, I raise two objections against their unfettered sale, namely that it exploits the impaired autonomy of certain users and that it creates socio-economic injustices. PubDate: 2023-05-18
Abstract: Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to make norm-compliant robots. PubDate: 2023-04-26
Abstract: Scholars, policymakers and organizations in the EU, especially at the level of the European Commission, have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). However, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenon and (2) the comparison to similar episodes of “ethification” in other fields, to highlight common (unresolved) challenges. Contrary to some mainstream narratives, which stress how the increasing attention to ethical aspects of AI is due to the fast pace and increasing risks of technological developments, Science and Technology Studies (STS)-informed perspectives highlight that the rise of institutionalized assessment methods indicates a need for governments to gain more control of scientific research and to bring EU institutions closer to the public on controversies related to emerging technologies. This article analyzes how different approaches of the recent past (i.e. bioethics, technology assessment (TA) and ethical, legal and social (ELS) research, Responsible Research and Innovation (RRI)) followed one another, often “in the name of ethics”, to address previous criticisms and/or to legitimate certain scientific and technological research programs. The focus is on how a brief history of the institutionalization of these approaches can provide insights into present challenges to the ethics of AI related to methodological issues, mobilization of expertise and public participation. PubDate: 2023-04-24
Abstract: Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI for warfare. However, varying ethical and procedural approaches to technological development, research emphasis on offensive uses of AI, and lack of appropriate venues for multistakeholder dialogue have led to differing operationalization of responsible AI principles and practices among civilian and defense entities. We argue that the disconnect between civilian and defense responsible development and use practices leads to underutilization of responsible AI research and hinders the implementation of responsible AI principles in both communities. We propose a research roadmap and recommendations for dialogue to increase exchange of responsible AI development and use practices for AI systems between civilian and defense communities. We argue that generating more opportunities for exchange will stimulate global progress in the implementation of responsible AI principles. PubDate: 2023-04-15
Abstract: This article presents a systematic literature review documenting how technical investigations have been adapted in value sensitive design (VSD) studies from 1996 to 2023. The review covers both theoretical and applied studies that either discuss or conduct technical investigations in VSD. It contributes to the VSD community’s efforts to further refine the methodological framework for carrying out technical investigations in VSD. PubDate: 2023-04-13
Abstract: Military technology is developing at a rapid pace, and we are seeing a growing number of weapons with increasing levels of autonomy being developed and deployed. This raises various legal, ethical, and security concerns. The absence of clear international rules setting limits on and governing the use of autonomous weapons is extremely concerning. There is an urgent need for the international community to work together towards a treaty, not only to safeguard ethical and legal norms but also for our shared security. This article explains why a treaty on autonomous weapons is needed and achievable. It outlines what such a treaty could consist of in order to establish an international norm and set rules and limits on autonomy in weapon systems. PubDate: 2023-03-22
Abstract: In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as opposed to other structural accounts) has at least two important upshots. First, I argue that the existence of bias that is intrinsic to certain institutions (whether algorithmic or not) suggests that at least in some cases, the algorithms now substituting as pieces of institutional norms or rules are not “fixable” in the relevant sense, because the institutions they help make up are not fixable. Second, I argue that in other cases, changing the algorithms being used within our institutions (rather than getting rid of them entirely) is essential to changing the background structural conditions of our society. PubDate: 2023-03-21
Abstract: Informed consent bears significant relevance as a legal basis for the processing of personal data and health data under current privacy, data protection and confidentiality legislation. The consent requirements find their basis in an ideal of personal autonomy. Yet, with the recent advent of the global pandemic and the increased use of eHealth applications in its wake, a more differentiated perspective on this normative approach might soon gain momentum. This paper discusses the compatibility of a moral duty to share data for the sake of improving healthcare, research, and public health with autonomy in the field of data protection, privacy and medical confidentiality. It explores several ethical-theoretical justifications for a duty of data sharing, and then reflects on how existing privacy, data protection, and confidentiality legislation could obstruct such a duty. Consent, as currently defined in the General Data Protection Regulation – a key legislative framework providing rules on the processing of personal data and data concerning health – and in the recommendation of the Council of Europe on the protection of health-related data – explored here as soft law – turns out not to be indispensable from various ethical perspectives, while the consent requirement in the General Data Protection Regulation and the recommendation nonetheless curtails the full potential of a duty to share medical data. Other legal grounds, as possible alternatives to consent, also seem to constitute an impediment. PubDate: 2023-03-15 DOI: 10.1007/s10676-023-09697-8
Abstract: Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the potential for learned algorithms to become biased against certain groups. More generally, insofar as the decisions of ML models impact society, both virtually (e.g., denying a loan) and physically (e.g., driving into a pedestrian), notions of accountability, blame and responsibility need to be carefully considered. In this article, we advocate for a two-pronged approach to ethical decision-making enabled by rich models of autonomous agency: on the one hand, we need to draw on philosophical notions such as beliefs, causes, effects and intentions, and look to formalise them, as attempted by the knowledge representation community, but on the other, from a computational perspective, such theories need to also address the problems of tractable reasoning and (probabilistic) knowledge acquisition. As a concrete instance of this tradeoff, we report on a few preliminary results that apply (propositional) tractable probabilistic models to problems in fair ML and automated reasoning about moral principles. Such models are compilation targets for certain types of knowledge representation languages, and can effectively reason in service of such computational tasks. They can also be learned from data. Concretely, current evidence suggests that they are attractive structures for jointly addressing three fundamental challenges: reasoning about possible worlds + tractable computation + knowledge acquisition. Thus, these seem like a good starting point for modelling reasoning robots as part of the larger ecosystem where accountability and responsibility are understood more broadly. PubDate: 2023-03-11 DOI: 10.1007/s10676-023-09692-z
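Illustrative sketch (not taken from the paper): the toy Python example below conveys the flavour of querying a (propositional) probabilistic model by brute-force weighted model counting over a three-proposition loan-granting theory. All names (weights, theory, prob) and all numbers are hypothetical, and a genuinely tractable system would reason over a compiled circuit rather than by enumerating worlds.

```python
from itertools import product

# Independent prior weights for the "chance" propositions being True.
# L ("loan granted") carries no prior: its value is fixed by the decision rule.
weights = {"Q": 0.6, "G": 0.3}  # Q = qualified applicant, G = protected-group member

def world_weight(world):
    """Weight of a full truth assignment under independent priors."""
    w = 1.0
    for var, prior in weights.items():
        w *= prior if world[var] else (1.0 - prior)
    return w

def theory(world):
    """Toy decision rule: the loan is granted iff the applicant is qualified."""
    return world["L"] == world["Q"]

def prob(event):
    """P(event | theory) by enumerating worlds (feasible only for tiny models)."""
    num = den = 0.0
    for values in product([True, False], repeat=3):
        world = dict(zip(["L", "Q", "G"], values))
        if theory(world):
            w = world_weight(world)
            den += w
            if event(world):
                num += w
    return num / den

# Fairness-flavoured query: loan rates conditioned on group membership.
p_in_group = prob(lambda w: w["L"] and w["G"]) / prob(lambda w: w["G"])
p_out_group = prob(lambda w: w["L"] and not w["G"]) / prob(lambda w: not w["G"])
print(p_in_group, p_out_group)  # equal here, since Q is independent of G in this toy model
```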
Abstract: Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to suggest that responsibility gaps should sometimes be welcomed, our argument is novel. Others have argued that responsibility gaps should sometimes be welcomed because they can reduce or eliminate the psychological burdens caused by tragic moral choice-situations. By contrast, our argument explains why responsibility gaps should sometimes be welcomed even in the absence of tragic moral choice-situations, and even in the absence of psychological burdens. PubDate: 2023-02-24 DOI: 10.1007/s10676-023-09699-6
Abstract: The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most likely be deployed. This will allow us to move beyond the solely individualist understandings of responsibility at work in most treatments of these cases, toward one that includes collective responsibility. This allows us to attribute collective responsibility to the collectives of which the AMS form a part, and to account for the distribution of burdens that follows from this attribution. I argue that this expansion of our responsibility practices will close at least some otherwise intractable techno-responsibility gaps. PubDate: 2023-02-23 DOI: 10.1007/s10676-023-09696-9
Abstract: The article focuses on the inconsistency between the European Commission’s position of excluding military AI from its emerging AI policy and, at the same time, EU policy initiatives aimed at supporting the military and defence elements of AI at the EU level. This raises the question of what the debate on military AI suggests about the EU’s actorness, discussed in light of the “Europe as a power” debate, with a particular focus on Normative Power Europe, Market Power Europe, and Military Power Europe. By employing discourse analysis, the article examines the EU’s strategic discourse on AI, consisting of selected AI-related policy documents from different EU institutions. As a result, the article proposes a definition of Military Power Europe based on four categories drawn from the “Europe as a power” debate: ways of action, self-definition, preferred international engagement, and the role of the military. It argues that alongside normative proposals for military AI governance, there are evident desires for militarization in the context of AI development, and a considerable role for the military in the future directions of the EU’s digital and security policies. Despite the inconsistency among EU institutions, military AI is actively discussed within the selected discourse and is thus part of the EU’s emerging AI policy. PubDate: 2023-02-17 DOI: 10.1007/s10676-023-09684-z
Abstract: A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon systems”, and so the objection is too broad. In this article I present a taxonomic approach to the objection, examining a number of systems that would count as AWS under the prevalent definitions provided by the United States Department of Defense and the International Committee of the Red Cross, and I show that for virtually all such systems there is a clear locus of responsibility which presents itself as soon as one focuses on specific systems, rather than general notions of AWS. In developing these points, I also suggest a method for dealing with near-future types of AWS which may be thought to create situations where responsibility gaps can still arise. The main purpose of the arguments is, however, not to show that responsibility gaps do not exist or can be closed where they do exist. Rather, it is to highlight that any arguments surrounding AWS must be made with reference to specific weapon platforms imbued with specific abilities, subject to specific limitations, and deployed to specific times and places for specific purposes. More succinctly, the arguments show that we cannot and should not aim to treat AWS as if all of these shared all morally relevant features, but instead on a case-by-case basis. Thus, we must contend with the realities of weapons development and deployment, and tailor our arguments and conclusions to those realities, and with an eye to what facts obtain for particular systems fulfilling particular combat roles. PubDate: 2023-02-16 DOI: 10.1007/s10676-023-09690-1
Abstract: In this paper we introduce a computational control framework that can keep AI-driven military autonomous devices operating within the boundaries set by applicable rules of International Humanitarian Law (IHL) related to targeting. We discuss the necessary legal tests and variables, and introduce the structure of a hypothetical IHL-compliant targeting system. PubDate: 2023-02-15 DOI: 10.1007/s10676-023-09682-1
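Illustrative sketch (not the authors’ framework): the outline below shows, in Python, the kind of layered targeting-law checks such a control framework would need to encode, namely distinction, precautions, and proportionality. The names (Target, engagement_permitted) and the numeric threshold are hypothetical placeholders; in particular, the ratio stands in for a human-set proportionality judgment and is not a claim that IHL reduces to a number.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_military_objective: bool      # distinction: only military objectives may be attacked
    expected_civilian_harm: float    # collateral damage estimate (arbitrary units)
    anticipated_military_advantage: float
    feasible_precautions_taken: bool # e.g. warnings, timing, choice of weapon

def engagement_permitted(t: Target, excessiveness_ratio: float = 1.0) -> bool:
    """Authorise engagement only if every legal test in the pipeline passes."""
    if not t.is_military_objective:          # distinction test
        return False
    if not t.feasible_precautions_taken:     # precautions-in-attack test
        return False
    # proportionality test: collateral harm must not be excessive relative to
    # the concrete and direct military advantage anticipated
    return t.expected_civilian_harm <= excessiveness_ratio * t.anticipated_military_advantage

print(engagement_permitted(Target(True, 2.0, 5.0, True)))   # True: all tests pass
print(engagement_permitted(Target(True, 9.0, 5.0, True)))   # False: fails proportionality
```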
Abstract: The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii) take ethical considerations and enhanced performance in military operations into account. A characterization of the debate on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper. We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision support, taking each quadrant of this characterization into account. PubDate: 2023-02-14 DOI: 10.1007/s10676-023-09683-0
Abstract: Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance (ISR) operations, and modern militaries recognize the strategic value of reducing civilian harm. Grounded in these two assertions, we focus on the transformative potential that AI ISR systems have for improving the respect for and protection of humanitarian relief operations. Specifically, we propose that establishing an interface between humanitarian organizations and military AI ISR systems can improve the current state of ad hoc humanitarian notification systems, which are notoriously unreliable and ineffective both for parties to conflict and for humanitarian organizations. We argue that such an interface can improve military awareness and understanding while also ensuring that states better satisfy their international humanitarian law obligations to respect and protect humanitarian relief personnel. PubDate: 2023-02-13 DOI: 10.1007/s10676-023-09681-2
Abstract: The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. Some possible approaches to Proportionality compliance are then presented, such as restricting AWS operations to environments lacking civilian presence, using AWS in targeted strikes in which proportionality judgments are pre-made by human commanders, and a ‘price tag’ approach of pre-assigning acceptable collateral damage values to military hardware in conventional attritional warfare. The article argues that application of these three compliance methods would result in AWS achieving acceptable Proportionality compliance levels in many combat environments and scenarios, allowing AWS to perform most key tasks in conventional warfare. PubDate: 2023-02-13 DOI: 10.1007/s10676-023-09689-8
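Illustrative sketch (not from the article): in its simplest form, the ‘price tag’ approach mentioned above amounts to comparing a collateral damage estimate against a value pre-assigned by human commanders to each class of military hardware. The table PRICE_TAGS, the function within_price_tag, and all numbers below are hypothetical.

```python
# Acceptable collateral-damage ceilings, pre-assigned by human commanders
# to classes of military hardware (illustrative values, arbitrary units).
PRICE_TAGS = {
    "main_battle_tank": 3.0,
    "self_propelled_artillery": 2.0,
    "supply_truck": 0.5,
}

def within_price_tag(target_class: str, estimated_collateral: float) -> bool:
    """Engage only if the collateral damage estimate does not exceed the
    pre-assigned ceiling for this target class; otherwise defer to a human."""
    ceiling = PRICE_TAGS.get(target_class)
    if ceiling is None:          # unknown target class: no pre-made judgment applies
        return False
    return estimated_collateral <= ceiling

print(within_price_tag("main_battle_tank", 1.5))   # True: below the pre-assigned ceiling
print(within_price_tag("supply_truck", 1.5))       # False: exceeds the ceiling
```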