Abstract: The rapid adoption of digital technologies has revolutionized business operations and introduced emerging concepts such as Digital Twin (DT) technology, which has the potential to predict system responses before they occur, making it an attractive option for smart and sustainable tourism. However, implementing DT software systems poses significant challenges, including compliance with regulations, effective communication among stakeholders, and concerns surrounding security, privacy, and trust in the use of big data. To address these challenges, this paper proposes a documentation framework for architectural decisions (DFAD) that applies the concept of big data governance to the digital system. The framework aims to ensure accountability, transparency, and trustworthiness while adhering to rules and regulations. To demonstrate its applicability, a case study and three case scenarios on the potential use of Mobile Positioning Data (MPD) in Indonesia for DT technology in smart and sustainable tourism were examined. The paper highlights the benefits of DFAD in shaping stakeholder communication and human–machine interactions while leveraging the potential of MPD, which Statistics Indonesia has used to measure tourism statistics since 2016. Not only does the documentation framework promote compliance with regulations, but it also facilitates effective communication among stakeholders and enhances trust and transparency in the use of big data in DT technology for smart and sustainable tourism. This paper emphasizes the importance of effective big data governance and its potential to promote sustainable tourism practices. The multidisciplinary approach, spanning political science, software engineering, tourism, and official statistics, offers a contribution to both academic research and decision-making processes. PubDate: 2023-11-16
Abstract: The public health measures implemented in response to the COVID-19 pandemic have resulted in a substantially increased shared reliance on private infrastructure and digital services in areas such as healthcare, education, retail, and the workplace. This development has (i) granted a number of private actors significant (informational) power, and (ii) given rise to a range of digital surveillance practices incidental to the pandemic itself. In this paper, we reflect on these secondary consequences of the pandemic and observe that, even though collateral data disclosure and additional activity monitoring appear to have been generally socially accepted as inevitable consequences of the pandemic, part and parcel of a larger conglomeration of emergency compromises, these increased surveillance practices were not directly justified by appeals to solidarity and public health in the same way that the instigating public health measures were. Based on this observation, and given the increased reliance on private actors for maintaining the digital space, we argue that governments have a duty to (i) seek and ensure justifications for collateral data disclosure and activity monitoring by private actors in the context of (future) public health emergencies like the COVID-19 pandemic, and (ii) regulate, and provide accountability mechanisms and oversight for, these private surveillance practices on a par with governmental essential services that engage in surveillance activities. PubDate: 2023-11-16
Abstract: Neurodevices that collect neural (or brain activity) data have been characterised as having the ability to register the inner workings of human mentality. There are concerns that the proliferation of such devices in the consumer-directed realm may result in the mass processing and commercialisation of neural data (as has been the case with social media data) and even threaten the mental privacy of individuals. To prevent this, some argue that all raw neural data should be conceptualised and regulated as “medical data” even if it is collected from consumer-directed devices in obviously non-clinical settings. In this paper, we argue that without a clearer formulation of what does and does not count as medical data, this approach might also uncritically enlarge the scope for medical influence and lead to an unwarranted medicalisation of everyday aspects of mental life. Indeed, if we were to accept the position that all neural data is medical data because it offers insights into personally sensitive information (such as a person’s thoughts, emotions, or intentions), then this could unnecessarily expand the boundaries of medical data to other forms of data that otherwise seem non-clinical. If all brain data (neural activity data) is considered medical data even when collected from consumer-directed devices, then we might unintentionally erode meaningful distinctions between what is and what is not rightfully in the purview of medicine. PubDate: 2023-11-15
Abstract: Technological developments increasingly enable monitoring and steering the behavior of individuals. Enforcement of the law by means of technology can be much more effective and pervasive than enforcement by humans, such as law enforcement officers. However, it can also bypass legislators and courts and minimize any room for civil disobedience. This significantly reduces the options for challenging legal rules, which, in turn, can impede the development of legal systems. In this paper, an analogy is made with evolutionary biology to illustrate that the possibility of deviating from legal rules and existing norms is sometimes necessary for the further development of legal systems. Some room to break the law, for instance through civil disobedience or imperfect enforcement of the law, will ensure sufficient variation. This allows for properly evolving legal systems that can continue to provide fair solutions, even as society and concepts of fairness develop further. PubDate: 2023-11-15
Abstract: It is estimated that one in three women experience intimate partner violence (IPV) over the course of their life. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology and also begin evaluating this project both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it. PubDate: 2023-11-04
Abstract: Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support their understanding of AI systems and data throughout their lifecycle. In recent years, an increasing number of approaches for documenting AI and datasets have emerged, both within academia and the private sector. In this work, we identify the 36 most relevant ones from more than 2200 papers related to trustworthy AI. We assess their relevance from the angle of European regulatory objectives, their coverage of AI technologies and economic sectors, and their suitability to address the specific needs of multiple stakeholders. Finally, we discuss the main documentation gaps found, including the need to better address data innovation practices (e.g., data sharing, data reuse) and large-scale algorithmic systems (e.g., those used in online platforms), and to widen the focus from algorithms and data to AI systems as a whole. PubDate: 2023-10-28
Abstract: The new wave of ‘foundation models’, general-purpose generative AI models for the production of text (e.g., ChatGPT) or images (e.g., MidJourney), represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required. PubDate: 2023-10-28
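To make the proposed tool concrete, the following minimal Python sketch shows one way the public query interface described in the abstract above might work, assuming the provider retains fingerprints (hashes) of the content its model has generated. The class names, the registry approach, and the normalisation step are illustrative assumptions, not the authors’ design; the paper itself outlines several design options, and real alternatives include watermarking and trained detectors.

# Illustrative sketch only: a hypothetical fingerprint registry behind the
# public query tool described in the abstract. All names and the matching
# strategy are assumptions; real designs might rely on watermarking or
# trained detectors instead.
import hashlib
from dataclasses import dataclass


@dataclass
class ProvenanceResult:
    generated: bool   # True if the item matches a known model output
    match_type: str   # "exact", "partial", or "none"


class GeneratedContentRegistry:
    def __init__(self) -> None:
        self._fingerprints: set[str] = set()

    def record(self, content: str) -> None:
        # Called by the provider each time the model produces an output.
        self._fingerprints.add(self._fingerprint(content))

    def query(self, content: str) -> ProvenanceResult:
        # Public query: was this item (wholly or partly) model-generated?
        if self._fingerprint(content) in self._fingerprints:
            return ProvenanceResult(generated=True, match_type="exact")
        for chunk in (p.strip() for p in content.split("\n\n")):
            if chunk and self._fingerprint(chunk) in self._fingerprints:
                return ProvenanceResult(generated=True, match_type="partial")
        return ProvenanceResult(generated=False, match_type="none")

    @staticmethod
    def _fingerprint(text: str) -> str:
        # Normalise whitespace and case so trivial edits do not defeat
        # exact matching.
        normalised = " ".join(text.lower().split())
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()


registry = GeneratedContentRegistry()
registry.record("The model wrote this paragraph.")
print(registry.query("the model  wrote this paragraph."))  # exact match

Even this toy makes the trade-offs visible: partial matching (here, hashing paragraph by paragraph) is easily defeated by paraphrasing, which is one reason the abstract flags the need for further input from researchers and policymakers.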
Abstract: Smart Cities consist of a multitude of interconnected devices and services designed, among other things, to enhance efficiency, comfort, and safety. To achieve these aims, smart cities rely on an interplay of measures, including the deployment of interventions targeted at fostering certain human behaviors, such as saving energy, and the collection and exchange of sensor and user data. Both aspects have ethical implications, e.g., when it comes to intervention design or the handling of privacy-related data such as personal information, user preferences, or geolocations. The resulting concerns must be taken seriously, as they reduce user acceptance and can even lead to the abandonment of otherwise promising Smart City projects. Established guidelines for ethical research and practice from the psychological sciences provide a useful framework for the kinds of ethical issues raised when designing human-centered interventions or dealing with user-generated data. This article therefore reviews relevant psychological guidelines and discusses their applicability to the Smart City context. A special focus is placed on the guidelines’ implications and the resulting challenges for certain Smart City applications. Additionally, potential gaps in current guidelines and the limits of their applicability are reflected upon. PubDate: 2023-10-24
Abstract: Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections to VVGs, have persisted. The aim of this paper is to determine which, if any, of the concerns raised about VVGs are legitimate. We argue that common moral objections to VVGs are unsuccessful, but that a plausible critique can be developed that captures the insights of these objections while avoiding their pitfalls. Our view suggests that the moral badness of a game depends on the extent to which its internal logic expresses or encourages objectionable attitudes in its players. This allows us to recognize that some games are morally worse than others, and that it can be morally wrong to design and play some VVGs, but that the moral badness of these games does not necessarily depend on how violent they are. PubDate: 2023-10-16
Abstract: The recent proliferation of AI scandals has led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation, criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on top-down, expert-centric governance. To fill this gap, we propose the use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of the stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model increases the legitimacy of the decision-making process and its deliverables and the public’s participation in them, curbs the industry’s over-influence and lobbying, and diminishes the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound basis for both public and private organisations in smart societies to construct a decentralised, bottom-up, participative digital democracy. PubDate: 2023-10-06
Abstract: We propose to expand the conversation around moral enhancement from direct brain-altering methods to include technological means of modifying the environments and media through which agents can achieve moral improvement. Virtual Reality (VR) based enhancement would not bypass a person’s agency, much less their capacity for reasoned reflection. It would allow agents to critically engage with moral insights occasioned by a technologically mediated intervention. Users would gain access to a vivid ‘experience machine’ that allows for embodied presence and immersion in a virtual world that meaningfully replicates relevant aspects of real life. We explore how VR can train empathy and foster moral growth in complex ways that would be inaccessible even for traditional moral education. Virtual Reality Perspective Taking is a unique medium for making empathy more reflective. PubDate: 2023-09-26
Abstract: With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that disputes have centred on three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against. This typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that, despite utopian claims, social stereotypes continue to plague data and thus threaten to unconsciously reproduce these forms of discrimination in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which can only be made visible through public debate and contestation. Such debate and contestation are less likely if the right to explanation is realized through personalized algorithms, which could reinforce an individualized perception of the social that blocks rather than encourages collective mobilization. PubDate: 2023-09-20
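For readers unfamiliar with the mathematical indicators of non-bias mentioned above, the following toy Python sketch computes one common example, the demographic parity difference, in a hypothetical insurance-pricing setting. The premiums, group labels, and threshold are invented for illustration and are not drawn from the paper.

# Toy illustration of one algorithmic-fairness indicator (demographic parity
# difference). The data and threshold below are hypothetical.
from statistics import mean


def demographic_parity_difference(premiums, groups, threshold):
    # Difference between groups "A" and "B" in the rate at which quotes
    # exceed the 'high premium' threshold.
    high = [p > threshold for p in premiums]
    rate_a = mean(h for h, g in zip(high, groups) if g == "A")
    rate_b = mean(h for h, g in zip(high, groups) if g == "B")
    return rate_a - rate_b


quotes = [520, 610, 480, 700, 430, 650]   # hypothetical annual premiums
labels = ["A", "A", "A", "B", "B", "B"]   # hypothetical protected groups
print(demographic_parity_difference(quotes, labels, threshold=600))

The abstract’s point is visible even in this toy: the indicator can only be computed once the protected groups “A” and “B” have been identified, which is exactly what public debate and contestation are needed for.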
Abstract: There is an ethical dilemma to consider when updating predictive clinical artificial intelligence (AI) models, which should be part of the departmental quality improvement process. One needs to consider whether withdrawing the AI model is necessary to obtain the relevant information from a naive patient population, or whether to use causal inference techniques to obtain this information. Withdrawing an AI model from patient care might pose challenges if the AI model is considered standard of care, while the use of causal inference will not be reliable if the relevant statistical assumptions do not hold true. Hence, each of these two updating strategies is associated with patient risks, but a lack of reliable data might endanger future patients. Similarly, not withdrawing an outdated AI might also expose patients to risk. Here I propose a high-level ethical framework, epistemic risk management, that provides guidance on which route of model updating should be taken, based on the likelihood that the assumptions used during the creation of the original AI model, and the assumptions required for causal inference, hold true. This approach balances our uncertainty about the status of the AI as standard of care against the risk of not obtaining the necessary data, so as to increase the probability of benefiting current and future patients for whose care the AI is being used. PubDate: 2023-09-08 DOI: 10.1007/s10676-023-09721-x
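The abstract describes the epistemic risk management framework only at a high level, so the Python fragment below is a loose, hypothetical illustration rather than the author’s method: it encodes a simple decision rule that compares the two updating routes by how likely their respective assumptions are judged to hold.

# Purely illustrative: a hypothetical decision rule in the spirit of weighing
# the likelihood that each route's assumptions hold. Probabilities and the
# threshold are invented inputs, not part of the published framework.
def preferred_updating_route(p_original_assumptions_hold: float,
                             p_causal_assumptions_hold: float,
                             confidence_threshold: float = 0.7) -> str:
    if p_causal_assumptions_hold >= confidence_threshold:
        # Causal inference is judged reliable enough to update without
        # withdrawing the model from patient care.
        return "update via causal inference, keeping the model in use"
    if p_original_assumptions_hold < confidence_threshold:
        # The original model's assumptions are themselves doubtful, so its
        # claim to be standard of care is weak and withdrawal is easier to
        # justify.
        return "withdraw the model and collect data from a naive population"
    # Both routes carry substantial epistemic risk: escalate the decision.
    return "escalate to the departmental quality improvement process"


# Hypothetical elicited probabilities, for illustration only.
print(preferred_updating_route(0.85, 0.55))

The point of the sketch is only that the choice of route is driven by explicit assessments of the relevant assumptions, which is the balancing act the abstract describes.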
Abstract: Reflective equilibrium is overdue a twenty-first century update. Despite its apparent popularity, there is scant evidence that theorists ever thoroughly implement the method, and fewer still openly and transparently publish their attempts to do so in print, stymying its supposed justificatory value. This paper proposes digitising reflective equilibrium as a solution. Inspired by the global open science movement, it advocates coupling a novel, digital implementation of the equilibrating process with new publication norms that can capitalise on the inherent reproducibility of digital data. The argument is structured around three main claims: that digitising will make it easier to (a) methodically construct, (b) widely disseminate, and (c) thoroughly critique reflective equilibria. Practical guidance is also provided throughout. Altogether, it is argued that digitisation will not only help theorists to better realise reflective equilibrium’s latent theoretical potential, but also greatly extend its value as a justificatory device in contemporary academic discourses. PubDate: 2023-09-07 DOI: 10.1007/s10676-023-09722-w
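As a purely illustrative sketch of what a reproducible digital record of the equilibrating process could look like, the Python fragment below stores considered judgements, principles, and a revision log that could be published alongside the finished argument. The field names and structure are assumptions for illustration, not the paper’s implementation.

# Illustrative only: a minimal, serialisable record of an equilibrating
# process. Field names and structure are hypothetical.
import json
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class Revision:
    when: str        # ISO date of the change
    change: str      # what was revised (a judgement or a principle)
    rationale: str   # why the theorist made the revision


@dataclass
class EquilibriumRecord:
    considered_judgements: list[str]
    principles: list[str]
    revisions: list[Revision] = field(default_factory=list)

    def revise(self, change: str, rationale: str) -> None:
        # Log every adjustment so the path to equilibrium can be audited.
        self.revisions.append(Revision(str(date.today()), change, rationale))

    def to_json(self) -> str:
        # Serialise for open publication and later critique or reuse.
        return json.dumps(asdict(self), indent=2)


record = EquilibriumRecord(
    considered_judgements=["It is wrong to punish the innocent."],
    principles=["Maximise aggregate welfare."],
)
record.revise(
    change="Qualified the welfare principle with a constraint against punishing the innocent.",
    rationale="The unqualified principle conflicts with the considered judgement.",
)
print(record.to_json())

Publishing such a record verbatim is one simple way the method’s intermediate steps could be disseminated and critiqued, rather than only its end state.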
Abstract: This article characterises the nature of cognitive warfare, its use of disinformation and computational propaganda, and its political and military purposes both in war and in conflict short of war. It discusses defensive and offensive measures to counter cognitive warfare and, in particular, measures that comply with relevant moral principles. PubDate: 2023-09-04 DOI: 10.1007/s10676-023-09717-7
Abstract: In this article, I argue that online echo chambers are in some cases and in some respects good. I do not attempt to refute arguments that they are harmful, but I argue that they are sometimes beneficial. In the first section, I argue that it is sometimes good to be insulated from views with which one disagrees. In the second section, I argue that the software-design principles that give rise to online echo chambers have a lot to recommend them. Further, the opposing principle, serendipity, could give rise to serious harm, in light of the conclusion of the first section that sometimes we are better off being insulated from some content online. In the third section, I argue that polarization can be a useful tool for inculcating the appropriate attitudes in a person. PubDate: 2023-08-22 DOI: 10.1007/s10676-023-09715-9
Abstract: Moral status arguments are typically formulated as descriptive statements that tell us something about the world. But philosophy of language teaches us that language can also be used performatively: we do things with words and use words to try to get others to do things. Does and should this theory extend to what we say about moral status, and what does it mean? Drawing on Austin, Searle, and Butler, and further developing relational views of moral status, this article explores what it means to conceptualize moral status in a performative way and in a way that reaches deep down to its basis; proposes a distinction between strong and weak versions of this view; and discusses the implications for thinking about the moral status of humans and nonhumans. Next to contributing to theory about performativity in the moral domain in general, this effort helps to reveal, understand, and evaluate the performative dimension of utterances made in public discussions about the moral status of animals and artificial agents such as robots and artificial intelligence. PubDate: 2023-08-18 DOI: 10.1007/s10676-023-09719-5
Abstract: Using robots in military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and thereby contradict the very justification for deploying robots at war, which is to decrease human deaths and injuries. We discuss two threats: an epistemological threat and a patient threat. The epistemological threat concerns the risk of mistaking robots for humans, owing to the limited ways of obtaining information about the external world, a risk that may be amplified by the rush and the need to fight with robots at a distance. The patient threat concerns the attachment that develops towards robots, which in military contexts may cause additional deaths through hesitance to sacrifice robots in order to save humans in peril, or through risking human lives to save robots. PubDate: 2023-08-17 DOI: 10.1007/s10676-023-09718-6
Abstract: When discussing AI alignment, we usually refer to the problem of teaching or training advanced autonomous AI systems to make decisions that are aligned with human values or preferences. Proponents of this approach believe it can be employed as a means of staying in control over sophisticated intelligent systems, thus avoiding certain existential risks. We identify three general obstacles on the path to implementing value alignment: a technological/technical obstacle, a normative obstacle, and a calibration problem. Presupposing, for the purposes of this discussion, that the technical and normative problems are solved, we focus on the problem of how to calibrate a system, for a specific value, to sit at a specific location within a spectrum stretching between righteous and normal or average human behavior. Calibration, or more specifically mis-calibration, also raises the issue of trustworthiness. If we cannot trust AI systems to perform tasks the way we intended, we will not use them on our roads or in our homes. In an era where we strive to construct autonomous machines endowed with common sense, reasoning abilities, and a connection to the world, so that they can act in alignment with human values, such mis-calibrations can make the difference between trustworthy and untrustworthy systems. PubDate: 2023-08-16 DOI: 10.1007/s10676-023-09716-8