
Philosophy & Technology
Journal Prestige (SJR): 0.233
Citation Impact (CiteScore): 1
Number of Followers: 13  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 2210-5433 - ISSN (Online) 2210-5441
Published by Springer-Verlag [2467 journals]
  • Prediction via Similarity: Biomedical Big Data and the Case of Cancer
           Models

      Abstract: In recent years, the biomedical field has witnessed the emergence of novel tools and modelling techniques driven by the rise of the so-called Big Data. In this paper, we address the issue of predictability in biomedical Big Data models of cancer patients, with the aim of determining the extent to which computationally driven predictions can be implemented by medical doctors in their clinical practice. We show that for a specific class of approaches, called k-Nearest Neighbour algorithms, the ability to draw predictive inferences relies on a geometrical, or topological, notion of similarity encoded in a well-defined metric, which determines how close the characteristics of distinct patients are on average. We then discuss the conditions under which the relevant models can yield reliable and trustworthy predictive outcomes.
      PubDate: 2023-01-26
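      A minimal Python sketch of the prediction-via-similarity idea the abstract describes: a k-Nearest Neighbour prediction whose output depends entirely on the metric chosen to measure how close two patients' characteristics are. The feature values and outcome labels below are hypothetical.

        import numpy as np

        def knn_predict(X_train, y_train, x_new, k=3, metric=None):
            # Default to Euclidean distance: a geometric notion of
            # similarity encoded in a well-defined metric.
            if metric is None:
                metric = lambda a, b: np.linalg.norm(a - b)
            # Distance from the new patient to every known patient.
            distances = [metric(x, x_new) for x in X_train]
            # Indices of the k most similar patients.
            nearest = np.argsort(distances)[:k]
            # Predict by majority vote over their known outcomes.
            return np.bincount(y_train[nearest]).argmax()

        # Hypothetical normalized patient features and binary outcomes.
        X_train = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
        y_train = np.array([1, 1, 0, 0])
        print(knn_predict(X_train, y_train, np.array([0.15, 0.85])))  # -> 1

      The choice of metric (and of k) is exactly where the reliability conditions the paper discusses enter: a different metric can change which patients count as "similar" and hence change the prediction.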
       
  • Care Ethics and the Future of Work: a Different Voice

      Abstract: The discourse on the future of work should learn from a turn in philosophy that occurred in the 1980s, one that recognizes that the good life towards which ethics strives can only be reached on a foundation of caring relationships (Gilligan, 1982; Noddings, 1984). Care ethics recognizes that human well-being is a group project, one that involves strong relationships and concern for bodies and emotions. Too often, these features are left out of research exploring robotics in the workplace. This paper outlines the main tenets of care ethics, then applies the moral framework to the context of industrial and retail settings using robots. First, this approach sees these contexts through a relational lens, helping to identify, evaluate, and improve relationships critical to ensuring workers’ well-being. Second, care ethics considers taking care of people’s bodies beyond mere safety, examining how working with robots can exacerbate societal or economic pressures. Lastly, care ethics takes emotions as an important source of knowledge in building and supporting care. Additionally, this paper contributes to the care ethics literature by applying the framework to the context of robotized industrial workplaces, which has yet to be done.
      PubDate: 2023-01-26
       
  • Egalitarianism and Algorithmic Fairness

      Abstract: What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as a case concerning the ethics of the distribution of algorithmic classifications across groups (as opposed to, e.g., the fairness of data collection). I begin with a short introduction of algorithmic fairness as a problem discussed in machine learning. I then show how the criticism raised against classification parity is a form of the leveling-down objection, and I interpret the egalitarianism of classification parity as deontic egalitarianism. I then discuss a challenge to this interpretation and suggest a revision. Finally, I examine how my interpretation provides proponents of classification parity with a response to the leveling-down criticism and how it relates to a recent suggestion to evaluate fairness for automated decision-making systems based on risk and welfare considerations from behind a veil of ignorance.
      PubDate: 2023-01-19
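      A hedged Python sketch of one classification parity criterion the abstract mentions: equality of a performance measure (here, the false-positive rate) across groups. The labels, predictions, and group assignments below are hypothetical.

        import numpy as np

        def false_positive_rate(y_true, y_pred):
            # FPR = FP / (FP + TN), computed over the actual negatives.
            negatives = (y_true == 0)
            return (y_pred[negatives] == 1).mean()

        # Hypothetical outcomes, algorithmic classifications, and groups.
        y_true = np.array([0, 0, 1, 1, 0, 0, 1, 0])
        y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])
        group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

        for g in (0, 1):
            mask = (group == g)
            fpr = false_positive_rate(y_true[mask], y_pred[mask])
            print(f"group {g}: FPR = {fpr:.2f}")
        # Classification parity (for this measure) holds only when the
        # rates are equal across groups.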
       
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the
           Debate

      PubDate: 2023-01-18
       
  • Justifying a Capability Approach to Brain Computer Interface

      Abstract: Previously, we introduced a capability approach to assess the responsible use of brain-computer interface. In this commentary, we say more about the ethical basis of our capability view and respond to three objections. The first objection holds that by stressing that capability lists are provisional and subject to change, we threaten the persistence of human dignity, which is tied to capabilities. The second objection states that we conflate capabilities and abilities. The third objection claims that the goal of using neuroenhancements should be preserving capabilities, not altering them.
      PubDate: 2023-01-07
       
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate

      Abstract: AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature around the concept of responsibility gaps and different solutions have been devised to close or bridge these gaps. In order to move forward in the research around LAWS and the problem of responsibility, it is important to increase our understanding of the different perspectives and discussions in this debate. This paper attempts to do so by disentangling the various arguments and providing a critical overview. After giving a brief outline of the state of the technology of LAWS, I will review the debates over responsibility gaps using three differentiators: those who believe in the existence of responsibility gaps versus those who do not, those who hold that responsibility gaps constitute a new moral problem versus those who argue they do not, and those who claim that solutions can be successful as opposed to those who believe that it is an unsolvable problem.
      PubDate: 2023-01-05
       
  • Climate Change and the Terrible Hope

      PubDate: 2022-12-23
       
  • Abilities, Capabilities, and Brain-Computer Interfaces: a Response to
           Jecker and Ko

      Abstract: In a recent article, Jecker and Ko propose that a capabilities approach can be useful as an ethical framework for evaluating the use of BCI applications. Jecker and Ko defend this application, in part, because a capabilities list is not necessarily unchanging, but can account for rapid enhancements in human abilities. In this commentary, I argue that, though the capabilities approach is provisional, its primary relevance for BCI emerges from the ways in which capabilities remain constant amidst changing human abilities.
      PubDate: 2022-12-16
       
  • HoloFoldit and Hologrammatically Extended Cognition

      Abstract: How does the integration of mixed reality devices into our cognitive practices impact the mind from a metaphysical and epistemological perspective? In his innovative and interdisciplinary article, “Minds in the Metaverse: Extended Cognition Meets Mixed Reality” (2022), Paul Smart addresses this underexplored question, arguing that the use of a hypothetical application of the Microsoft HoloLens called “the HoloFoldit” represents a technologically high-grade form of extended cognizing from the perspective of neo-mechanical philosophy. This short commentary aims to (1) carve up the conceptual landscape of possible objections to Smart’s argument and (2) elaborate on the possibility of hologrammatically extended cognition, which is supposed to be one of the features of the HoloFoldit case that distinguishes it from more primitive forms of cognitive extension. In tackling (1), I do not mean to suggest that Smart does not consider or have sufficient answers to these objections. In addressing (2), the goal is not to argue for or against the possibility of hologrammatically extended cognition but to reveal some issues in the metaphysics of virtual reality upon which this possibility hinges. I construct an argument in favor of hologrammatically extended cognition based on the veracity of virtual realism (Chalmers, 2017) and an argument against it based on the veracity of virtual fictionalism (McDonnell and Wildman, 2019).
      PubDate: 2022-12-15
       
  • Correction to: On Making Phenomenologies of Technology More
           Phenomenological

      PubDate: 2022-12-14
       
  • Correction to: Technology as Driver for Morally Motivated Conceptual
           Engineering

      PubDate: 2022-12-14
       
  • AI, Suicide Prevention and the Limits of Beneficence

      Abstract: In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using them. To find out if that is the case, we start by providing two examples of AI-based suicide prevention in social media. Subsequently, we frame suicide prevention as an issue of beneficence, develop two fictional cases to explore the scope of the principle of beneficence and apply the lessons learned to Facebook’s employment of AI for suicide prevention. We show that Facebook is neither acting under an obligation of beneficence nor acting meritoriously. This insight leads us to the general question of who is entitled to help. We conclude that private companies like Facebook can play an important role in suicide prevention, if they comply with specific rules which we derive from beneficence and autonomy as core principles of biomedical ethics. At the same time, public bodies have an obligation to create appropriate framework conditions for AI-based tools of suicide prevention. As an outlook we depict how cooperation between public and private institutions can make an important contribution to combating suicide and, in this way, put the principle of beneficence into practice.
      PubDate: 2022-11-28
       
  • Decolonization of AI: a Crucial Blind Spot

      Abstract: Critics are calling for the decolonization of AI (artificial intelligence). The problem is that this technology is marginalizing other modes of knowledge with dehumanizing applications. What is needed to remedy this situation is the development of human-centric AI. However, there is a serious blind spot in this strategy that is addressed in this paper. The corrective that is usually proposed—participatory design—lacks the philosophical rigor to undercut the autonomy of AI, and thus the colonization spawned by this technology. A more radical or substantial proposal is advanced in this discussion that is known as community-based design. This alternative makes a theoretical maneuver that allows AI design to be directed by human agency, thereby introducing a safeguard that may help to prevent colonization by this technology.
      PubDate: 2022-11-28
       
  • The Unique and Practical Advantages of Applying A Capability Approach to
           Brain Computer Interface

      Abstract: Intelligent neurotechnology is an emerging field that combines neurotechnologies like brain-computer interface (BCI) with artificial intelligence. This paper introduces a capability framework to assess the responsible use of intelligent BCI systems and provide practical ethical guidance. It proposes two tests, the threshold and flourishing tests, that BCI applications must meet, and illustrates them in a series of cases. After a brief introduction (Section 1), Section 2 sets forth the capability view and the two tests. It illustrates the threshold test using examples from clinical medicine of BCI applications that enable patients with profound disabilities to function at a threshold level through computer mediation. Section 3 illustrates the flourishing test by exploring possible future applications of BCI involving neuroenhancements for healthy people, using examples adapted from research currently underway in the US military. Section 3 also applies a capability lens to a complex case involving dual effects, both therapeutic and non-therapeutic, showing how the threshold and flourishing tests resolve the case. Section 4 replies to three objections: neurorights are the best tool for assessing BCI; the two tests are moving targets; and the analysis utilizes a capability view to do work it is not designed for. The paper concludes that a capability view offers unique advantages and gives practical guidance for evaluating the responsible use of present and future BCI applications. Extrapolating from our analysis may help guide other emerging technologies, such as germline gene editing, expected to impact central human capabilities.
      PubDate: 2022-11-23
       
  • Justice and the Normative Standards of Explainability in Healthcare

      Abstract: Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
      PubDate: 2022-11-23
       
  • Alienation in a World of Data. Toward a Materialist Interpretation of
           Digital Information Technologies

      Abstract: The essay proposes to use alienation as a heuristic and conceptual tool for the analysis of the impact of digital information and communication technologies (ICTs) on users. It follows a historical materialist understanding, according to which data can be considered as things produced in an industrial fashion. A representational interpretation, according to which data would merely reflect a given reality, is untenable. It will be argued instead that data should be understood as an additional layer which has a transformative impact on reality as a whole. The analysis follows the early Marx in thinking of alienation as a structural condition which holds even if it is not experienced psychologically by users. Alienation means that the desire to express oneself and achieve social recognition, self-realization, and participation through the use of ICTs is incongruent with the inherent purposes and functions of these technologies. Digital platforms create the ideological illusion that users contribute to the production of data solely for their own gratification. Users, in turn, react to structural alienation either by internalizing it, in which case they may become enthusiastic data producers, or by imposing alienating conditions on other users. Finally, it will be shown that a strong notion of privacy is less suited to articulating concerns about the impact of ICTs because it expresses a desire for privilege and individual control which has become unfulfillable in the environment of data industries. The essay closes by showing some of the changes in institutional structures and behavior that could follow from the focus on alienation.
      PubDate: 2022-11-18
       
  • Correction to: Escaping the Impossibility of Fairness: From Formal to
           Substantive Algorithmic Fairness

      PubDate: 2022-11-07
      DOI: 10.1007/s13347-022-00593-5
       
  • Introduction to the Topical Collection on AI and Responsibility

      PubDate: 2022-11-02
      DOI: 10.1007/s13347-022-00583-7
       
  • Ethical Redress of Racial Inequities in AI: Lessons from Decoupling
           Machine Learning from Optimization in Medical Appointment Scheduling

      Abstract: An Artificial Intelligence algorithm trained on data that reflect racial biases may yield racially biased outputs, even if the algorithm on its own is unbiased. For example, algorithms used to schedule medical appointments in the USA predict that Black patients are at a higher risk of no-show than non-Black patients; though technically accurate given existing data, that prediction results in Black patients being overwhelmingly scheduled in appointment slots that cause longer wait times than those of non-Black patients. This perpetuates racial inequity, in this case lesser access to medical care. This gives rise to one type of Accuracy-Fairness trade-off: preserve the efficiency offered by using AI to schedule appointments or discard that efficiency in order to avoid perpetuating ethno-racial disparities. Similar trade-offs arise in a range of AI applications including others in medicine, as well as in education, judicial systems, and public security, among others. This article presents a framework for addressing such trade-offs where Machine Learning and Optimization components of the algorithm are decoupled. Applied to medical appointment scheduling, our framework articulates four approaches intervening in different ways on different components of the algorithm. Each yields specific results, in one case preserving accuracy comparable to the current state-of-the-art while eliminating the disparity.
      PubDate: 2022-10-20
      DOI: 10.1007/s13347-022-00590-8
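      A minimal Python sketch of the decoupling idea, under stated assumptions: the Machine Learning component scores no-show risk, while a separate Optimization component decides slot order, so a fairness intervention can target the latter without retraining the model. The shuffle-based intervention and the predicted_no_show_risk stand-in below are illustrative assumptions, not the article's four specific approaches.

        import random

        def predicted_no_show_risk(patient):
            # Hypothetical stand-in for a trained no-show prediction model.
            return patient["risk"]

        def assign_slots(patients, intervene=True):
            # Machine Learning step: score every patient.
            scored = [(predicted_no_show_risk(p), p) for p in patients]
            if intervene:
                # Optimization step with a fairness intervention: risk is
                # not used to push high-risk patients into late slots.
                random.shuffle(scored)
            else:
                # Status quo: highest predicted no-show risk gets the
                # latest slots, concentrating long waits on one group.
                scored.sort(key=lambda pair: pair[0])
            # Return names in slot order, earliest first.
            return [p["name"] for _, p in scored]

        patients = [{"name": "A", "risk": 0.7},
                    {"name": "B", "risk": 0.2},
                    {"name": "C", "risk": 0.5}]
        print(assign_slots(patients))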
       
  • Virtual Reality, Embodiment, and Allusion: an Ecological-Enactive Approach

      Abstract: It is common in the cognitive and computational sciences to regard virtual reality (VR) as composed of illusory experiences, given its immersive character. In this paper, we adopt an ecological-enactive perspective on cognition (Sect. 3) to evaluate the nature of VR and one’s engagement with it. Based on a post-cognitivist conception of illusion, we reject the commonly held assumption that virtual reality experiences (VREs) are illusory (Sect. 4). Our positive take on this issue is that VR devices, like other technological devices, can be embodied during use, which is why they can be the source of experiences (Sect. 5). We then propose a new concept to interpret VREs, namely, allusion, which means that the subject acts as if the virtual experiences are real (Sect. 6). This kind of engagement has a volitional aspect, which is evident in the onboarding of VR devices and which allows us to distinguish VREs from other experiences. We conclude that, even though we have experiences that afford certain interactions in VR, the strong continuity between cognitive and biological processes is not sustained therein. This characterizes a difference of kind—given the current state of technology—between VREs and fully fledged cognitive states, which nevertheless allows for constrained experimentation in cognitive science.
      PubDate: 2022-10-20
      DOI: 10.1007/s13347-022-00589-1
       
 