Philosophy & Technology
Journal Prestige (SJR): 0.233
Citation Impact (CiteScore): 1
Number of Followers: 13  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 2210-5433 - ISSN (Online) 2210-5441
Published by Springer-Verlag
  • Ethical Idealism, Technology and Practice: a Manifesto

      Abstract: Technology has become one of the main channels through which people engage in most of their everyday activities. When working, learning, or socializing, the affordances created by technological tools determine the way in which users interact with one another and their environment, thus favoring certain actions and behaviors while discouraging others. The ethical dimension behind the use of technology has already been studied in recent works, but the question is often formulated in a protective way that focuses on shielding users from potential detrimental effects. Nevertheless, when considering the collateral ethical benefits that the use of technology could bring about, virtue ethics and the notions of “practice” and “practical wisdom” present new opportunities to harness this potential. By understanding the combination of technology, its users, and their interactions as a system, technology can be seen as the space where most of its users’ daily practice happens. Through this practice, users have the chance to collaterally develop and enhance their ethical awareness, sensitivity, and reasoning capabilities. This work is shaped as a manifesto that provides the background, motivations, and directions needed to ask a complementary question about the ethics of technology, one aimed at the potential behind the use of technology. Instead of focusing on shielding users, the proposed ethical idealist approach to the ethics of technology aims to empower them by understanding their use of technology as the space where the development of their practical wisdom, understood in the virtue ethics sense, takes place.
      PubDate: 2022-09-09
       
  • Correction to: Big Tech and Antitrust: An Ordoliberal Analysis

      PubDate: 2022-09-06
       
  • Technology and French Thought: a Dialogue Between Jean-Luc Nancy and
           François-David Sebbah

      Abstract: This paper is not an article in the usual sense. It is a dialogue between François-David Sebbah, one of the two editors of this topical collection, and Jean-Luc Nancy, one of the most eminent representatives of contemporary French Thought. The dialogue took place in written form in the first half of 2021, because of the sanitary restrictions related to the COVID-19 pandemic and because Nancy was seriously ill. Sebbah sent Nancy a text, corresponding to Section 2.1, and Nancy responded with another text, corresponding to Section 2.2. Unfortunately, Nancy died on August 23, 2021, and could not revise his own text or pursue the dialogue as originally planned. For this reason, an introductory clarification by Sebbah, corresponding to Section 1, has been added. The purpose of this clarification is to introduce the reader to Nancy’s philosophy of technology—although technology never had a central role in Nancy’s reflections. In Section 2.1, Sebbah proposes a distinction between “French Theory,” “French Thought,” and “French Philosophy.” He also proposes a list of twelve possible intersections between French Thought and the philosophy of technology. In Section 2.2, Nancy criticizes the use of expressions such as “French Thought.” He also insists, in a Heideggerian vein, on the fact that Technology (with a capital “T”) does not depend on human ends but has its own ends.
      PubDate: 2022-08-31
       
  • Real Fakes: The Epistemology of Online Misinformation

      Abstract: Many of our beliefs are acquired online. Online epistemic environments are replete with fake news, fake science, fake photographs and videos, and fake people in the form of trolls and social bots. The purpose of this paper is to investigate the threat that such online fakes pose to the acquisition of knowledge. I argue that fakes can interfere with one or more of the truth, belief, and warrant conditions on knowledge. I devote most of my attention to the effects of online fakes on satisfaction of the warrant condition, as these have received comparatively little attention. I consider three accounts of the conditions under which fakes compromise the warrant condition. I argue for the third of these accounts, according to which the propensity of fakes to exist in an environment threatens warrant acquisition in that environment. Finally, I consider some limitations on the epistemic threat of fakes and suggest some strategies by which this threat can be mitigated.
      PubDate: 2022-08-31
       
  • Socially Disruptive Technologies, Contextual Integrity, and Conservatism
           About Moral Change

      Abstract: This commentary is a response to “Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change” by Elizabeth O’Neill (Philosophy & Technology (2022)). It argues that while contextual integrity (CI) might be a useful addition to the toolkit of approaches for ethical technology assessment, a CI approach might not be able to uncover all morally relevant impacts of technological change. Moreover, the inherent conservatism of a CI approach might be problematic in cases in which we encounter new kinds of morally problematic situations, such as climate change, or when technology reinforces historically grown injustices.
      PubDate: 2022-08-30
       
  • Algorithmic Political Bias Can Reduce Political Polarization

      Abstract: Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke (Philosophy and Technology, 35, 7, 2022) argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism (what I shall call ‘implied political labeling’) that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications (thus causing ‘looping effects’). Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.
      PubDate: 2022-08-23
       
  • Democracy in the Time of “Hyperlead”: Knowledge Acquisition via
           Algorithmic Recommendation and Its Political Implication in Comparison
           with Orality, Literacy, and Hyperlink

      Abstract: Why hasn’t ICT promoted democracy, and why hasn’t ICT been controlled by democratic governance? To answer this question, this research begins its investigation by comparing knowledge acquisition systems throughout history: orality, literacy, hyperlink, and hyperlead. “Hyperlead” is a newly coined concept that emphasizes the passivity of people when acquiring knowledge and information via algorithmic recommendation technologies. Subsequently, the four systems are compared in terms of their epistemological characteristics and political implications. It is argued that, while literacy and hyperlink contributed to the furthering of democracy, hyperlead poses a fundamental challenge to it, undermining human autonomy to make decisions and aggravating vertical and lateral polarizations. In addition, the similarity between orality and hyperlead is addressed. Finally, suggestions to improve or avert the current trend are provided. What happened during the transition period from orality to literacy, and subsequently to hyperlink, could be a reference for an alternative to hyperlead. Some technical adjustments and appropriate regulations requiring more transparency in algorithmic recommendation systems would help us to overcome hyperlead and preserve human autonomy.
      PubDate: 2022-08-13
       
  • Contextual Integrity as a General Conceptual Tool for Evaluating
           Technological Change

      Abstract: The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum (2009) introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that is applicable to a broader variety of circumstances, and I outline a new, general procedure for technological evaluation. Among the attractive features of the proposed approach to evaluating technological change are its context-sensitivity, adaptability, and principled presumptive conservatism, enabled by the mechanism the approach supplies for reevaluating existing practices, norms, and values.
      PubDate: 2022-08-12
       
  • AI Ethics and the Automation Industry: How Companies Respond to Questions
           About Ethics at the automatica Trade Fair 2022

      Abstract: Against the backdrop of a recent history of ongoing efforts to institutionalize ethics in ways that also target corporate environments, we asked ourselves: how do company representatives at the automatica 2022 trade fair in Munich respond to questions about ethics? To this end, we conducted an exploratory survey at the automatica 2022 in Munich, asking 22 company representatives at various booths from various industrial sectors the basic question: “Is there somebody in your company working on ethics?” Most representatives responded positively and tried to connect the term to pre-existing practices, processes, or organizational entities in their respective companies. Mostly, they located ethics as relevant to their organization on an institutional, cultural, inter-company, or product level. This exploratory investigation has also shown that the ongoing debates and regulatory efforts about ethics in AI have not yet become a major selling point for company representatives at the trade fair.
      PubDate: 2022-08-05
       
  • How to Do Things with Information Online. A Conceptual Framework for
           Evaluating Social Networking Platforms as Epistemic Environments

      Abstract: This paper proposes a conceptual framework for evaluating how social networking platforms fare as epistemic environments for human users. I begin by proposing a situated concept of epistemic agency as fundamental for evaluating epistemic environments. Next, I show that algorithmic personalisation of information makes social networking platforms problematic for users’ epistemic agency because these platforms do not allow users to adapt their behaviour sufficiently. Using the tracing principle inspired by the ethics of self-driving cars, I operationalise it and identify three requirements that automated epistemic environments need to fulfil: (a) users need to be afforded a range of skilled actions; (b) users need to be sensitive to the possibility of using their skills; and (c) the habits built when adapting to the platform should not undermine the user’s pre-existing skills. I then argue that these requirements are almost impossible to fulfil all at the same time on current social networking platforms; nevertheless, we need to pay attention to them whenever we evaluate an epistemic environment with automatic features. Finally, as an illustration, I show how Twitter, a popular social networking platform, fares against these requirements.
      PubDate: 2022-08-05
       
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons
           Systems—a Moral Gambit

      Abstract: In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way, and one accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful moral responsibility can only be discharged by human agents who are willing to take a moral gambit: they decide to design, develop, or deploy AWS despite the uncertainty about the effects an AWS may produce, hoping that unintended, unwanted, or unforeseen outcomes never occur, but accepting that they will be held responsible if such outcomes do occur. We argue that, while a moral gambit is permissible for the use of non-lethal AWS, this is not the case for the actions of lethal autonomous weapons systems.
      PubDate: 2022-08-05
       
  • Correction to: Autonomous Artificial Intelligence and Liability: a Comment
           on List

      PubDate: 2022-08-01
       
  • Algorithmic Political Bias—an Entrenchment Concern

      Abstract: This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed, and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of pursuing all the worthy values and goals of the political realm, and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”.
      PubDate: 2022-08-01
       
  • Metaverse: a Matter of Experience

      PubDate: 2022-07-30
       
  • Perceptions of Beauty in Security Ceremonies

      Abstract: When we use secure computer systems, we engage with carefully orchestrated and ordered interactions called “security ceremonies”, all of which exist to assure security. A great deal of attention has been paid to improving the usability of these ceremonies over the last two decades, to make them easier for end users to engage with. Yet usability improvements do not seem to have endeared end users to ceremonies. As a consequence, human actors might subvert the ceremony’s processes or avoid engaging with it. Here, we consider whether beautification could be one way of making ceremonies more appealing. To explore beautification in this context, we carried out three studies. Study 1 surveyed 250 participants to derive a wide range of potential dimensions of “beautiful ceremonies”. These responses were sorted into dominant themes and converted into statements, which fed into the second study, with 309 respondents, to reveal the dominant dimensions constituting beauty. Study 3 asked 41 participants to carry out a Q-sort, which revealed the ways that people combine the identified dimensions when characterising security ceremonies as “beautiful”. These studies have allowed us to pin down the perceived dimensions of beauty in the context of security ceremonies, and also to understand how people combine these dimensions in different ways when judging security ceremonies to be beautiful, confirming the old adage of beauty being “in the eye of the beholder”. We conclude by highlighting the constraints imposed by the overarching requirement for security to be maintained in the face of any usability improvements and beautification endeavours.
      PubDate: 2022-07-30
       
  • Technology as Driver for Morally Motivated Conceptual Engineering

      Abstract: New technologies are a source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as recognizing that different researchers are working on the same kind of project can help solve the methodological questions one is likely to encounter. In this paper, we present three case studies in which philosophers of technology implicitly engage in conceptual engineering (without naming it as such). We subsequently reflect on the case studies to find out how they illustrate conceptual engineering as an appropriate method for dealing with pressing concerns in the philosophy of technology. We have two main goals. First, we want to contribute to the literature on conceptual engineering by presenting concrete examples of conceptual engineering in the philosophy of technology. This is especially relevant because the technologies designed on the basis of the conceptual work done by philosophers of technology potentially have crucial moral and social implications. Second, we want to make explicit what choices are made when doing this conceptual work. Making explicit that some of the implicit assumptions are, in fact, debated in the literature allows for reflection on these questions. Ultimately, our hope is that conscious reflection leads to an improvement of the conceptual work done.
      PubDate: 2022-07-28
       
  • Social Media and its Negative Impacts on Autonomy

      Abstract: How social media impacts the autonomy of its users is a topic of increasing focus. However, much of the literature that explores these impacts fails to engage in depth with the philosophical literature on autonomy. This has resulted in a failure to consider the full range of impacts that social media might have on autonomy. A deeper consideration of these impacts is thus needed, given the importance of both autonomy as a moral concept and social media as a feature of contemporary life. Drawing on this philosophical literature, we argue that autonomy is broadly a matter of developing autonomy competencies, having authentic ends and control over key aspects of your own life, and not being manipulated, coerced, or controlled by others. We show how the autonomy of users of social media can be disrespected and harmed through the control that social media can have over its users’ data, attention, and behaviour. We conclude by discussing various recommendations to better regulate social media.
      PubDate: 2022-07-27
       
  • Intelligence as a Social Concept: a Socio-Technological Interpretation of
           the Turing Test

      Abstract: Alan Turing’s 1950 imitation game has been widely understood as a means for testing whether an entity is intelligent. Following a series of papers by Diane Proudfoot, I offer a socio-technological interpretation of Turing’s paper and present an alternative way of understanding both the imitation game and Turing’s concept of intelligence. Turing, I claim, saw intelligence as a social concept, meaning that possession of intelligence is a property determined by society’s attitude toward the entity. He realized that as long as human society held a prejudiced attitude toward machinery—seeing machines a priori as mindless objects—machines could not be said to be intelligent, by definition. He also realized, though, that if humans’ a priori, chauvinistic attitude toward machinery changed, the existence of intelligent machines would become logically possible. Turing thought that such a change would eventually occur: he believed that once scientists overcame the technological challenge of constructing sophisticated machines that could imitate human verbal behavior—i.e., do well in the imitation game—humans’ prejudiced attitude toward machinery would have altered in such a way that machines could be said to be intelligent. The imitation game, for Turing, was not an intelligence test, but a technological aspiration whose realization would likely involve a change in society’s attitude toward machines.
      PubDate: 2022-07-22
       
  • Transparency as Manipulation? Uncovering the Disciplinary Power of
           Algorithmic Transparency

      Abstract: Automated algorithms are silently making crucial decisions about our lives, but most of the time we have little understanding of how they work. To counter this hidden influence, there have been increasing calls for algorithmic transparency. Much ink has been spilled over the informational account of algorithmic transparency—over how much information should be revealed about the inner workings of an algorithm. But few studies question the power structure beneath the informational disclosure of the algorithm. As a result, information disclosure itself can become a means of manipulation used by a group of people to advance their own interests. Instead of concentrating on information disclosure, this paper examines algorithmic transparency from the perspective of power, explaining how algorithmic transparency under a disciplinary power structure can be a technique for normalizing people’s behavior. The informational disclosure of an algorithm can not only set up de facto norms, but also build a scientific narrative around the algorithm that justifies those norms. People are thereby internally motivated to follow those norms with less critical analysis. This article suggests that we should not simply open the black box of an algorithm without challenging the existing power relations.
      PubDate: 2022-07-22
       
  • Why There Are Still Moral Reasons to Prefer Extended over Embedded: a
           (Short) Reply to Cassinadri

      Abstract: In a recent paper, Cassinadri (2022) raised substantial criticism of the possibility of using moral reasons to endorse the hypothesis of extended cognition (EXT) over its most popular alternative, the embedded view (EMB). In particular, Cassinadri criticized four of the arguments we formulated to defend EXT and argued that our claim that EXT might be preferable to EMB (on the grounds of its progressiveness and inclusiveness) does not stand close scrutiny. In this short reply, we point out—contra Cassinadri—why we still believe that there are moral reasons to prefer EXT over EMB, and hence why we think that the former is more inclusive and more progressive than the latter.
      PubDate: 2022-07-20
       
 