- Digital Civics and Algorithmic Citizenship in a Global Scenario
Abstract: How should the notion of civics be rethought in the digital age and within the infosphere? The commentary addresses this question by focusing on two main issues. The first part delves into the effects of the dynamics of “surveillance capitalism” and datafication processes on the possible developments of the idea of civics in the digital sphere. It stresses the need to set the issue of users’ data rights at the center of digital civic initiatives. The second part explores the notions of political community and membership that should underpin the idea of digital civics. It argues that the diverse range of values and virtues which shape civics on a global scale should be taken into consideration to define the spatial scope—or political geography—of the community to which digital civic initiatives are addressed. PubDate: 2023-05-22
- Should People Have a Right Not to Be Subjected to AI Profiling Based on Publicly Available Data? A Comment on Ploug
Abstract: Several studies have documented that when presented with data from social media platforms, machine learning (ML) models can make accurate predictions about users, e.g., about whether they are likely to suffer health-related conditions such as depression, mental disorders, and risk of suicide. In a recent article, Ploug (Philos Technol 36:14, 2023) defends a right not to be subjected to AI profiling based on publicly available data. In this comment, I raise some questions in relation to Ploug’s argument that I think deserve further discussion. PubDate: 2023-05-18
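To make the profiling mechanism described above concrete, here is a minimal, hypothetical Python sketch: a text classifier trained on a handful of invented public posts to score a sensitive attribute. The data, features, and model are illustrative assumptions only; they bear no relation to the cited studies or to Ploug's argument beyond showing how such predictions are produced.

```python
# Minimal sketch (toy data, simple scikit-learn pipeline) of the kind of
# profiling the abstract describes: a classifier trained to predict a
# sensitive attribute from publicly available posts. Real studies use far
# richer features and models; this only illustrates the mechanism.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical public posts paired with a self-reported label (1 = at risk).
posts = [
    "can't sleep again, everything feels pointless",
    "great run this morning, feeling energised",
    "skipped work, no motivation for weeks now",
    "excited for the weekend trip with friends",
]
labels = [1, 0, 1, 0]

profiler = make_pipeline(TfidfVectorizer(), LogisticRegression())
profiler.fit(posts, labels)

# The model now assigns a risk score to any new public post, whether or not
# its author consented to being profiled, which is the situation Ploug's right targets.
new_post = ["another sleepless night, I feel empty"]
print(profiler.predict_proba(new_post)[0][1])
```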
- When the Poem Must Come to an End: Reply to Amzallag’s Commentary on the Coexistence of Technopoiesis and Technopraxis
Abstract: Our recent paper examines the relevance of the latest dual conception of technopoiesis and technopraxis, the former denoting the prevailing, manifest importance of a technical process over its end-product and the latter the overall approach to technology conditioned by a dominantly teleological perspective. The paper proposes the idea of techno-onto-poiesis (counterbalanced by techno-ontic-praxis), which points to the efflorescence of techniques (and their material and metaphysical potentiality) classified typologically rather than sequentially. Alongside Edgerton’s view, the response calls on historians of ancient and modern technology to weigh in on the phenomena of technological changes and alternations, instead of essentializing the “old” and “new,” the “innovative” and the “conservative,” and discusses the potential challenges as we move forward in the study of the history of measurement and measuring thinking. PubDate: 2023-05-08
- The Future of Work: Augmentation or Stunting?
Abstract: The last decade has seen significant improvements in artificial intelligence (AI) technologies, including robotics, machine vision, speech recognition, and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human–machine cooperation can significantly improve worker productivity. In this context, it is often argued that (1) could lead to mass unemployment and that (2) therefore would be more desirable. We argue, however, that the labor-enabling scenario conflates two distinct possibilities. On the one hand, technology can increase productivity while also promoting “the goods of work,” such as the opportunity to pursue excellence, experience a sense of community, and contribute to society (human augmentation). On the other hand, higher productivity can also be achieved in a way that reduces opportunities for the “goods of work” and/or increases “the bads of work,” such as injury, reduced physical and mental health, and reduction of autonomy, privacy, and human dignity (human stunting). We outline the differences between these outcomes and discuss the implications for the labor market in the context of contemporary discussions on the value of work and human wellbeing. PubDate: 2023-05-04
- Liars and Trolls and Bots Online: The Problem of Fake Persons
Abstract: This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons. PubDate: 2023-05-03
- Between Fiction, Reality, and Ideality: Virtual Objects as Computationally Grounded Intentional Objects
Abstract: Virtual objects, such as online shops, the elements that make up virtual life in computer games, virtual maps, e-books, avatars, cryptocurrencies, chatbots, holograms, etc., are a phenomenon we now encounter at every turn: they have become a part of our life and our world. Philosophers—and ontologists in particular—have sought to answer the question of what, exactly, they are. These philosophers fall into two camps: some, pointing to the chimerical character of virtuality, hold that virtual objects are like dreams, illusions and fictions, while others, citing the real impact of virtuality on our world, take them to be real—an actual part of the real world, just like other real objects. In this article, we defend the thesis that both sides are wrong. Using Roman Ingarden’s phenomenological ontology, we advocate a position according to which a virtual object is a computationally grounded intentional object that has its existential foundation in computational processes compliant with a certain model of computation. We point out that virtuality is framed by a kind of ideal mathematical object: namely, mathematical models of computation, each of which in turn falls under its respective idea. We also refer to the idea of natural computation, which in conjunction with the ontological analysis carried out leads to the thesis that an object can be more or less virtual. PubDate: 2023-05-01
- Towards the End of the Designer Fallacy: How the Internet Empowers Designers over Users
Abstract: Multistability—the plurality of meanings of technological artifacts—is an emancipatory phenomenon insofar as it allows the user to freely appropriate the object according to his or her interests, even against the will of the designer. The objective of this article is to show how the trend of connecting physical and digital artifacts to the Internet poses a danger to the freedom inherent in multistability. By reducing the traditional separation between the artifact and the designer, the connection of the artifact to the Internet allows the designer to continually modify the software that governs or constitutes it, which involves a relative loss of the user’s power to determine its meaning. This shift of power toward the designer among the actors—designer, artifact, and user—from whose interplay the meaning of artifacts arises favors stabilities concentric with the will of the designer and hinders eccentric ones. Thus, we should rethink the truth value of the designer fallacy, that is, the claim that the meaning of artifacts is determined only by the designer. From a political point of view, the remote control of artifacts and their multistability is an effective pedagogical tool for educating human beings at a time in which texts, according to Sloterdijk, no longer serve this function. PubDate: 2023-04-29
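As a rough illustration of the mechanism this abstract points to, the following hypothetical Python sketch contrasts a stand-alone artifact, whose behaviour is fixed once it leaves the designer, with an Internet-connected artifact whose governing policy the designer can rewrite after the sale. The device names and policies are invented for illustration.

```python
# Minimal sketch (hypothetical devices and policies) contrasting a stand-alone
# artifact, fixed at the moment of purchase, with a connected artifact whose
# governing software the designer can rewrite at any time, illustrating the
# shift in power over meaning that the abstract describes.

class StandAloneKettle:
    """Behaviour fixed once; the user is free to repurpose it (multistability)."""
    def run(self):
        return "boil water (or warm sake, or sterilise jars...)"

class ConnectedKettle:
    """Behaviour re-fetched from the designer on every use."""
    def __init__(self, fetch_policy):
        self.fetch_policy = fetch_policy   # e.g., a call to the vendor's server

    def run(self):
        policy = self.fetch_policy()       # the designer decides, after the sale
        return policy

# A later, remotely pushed policy can narrow what the artifact will do at all.
print(StandAloneKettle().run())
print(ConnectedKettle(lambda: "boil water only with authorised pods").run())
```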
- Rethinking Remote Work, Automated Technologies, Meaningful Work and the Future of Work: Making a Case for Relationality
Abstract: Remote work, understood here as a working environment different from the traditional office working space, is a phenomenon that has existed for many years. In the past, workers voluntarily opted, when they were allowed to, to work remotely rather than commuting to their traditional work environment. However, with the emergence of the global pandemic (the coronavirus, COVID-19), people were forced to work remotely to mitigate the spread of the virus. Consequently, researchers have identified some benefits and adverse effects of remote work, especially in the age of COVID-19, ranging from flexible time and environment to technostress and isolation. In this paper, using a phenomenological approach, specifically sub-Saharan African experiences, I contend that remote work in the age of advanced technologies has obscured the value of relationality due to the problem of isolation in sub-Saharan African workplaces. For sub-Saharan Africans, relationality is a prerequisite moral value for becoming a person. In addition, relationality contributes to meaningfulness in the workspace. Obscuring the value of relationality in the aforementioned locale leads to meaninglessness in the workspace. Furthermore, this paper contributes to the existing literature on meaningful work by critically showing the importance of the value of relationality as a key element that adds meaning to work in sub-Saharan Africa. PubDate: 2023-04-28
- Toward an Enactive Conception of Productive Practices: Beyond Material Agency
Abstract: We examine the question of material agency as raised in material engagement theory (MET). Insofar as MET tends to highlight the causal roles played by extra-bodily material flows in human practices, the term “material agency” does not sufficiently distinguish cases in which these flows are part of an agentive engagement from cases in which they are not. We propose an operational criterion to effect such a distinction. We claim this criterion is organizational, i.e., systemic, and not causal. In the enactive account, agency requires three organizational conditions: self-individuation, interactional asymmetry, and normativity. These conditions can have organic, sensorimotor, and sociomaterial realizations. The dance of human productive practices is indeed spread between brains, bodies, and the world, as MET claims, but it is distributed in an organized manner that involves constraints and norms at various scales. We put forward a relational and non-anthropocentric perspective toward an enactive approach to productive practices. We discuss some aspects of agentive ensembles rendered more intelligible by our proposal, including incorporation, soft assembly and non-decomposability, and the grounding of teleology in normative processes at multiple scales. In this manner, we seek to continue the dialog between MET and enactive theory, beginning with the view that a situated system must realize certain minimal organizational conditions to be called an agent. PubDate: 2023-04-26
- Neuroadaptive Technology and the Self: a Postphenomenological Perspective
Abstract: Neuroadaptive technology (NAT) is a closed-loop neurotechnology designed to enhance human–computer interaction. NAT works by collecting neurophysiological data, which are analysed via autonomous algorithms to create actions and adaptations at the user interface. This paper concerns how interaction with NAT can mediate self-related processing (SRP), such as self-awareness, self-knowledge, and agency. We begin with a postphenomenological analysis of the NAT closed loop to highlight the built-in selectivities of machine hermeneutics, i.e., autonomous chains of algorithms that convert data into an assessment of psychological states/intentions. We argue that these algorithms produce an assessment of lived experience that is quantitative, reductive, and highly simplistic. This reductive assessment of lived experience is presented to the user via feedback at the NAT interface and subsequently mediates SRP. It is argued that congruence between system feedback and SRP determines the precise character of the alterity relation between human user and system. If feedback confirms SRP, the technology is regarded as a quasi-self. If there is a disagreement between SRP and feedback from the system, NAT is perceived to be a quasi-other. We argue that the design of the user interface shapes the precise ways in which NAT can mediate SRP. PubDate: 2023-04-26
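A minimal sketch of the closed loop described above may help: simulated neurophysiological data are reduced by an autonomous rule to a coarse state label, which drives an interface adaptation and a piece of feedback shown to the user. All signal sources, thresholds, and messages below are hypothetical stand-ins, not any actual NAT system.

```python
# Minimal sketch (all names and thresholds hypothetical) of the closed loop the
# abstract analyses: neurophysiological data -> autonomous algorithm ->
# adaptation and feedback at the user interface. A real NAT would use EEG or
# other physiological sensors and trained classifiers; here both are simulated.
import random

def read_neural_signal():
    """Stand-in for a sensor read; returns a simulated engagement value in [0, 1]."""
    return random.random()

def classify_state(signal, threshold=0.5):
    """Autonomous, reductive assessment of the user's state (the 'machine hermeneutics')."""
    return "low engagement" if signal < threshold else "high engagement"

def adapt_interface(state):
    """Adaptation plus feedback shown to the user, which then mediates self-related processing."""
    if state == "low engagement":
        return "Interface simplified. Feedback to user: 'You seem distracted.'"
    return "Interface unchanged. Feedback to user: 'You seem focused.'"

for step in range(3):  # a few iterations of the closed loop
    signal = read_neural_signal()
    state = classify_state(signal)
    print(f"step {step}: signal={signal:.2f} -> {state} -> {adapt_interface(state)}")
```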
- On the Coexistence of Technopoiesis and Technopraxis: Comments on the Paper “Refining Technopoiesis: Measures and Measuring Thinking in Ancient China”
Abstract: Technopoiesis was previously identified as the juvenile phase of expression of a technique, one that spontaneously evolves towards technopraxis as soon as the prospects of practical use of the end-products overcome the cosmological resonance of the process itself. This view is re-examined in light of the data and analyses presented in “Refining Technopoiesis: Measures and Measuring Thinking in Ancient China,” in which a coexistence of the technopoiesis and technopraxis approaches to technics is suggested. PubDate: 2023-04-24
- On the Philosophy of Unsupervised Learning
Abstract: Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and ontological questions, providing data-driven tools for discovering natural kinds and distinguishing essence from contingency. This analysis goes some way toward filling the lacuna in contemporary philosophical discourse on unsupervised learning, as well as bringing conceptual unity to a heterogeneous field more often described by what it is not (i.e., supervised or reinforcement learning) than by what it is. I submit that unsupervised learning is not just a legitimate subject of philosophical inquiry but perhaps the most fundamental branch of all AI. However, an uncritical overreliance on unsupervised methods poses major epistemic and ethical risks. I conclude by advocating for a pragmatic, error-statistical approach that embraces the opportunities and mitigates the challenges posed by this powerful class of algorithms. PubDate: 2023-04-21
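For readers unfamiliar with the clustering problem mentioned in the abstract, the toy Python sketch below shows its defining feature: the algorithm receives no labels and must propose groupings (candidate "kinds") purely from the structure of the data. The two-population data set and the use of scikit-learn's KMeans are illustrative assumptions, not part of the article.

```python
# Minimal sketch (toy 2-D data, scikit-learn KMeans) of the clustering problem
# discussed in the abstract: no labels are given; the algorithm proposes
# groupings purely from the structure of the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hypothetical unlabeled populations; the algorithm is never told there are two.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[4, 4], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # the 'kinds' the method proposes
print(kmeans.labels_[:10])       # assignments made without any supervision
```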
- In Conversation with Artificial Intelligence: Aligning Language Models with Human Values
Abstract: Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values. PubDate: 2023-04-19
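The following hypothetical Python sketch illustrates, in the simplest possible terms, what it could mean to govern a conversational agent's output by explicit conversational norms: a candidate reply is released only if it satisfies declared, domain-specific constraints. The norms, the stubbed generator, and the refusal message are invented for illustration and are not the authors' proposal or any production system.

```python
# Minimal sketch (hypothetical norms, stubbed generator) of a norm-governed
# pipeline of the general kind the abstract discusses: a candidate reply is
# only released if it satisfies explicit conversational norms for the domain.

FORBIDDEN = {"medical diagnosis", "legal advice"}   # hypothetical domain-specific norm
MAX_LENGTH = 280                                    # hypothetical brevity norm

def generate_candidate(prompt: str) -> str:
    """Stub for a language model; a real agent would call a trained model here."""
    return f"Here is a general, sourced overview related to: {prompt}"

def satisfies_norms(reply: str) -> bool:
    """Check the reply against the declared norms of the discursive domain."""
    return len(reply) <= MAX_LENGTH and not any(t in reply.lower() for t in FORBIDDEN)

def respond(prompt: str) -> str:
    candidate = generate_candidate(prompt)
    return candidate if satisfies_norms(candidate) else "I can't help with that here."

print(respond("What are common causes of headaches?"))
```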
- “Walls” of Wax: Reply to Hoły-Łuczaj's Commentary, The “Other” Measure—the “Other” Technology? Heidegger and Far East Traditions
Abstract: A piece of wax—typically of a spherical shape—has been evoked occasionally as an apt example of how our engagement with the commonest everyday object may constitute a “raw” yet unexpectedly rich (and taxing) experience, from the Aristotelian discourse of Περὶ Ψυχῆς (On the Soul) to the ancient Chinese historical treatises, where the technique of making lajuan (wax-embraced silk) became a practical metaphor for the low-key transmission of classified information. Using the semi-enclosed, “walled” space—specifically, made of the material of wax from Descartes’ often misread Meditationes—as a metaphor for an unlikely shape or structure for inducing the presence of the absent (e.g., memories of personal lives or ancient technology), the paper responds to Hoły-Łuczaj’s favorable and subtle touches (Philosophy & Technology 36:19, 2023) in the Commentary on the recent study of ancient Eastern measurement and measuring thinking (Philosophy & Technology 36:22, 2023; Philosophy & Technology 34(4):785–809, 2021) by relating to the childhood experience of living with measuring tools and measuring/measured person(s) (homō-mētiēns/-mēnsus; to be distinguished from the Protagorean formula of homo-mensura or man as the measure [of everything]). PubDate: 2023-04-18
- Harm to Nonhuman Animals from AI: a Systematic Account and Framework
Abstract: This paper provides a systematic account of how artificial intelligence (AI) technologies could harm nonhuman animals and explains why animal harms, often neglected in AI ethics, should be better recognised. After giving reasons for caring about animals and outlining the nature of animal harm, interests, and wellbeing, the paper develops a comprehensive ‘harms framework’ which draws on scientist David Fraser’s influential mapping of human activities that impact on sentient animals. The harms framework is fleshed out with examples inspired by both scholarly literature and media reports. This systematic account and framework should help inform ethical analyses of AI’s impact on animals and serve as a comprehensive and clear basis for the development and regulation of AI technologies to prevent and mitigate harm to nonhumans. PubDate: 2023-04-06
- Expressive Avatars: Vitality in Virtual Worlds
Abstract: Critics have argued that human-controlled avatar interactions fail to facilitate the kinds of expressivity and social understanding afforded by our physical bodies. We identify three claims meant to justify the supposed expressive limits of avatar interactions compared to our physical interactions. First, “The Limited Expressivity Claim”: avatars have a more limited expressive range than our physical bodies. Second, “The Inputted Expressivity Claim”: any expressive avatarial behaviour must be deliberately inputted by the user. Third, “The Decoding Claim”: users must infer or figure out the expressive meaning of human-controlled avatars’ behaviour through cognitively onerous processes. With the aim of critically assessing all three claims, we analyze data collected through observations of and interviews with expert players of the avatar-based video game League of Legends. Focusing on Daniel Stern’s (2010) notion of vitality, we analyze the participants’ descriptions of seeing and interacting with other avatars during performance. Our analysis shows that the informants experience human-based avatarial interactions as qualitatively different than interactions with bots, that the informants see the movements of other players’ avatars as having different expressive styles, and that the informants actively use and manipulate this avatarial expressivity during performance. The results of our analysis, we argue, provide reasons for loosening or resisting the three claims concerning the limits of avatarial expressivity. PubDate: 2023-04-06
- The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards
Abstract: The paper examines the open problems that experts in space law shall increasingly address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new with today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper then insists on a further class of legal issues that AI systems raise only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide thresholds of evaluation according to which courts and legislators weigh the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law (stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards) will follow as a result of this twofold uniqueness of AI and of outer space. PubDate: 2023-03-31
- Refining Technopoiesis: Measures and Measuring Thinking in Ancient China
Abstract: Most recently, two distinctions—echoing the cross-disciplinary critique of the teleological and “quantitative” approach in the human arts and sciences at the expense of the “qualitative”—have been foregrounded by Amzallag (Philosophy and Technology 34, 785–809, 2021) and Crease (2011), respectively: between the modern understanding of “technology” (as technopraxis) and the “forgotten dimension/phase of technology” (called technopoiesis), and between ontic and ontological measurement. Gently resisting the denotation of technopoiesis as a juvenile phase of technological development and of “ontological measurements” as a logical and practical impossibility in the modern, mathematized metroscape, the paper reexamines the relevance of these distinctions (ontic/ontological and po[i]etic/practical, both recalling Heidegger’s “hermeneutical” critique of Husserl’s phenomenology) in non-Platonic/Aristotelian contexts and, in the process, seeks to refine the vital notion of technopoiesis by looking at the intersection of these fuzzy domains. In particular, ancient Chinese measurements and their understudied onto-poietic dimension in shifting economic-political contexts may offer an alternative approach to the otherwise elusive presence of technopoiesis and its ontological roots. Arguing that techno-onto-poiesis does not necessarily belong to the foregone Arcadian past, the paper proposes refined “signals” for recognizing the technopoietic as well as new “forms” of its presence—“interactive emergence” (the cross-stimulating agonistic interactions between techniques of different “stages”) and “poietic clusters” (poietic ideas and/or implements that survive as “cluster” into the future)—calling for future investigation of technical inventiveness (even in modern times) that reveals the process of how technopoietic elements enter the lives of technology through the least expected embodiments. PubDate: 2023-03-29
- Exploring Digital Civics: a Framework of Key Concepts to Guide Digital Civics Initiatives
Abstract: Presenting the challenges that face digital-age citizens as philosophical, as opposed to technological, this paper explores the underpinning structure of digital civics through an overview of its four foundational pillars and their informing scholarship. Through this framework and the literature that supports it, a set of five key concepts is identified as a useful guide for developing digital civics initiatives, including policy and educational interventions. In closing, this work urges the further exploration of the conceptual underpinnings of digital civics to improve future formulations of digital civics and digital citizenship endeavours. PubDate: 2023-03-27
- Meaning in Life in AI Ethics—Some Trends and Perspectives
Abstract: In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold: to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We begin our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI, on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next. PubDate: 2023-03-25