Abstract: David Wallace has argued that there is no special problem for free will in the many-worlds interpretation of quantum mechanics, beyond the well-known problem of reconciling free will with physical determinism. I argue to the contrary that, on the plausible and popular “deep self” approach to compatibilism, the many-worlds interpretation does face a special problem. It is not clear on the many-worlds picture how our actions can issue from our most central character traits, given that copies of us in other branches are certain to act differently than we do. PubDate: 2025-04-12
Abstract: The Neurodiversity (ND) movement demands that some psychiatric categories be de-pathologized. It has faced much criticism, leading some to doubt whether it can ever be reconciled with psychiatry. In this paper, we argue for a particular understanding of this central demand of the ND movement. We argue that the demand for de-pathologizing is the rejection of (paradigmatically) autism as a hypostatic abstraction; the ND movement is committed, first and foremost, to the reconceptualization of autism not as something one has, but as something one is. We distinguish between two senses of autistic identity—one pre-reflective, and one social and political—operative in this reconceptualization. This understanding of the ND movement is centrally about a rethinking of the relation between a subject and a psychiatric label. It is not about reconceptualizing psychiatric categories in terms of advantageous variations, as we believe critics fear. Our understanding of what the ND movement is asking for has the noteworthy consequence that many of the most influential criticisms of the ND movement miss the mark, and that worries about the impossibility of reconciling the movement with psychiatry are unwarranted. PubDate: 2025-04-10
Abstract: A consensus is emerging in the philosophy of science that value judgements are ineliminable from scientific inquiry. Which values should then be chosen by scientists? This paper proposes a novel answer to this question, labelled the public reason view. To place this answer on firm ground, I first redraw the boundaries of the political forum; in other words, I broaden the range of actors who have a moral duty to follow public reason. Specifically, I argue that scientific advisors to policy makers have that duty—a duty that is needed to create a barrier against any nonpublic values that scientific researchers might let enter their work. Next, I specify how scientific advisors should approach value judgements to satisfy public reason, arguing that they should work within a conception of justice that is political and reasonable in several distinct senses. Scientific researchers at large should instead communicate their value judgements by following norms of transparency that facilitate scientific advisors’ public reasoning. Finally, I contrast my account with the dominant response to the which-values question, which focuses instead on citizens’ values, demonstrating that that response shares several problematic features with the heavily criticised external conception of public reason. PubDate: 2025-04-07
Abstract: Many people believe that it is better to extend the length of a happy life than to create a new happy life, even if the total welfare is the same in both cases. Despite the popularity of this view, one would be hard-pressed to find a fully compelling justification for it in the literature. This paper develops a novel account of why and when extension is better than replacement that applies not just to persons but also to non-human animals and humanity as a whole. PubDate: 2025-04-04
Abstract: This is a contribution to a book symposium on Bias: A Philosophical Study, in which I respond to commentaries by Gabbrielle Johnson, Daniel Greco, and Selim Berker. In response to Johnson, I argue that many paradigmatic cases of bias are not best understood as involving underdetermination, and I defend my alternative account of bias against the concerns that she raises. In response to Greco, I note some of the ways in which the credibility of my claims depends on further empirical research, and I clarify my claims about introspection in order to show that they are consistent with the possibilities that he raises. In response to Berker, I offer a view about the metaphysical status of “non-pejorative” biases while resisting his suggestion that all non-evaluative uses of the term “bias” are misuses of the term. I defend my proposal that our knowledge that skeptical hypotheses are false is a case of “biased knowing” against a dilemma that he raises for that possibility. PubDate: 2025-04-04
Abstract: There has been widespread opposition to so-called essentialism in contemporary social theory. At the same time, within contemporary analytic metaphysics, the notion of essence has been revived and put to work by neo-Aristotelians. The ‘new essentialism’ of the neo-Aristotelians opens the prospect for a new social essentialism—one that avoids the problematic commitments of the ‘old essentialism’ while also providing a helpful framework for social theorizing. In this paper, I develop a neo-Aristotelian brand of essentialism about social kinds and show how it avoids the legitimate worries of social theorists. I then argue that neo-Aristotelian social kind essentialism provides a helpful framework for a wide range of projects in social ontology and feminist metaphysics, including debunking projects, descriptive inquiries, and the project of achieving social change. I further argue that an essentialist framework is more useful than a grounding framework when it comes to certain legitimate theoretical and practical purposes in social theory. PubDate: 2025-04-02
Abstract: Among theories of vagueness, supervaluationism stands out for its non–truth functional account of the logical connectives. For example, the disjunction of two atomic statements that are not determinately true or false can, itself, come out either true or indeterminate, depending on its content—a consequence several philosophers find problematic. Smith (2016) turns this point against supervaluationism most pressingly, arguing that truth functionality is essential to any adequate model of truth. But this conclusion is too strong. Here, I argue that the problem with standard forms of supervaluationism is not the failure of truth functionality per se, but rather that they lack the structural resources necessary to algorithmically assign truth values to sentences based on their respective subject matters. However, recent developments of supervaluationism, which draw upon the cognitive science framework of conceptual spaces, resolve this issue. By incorporating conceptual information directly into their model-theoretic representations of the subject matters of sentences, these newer frameworks retain sensitivity to conceptual relations while providing consistent, content-based valuations of truth. Hence, their lack of truth functionality is nothing to worry about. PubDate: 2025-03-30
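To make the abstract's non-truth-functionality point concrete, here is a minimal sketch (our illustration, not the paper's; the function supervalue and the two-precisification setups are invented for the example) of how two disjunctions whose disjuncts are all indeterminate can nevertheless receive different supervaluationist verdicts, depending on how the disjuncts covary across admissible precisifications:

```python
def supervalue(formula, precisifications):
    # Supervaluationist status: "true" if the formula holds on every
    # admissible precisification, "false" if it holds on none,
    # "indeterminate" otherwise.
    verdicts = [formula(v) for v in precisifications]
    if all(verdicts):
        return "true"
    if not any(verdicts):
        return "false"
    return "indeterminate"

# Two ways of making atoms p and q borderline. In BOTH setups, p and q
# are individually indeterminate (true on one precisification, false on
# the other), yet their disjunction gets different verdicts.
exclusive = [{"p": True, "q": False}, {"p": False, "q": True}]
covarying = [{"p": True, "q": True}, {"p": False, "q": False}]

disjunction = lambda v: v["p"] or v["q"]

print(supervalue(lambda v: v["p"], exclusive))  # indeterminate
print(supervalue(disjunction, exclusive))       # true: every precisification verifies p-or-q
print(supervalue(disjunction, covarying))       # indeterminate: one precisification falsifies it
```

Since identical input statuses (indeterminate, indeterminate) yield different outputs, no function from truth statuses alone can compute the disjunction's value; the verdict depends on content, exactly as the abstract describes.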
Abstract: The normative challenge of AI alignment centres upon what goals or values ought to be encoded in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer from critical weaknesses. On the one hand, they are incomplete: neither specification provides adequate guidance to AI systems, deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, we argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions – or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, we propose an alternative account of AI alignment that focuses on fair processes. We argue that principles that are the product of these processes are the appropriate target for alignment. This approach can meet the necessary standard of public justification, generate a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it makes sense of our intuitions about AI systems and points to a number of hitherto underappreciated ways in which an AI system may cease to be aligned. PubDate: 2025-03-30
Abstract: The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over time. This paper contrasts the conventional decisive AI x-risk hypothesis with what I call an accumulative AI x-risk hypothesis. While the former envisions an overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence, the latter suggests a different pathway to existential catastrophes. This involves a gradual accumulation of AI-induced threats such as severe vulnerabilities and systemic erosion of critical economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly undermine systemic and societal resilience until a triggering event results in irreversible collapse. Through complex systems analysis, this paper examines the distinct assumptions differentiating these two hypotheses. It is then argued that the accumulative view can reconcile seemingly incompatible perspectives on AI risks. The implications of differentiating between the two types of pathway—the decisive and the accumulative—for the governance of AI as well as long-term AI safety are discussed. PubDate: 2025-03-30
Abstract: Teitel (Mind 128:39-68, 2019) argues that the following three doctrines are jointly inconsistent: i) the doctrine that metaphysical necessity reduces to essence; ii) the doctrine that possibly something could fail to exist; and iii) the doctrine that metaphysical necessity obeys a modal logic of at least S4. This paper presents a novel solution to Teitel’s puzzle, regimented in a higher-order logical setting, which is crucially based on the idea that the putative reduction of metaphysical necessity to essence should be understood through appealing to some hyperintensional notion—such as grounding or real definition—rather than the notion of identity/identification. Moreover, it will also be shown that the proposed reductive account has a significant advantage over its rival account. PubDate: 2025-03-25
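For orientation, two of the three doctrines in Teitel's puzzle admit a standard schematic regimentation (the notation below is a sketch of ours, not Teitel's or the author's own formalism):

```latex
% (ii) Contingentism: some actual thing possibly fails to exist.
\exists x\, \Diamond\, \neg \exists y\, (y = x)

% (iii) The characteristic S4 axiom: what is necessary is necessarily necessary.
\Box \varphi \rightarrow \Box \Box \varphi
```

Doctrine (i), the reduction of metaphysical necessity to essence, is left schematic here, since the paper's point is precisely that it should be cashed out via a hyperintensional notion such as grounding or real definition rather than as an identification.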
Abstract: If the legitimate exercise of political power requires justifiability to all citizens, as John Rawls’s influential Liberal Principle of Legitimacy states, then what should we say about the legitimacy of institutions and actions that have a significant impact on the interests of future citizens? Surprisingly, this question has been neglected in the literature. This paper questions the assumption that it is only justifiability to presently existing citizens that matters, and provides reasons for thinking that legitimacy requires justifiability to future citizens as well. Further, it is argued that the presently dominant interpretation of Rawls’s principle is unable to take future citizens into account in an adequate way. Therefore, the inclusion of these citizens among those to whom justifiability is owed gives us good reasons to reject this interpretation, and to adopt a different understanding of the view. PubDate: 2025-03-22
Abstract: This paper challenges Sosa’s account of the epistemic propriety of suspension of judgment. We take the reader on a test drive through some common problem cases in epistemology and argue that Sosa makes accurate and apt suspension both too easy and too hard. PubDate: 2025-03-22
Abstract: This paper examines cases in which an individual’s misunderstanding improves the scientific community’s understanding through “corrective” processes that produce understanding from poor epistemic inputs. To highlight the unique features of valuable misunderstandings and corrective processes, we contrast them with other social-epistemological phenomena including testimonial understanding, collective understanding, Longino’s critical contextual empiricism, and knowledge from falsehoods. PubDate: 2025-03-14
Abstract: Russell (Proc Aristot Soc 11:108–128, 1911; The Problems of Philosophy, Thornton Butterworth Limited, London, 1912) argued that perceptual experience grounds a species of non-propositional knowledge, “knowledge by acquaintance,” and in recent years, this account of knowledge has been gaining traction. I defend on its basis a connection between moral and epistemic failure. I argue, first, that insufficient concern for the suffering of others can be explained in terms of an agent’s lack of acquaintance knowledge of another’s suffering, and second, that empathy improves our epistemic situation. Empathic distress approximates acquaintance with another’s suffering, and empathic agents who are motivated to help rather than disengage exhibit an important epistemic virtue: a variety of intellectual courage. A key upshot is that an independently motivated account of the structure and significance of perceptual experience is shown to provide theoretical scaffolding for understanding a famously elusive idea in ethics—namely, that the failure to help others stems from a kind of ignorance of their situation. PubDate: 2025-03-14
Abstract: AI technologies have shown remarkable capabilities in various scientific fields, such as drug discovery, medicine, climate modeling, and archaeology, primarily through their pattern recognition abilities. They can also generate hypotheses and suggest new research directions. While acknowledging AI’s potential to aid in scientific breakthroughs, the paper shows that current AI models do not meet the criteria for making independent scientific discoveries. Discovery is seen as an epistemic achievement that requires a level of competence and self-awareness that AI does not yet possess. PubDate: 2025-03-14
Abstract: This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a re-examination of core assumptions around AI safety and points to several avenues for further research. PubDate: 2025-03-06
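The expected-utility claim can be made vivid with a toy model (entirely our construction, not the paper's; the growth rate, failure probabilities, and the function expected_harm are illustrative assumptions): if safety measures lower the per-step chance of failure but thereby let a system grow more powerful before failing, and harm scales with power at failure, then the "safer" setting can carry higher expected harm.

```python
def expected_harm(p_fail, growth=1.5, steps=50):
    # Expected harm when a system fails with probability p_fail per step,
    # its power multiplies by `growth` each step it survives, and harm at
    # failure is proportional to power at that moment (a key assumption).
    harm, survive, power = 0.0, 1.0, 1.0
    for _ in range(steps):
        harm += survive * p_fail * power  # probability of failing now, times harm
        survive *= 1 - p_fail
        power *= growth
    return harm

print(expected_harm(p_fail=0.20))  # weak safety: fails early, at low power
print(expected_harm(p_fail=0.05))  # strong safety: fails later, at far higher power
```

With these illustrative numbers, the lower per-step failure probability yields much higher expected harm; this is the shape of the trade-off the argument turns on, under the stated assumptions.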
Abstract: Many philosophers believe that it counts against one morally if one is close and good friends with a bad person. Some argue that one acts badly by counting a bad person as a good friend, because such friendships carry significant moral risks. Others locate the moral badness in one’s moral psychology, suggesting that one becomes objectionably complacent by being good friends with a bad person. In this paper, I argue that none of these accounts are plausible. In fact, I propose that the starting intuition, that there is something pro tanto morally bad in being close and good friends with a bad person, does not track ethical reality. A person’s list of friends is not, even in principle, informative about their moral character. I also diagnose why we nonetheless have this mistaken intuition. I propose that friendships are fragmented in two crucial respects. Once we observe these fragmentations, the initially mistaken intuition disappears completely. PubDate: 2025-03-06
Abstract: I provide an overview of some of the main ideas presented in my book Bias: A Philosophical Study. PubDate: 2025-03-06
Abstract: Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of Human-AI cooperation in which AI agents always defer to humans because they are uncertain about our preferences. I explain two reasons why AI agents might not defer. First, AI agents might not value learning. Second, even if AI agents value learning, they might not be certain to learn our actual preferences. PubDate: 2025-02-26
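A minimal numerical sketch of the deference comparison at issue (our simplification, not the model as published; the distribution and variable names are assumptions): a robot uncertain about the human's utility u for a proposed action compares acting outright with deferring to a rational human who vetoes the action whenever u < 0. Deference wins precisely because of the uncertainty; with certainty, or with a human the robot does not expect to reveal its actual preferences, the incentive disappears, matching the abstract's two reasons an agent might not defer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Robot's belief about the human's utility u for the proposed action.
u = rng.normal(loc=0.5, scale=2.0, size=100_000)

act_alone = max(u.mean(), 0.0)         # act (or do nothing) without asking
defer     = np.maximum(u, 0.0).mean()  # rational human vetoes whenever u < 0

print(act_alone, defer)  # defer >= act_alone, since E[max(u,0)] >= max(E[u],0)

# With no uncertainty the two options coincide and deference has no value:
# an agent that does not expect to learn anything gains nothing by asking.
u_certain = np.full(100_000, 0.5)
print(max(u_certain.mean(), 0.0), np.maximum(u_certain, 0.0).mean())  # equal
```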