Abstract: This paper reconsiders what implications quantum decoherence has for Karen Barad’s agential realism. In contrast with the recent claim of Thomas Everth and Laura Gurney (2022), this paper argues that decoherence supports rather than defeats the holist, relational ontology of agential realism. Indeed, decoherence offers an explanation for how a quantum system can remain entangled and superposed in principle while it nevertheless in practice appears classical to a local observer. Decoherence thus shows why the appearance of classicality is not an objection to the underlying ontology being that of agential realism, in accordance with Barad’s repeated insistence that we should not mistake principle for practice. Whether users of agential realism in social theory should be encouraged by this is another matter, on which this paper does not take a stance. As an ontology, however, agential realism is vindicated. PubDate: 2023-05-25
Abstract: Symmetry-based inferences have permeated many discussions in philosophy of physics and metaphysics of science. It is claimed that symmetries in our physical theories would allow us to draw metaphysical conclusions about the world, a view that I call ‘symmetry inferentialism’. This paper is critical of this view. I claim that (a) it assumes a philosophically questionable characterization of the relevant validity domain of physical symmetries, and (b) it overlooks a distinction between two opposing ways through which relevant physical symmetries become established. My conclusion is that symmetry inferentialism loses persuasive force when these two points are taken into consideration. PubDate: 2023-05-15
Abstract: This paper aims to develop an account of the pursuitworthiness of models based on a view of models as epistemic tools. It is motivated by the historical question of why, in the 1960s, when many scientists found Quantitative Structure–Activity Relationship (QSAR) models unattractive, some pharmaceutical scientists nevertheless pursued them despite the lack of potential for theoretical development or empirical success. This paper addresses this question by focusing on how models perform their heuristic functions as epistemic tools rather than as potential theories. I argue that models perform their heuristic function by “constructing” phenomena from data, in the sense that they allow the model users who interact with the medium of the models to recognise the phenomena as such. The constructed phenomena assist model users in identifying which conditional hypotheses focused on low-level regularities concerning entities such as chemical compounds are more “testworthy,” a concept that links the costs associated with hypothesis testing with the fertility of the hypothesis. PubDate: 2023-05-06
Abstract: I investigate the intrinsic/extrinsic nature of stemness in muscle stem cells (MSC) by relying on recent research on quiescence, with the aim of shedding light on the nature of dispositions and deriving some consequences about stem cells. First, I argue that the study of quiescence is the best available way to establish any claim about the intrinsicness/extrinsicness of stemness, at least in some stem cells. Drawing on that, I argue that MSC’s stem capacities result from the combination of intrinsic cues plus extrinsic factors from the stem cell niche, making stemness an extrinsic disposition in MSC. Importantly, it is shown that the niche allows the instantiation of stemness in MSC by acting as a masker of its manifestation. This shows that stemness is, at least in MSC, what I call an extrinsically structurally masked disposition (SMD); that is, a disposition whose instantiation requires, as a condition of possibility, the interaction between the bearer and a masker. Finally, I conclude by suggesting some potential consequences of this observation for the philosophical study of dispositions and for stem cell research. PubDate: 2023-04-26
Abstract: I examine error diagnosis (the diagnosis of model-model disagreement) in climate model intercomparisons, including its difficulties, fruitful examples, and prospects for streamlining error diagnosis. I suggest that features of climate model intercomparisons pose a more significant challenge for error diagnosis than do features of individual model construction and complexity. Such features of intercomparisons include, e.g., the number of models involved, how models from different institutions interrelate, and what scientists know about each model. By considering numerous examples in the climate modeling literature, I distill general strategies (e.g., employing physical reasoning and using dimension reduction techniques) used to diagnose model error. Based on these examples, I argue that an error repertoire could be beneficial for improving error diagnosis in climate modeling, although constructing one faces several difficulties. Finally, I suggest that the practice of error diagnosis demonstrates that scientists have a tacit-yet-working understanding of their models which has been under-appreciated by some philosophers. PubDate: 2023-03-29
Abstract: In recent work, Jeffrey Bub and Itamar Pitowsky, in the framework of an information-theoretic view of quantum mechanics, claimed, first, that the measurement problem in its ordinary formulation in effect comprises two measurement problems (simply called the big and the small measurement problems), with different degrees of relevance, and, second, that the analysis of a quantum measurement is a problem only if certain further assumptions – taken by Pitowsky and Bub to be unnecessary ‘dogmas’ – are made. Here I critically discuss this unconventional stance on the measurement problem and argue that the Bub-Pitowsky arguments are inconclusive, mainly because they rely on an unwarranted extension to the quantum realm of a distinction concerning the foundations of special relativity, a distinction which is itself rather controversial. PubDate: 2023-03-29
Abstract: Extrapolating causal effects from experiments to novel populations is a common practice in evidence-based policy, development economics, and other social science areas. Drawing on experimental evidence of policy effectiveness, analysts aim to predict the effects of policies in new populations, which might differ importantly from experimental populations. Existing approaches have made progress in articulating the sorts of similarities one needs to assume to enable such inferences. It is also recognized, however, that many of these assumptions will remain surrounded by significant uncertainty in practice. Unfortunately, the existing literature says little about how analysts may articulate and manage these uncertainties. This paper aims to make progress on these issues. First, it considers several existing ideas that bear on issues of uncertainty, elaborates the challenges they face, and extracts some useful rationales. Second, it outlines a novel approach, called the support graph approach, that builds on these rationales and allows analysts to articulate and manage uncertainty in extrapolation in a systematic and unified way. PubDate: 2023-03-22 DOI: 10.1007/s13194-023-00520-1
Abstract: Climate scientists have proposed two methods to link extreme weather events and anthropogenic climate forcing: the probabilistic and the storyline approach. Proponents of the first approach have raised the criticism that the storyline approach could be overstating the role of anthropogenic climate change. This issue has important implications because, in certain contexts, decision-makers might seek to avoid information that overstates the effects of anthropogenic climate change. In this paper, we explore two research questions. First, whether and to what extent the storyline approach overstates the effects of anthropogenic climate change. Second, whether the objections offered against the storyline approach constitute good reasons to prefer the probabilistic approach. Concerning the first question, we show that the storyline approach does not necessarily overstate the effects of climate change, and particularly not for the reasons offered by proponents of the probabilistic approach. Concerning the second question, we show, independently, that the probabilistic approach faces the same or very similar objections as those raised against the storyline approach, owing to the lack of robustness of climate models and the way events are commonly defined when applying the probabilistic approach. These results suggest that these objections might not constitute good reasons to prefer the probabilistic approach over the storyline approach. PubDate: 2023-03-11 DOI: 10.1007/s13194-023-00516-x
Abstract: Side effects are ubiquitous in medicine and they often play a role in treatment decisions for patients and clinicians alike. Philosophers and health researchers often use side effects to illustrate issues with contemporary medical research and practice. However, technical definitions of ‘side effect’ differ among health authorities. Thus, determining the side effects of an intervention can differ depending on whose definition we assume. Here I review some of the common definitions of side effect and highlight their issues. In response, I offer an account of side effects as jointly (i) unintended and (ii) effects due to the causal capacities or invariances of an intervention. I discuss (i) by examining the intentions or reasons behind therapeutic interventions, and I discuss (ii) by appealing to a manipulationist model of causation. The analysis here highlights that side effects are conceptually distinct from related outcomes like adverse events, adverse drug reactions, and placebo effects. The analysis also allows for reflection on the utility of ‘side effect’ as a technical term in medical research and practice. PubDate: 2023-03-11 DOI: 10.1007/s13194-023-00519-8
Abstract: Recent work in the philosophy of scientific concepts has seen the simultaneous revival of operationalism and development of patchwork approaches to scientific concepts. We argue that these two approaches are natural allies. Both recognize an important role for measurement techniques in giving meaning to scientific terms. The association of multiple techniques with a single term, however, raises the threat of proliferating concepts (Hempel, 1966). While contemporary operationalists have developed some resources to address this challenge, these resources are inadequate to account for the full range of complex behaviors of scientific concepts. We show how the patchwork approach’s repertoire of inter-patch relations can expand the resources available to the operationalist. We focus on one especially important type of inter-patch relation: sharing a general reasoning strategy. General reasoning strategies serve two important functions: (1) they bind together distinct patches of scientific concepts, and (2) they provide normative guidance for extending concepts to new domains. PubDate: 2023-03-07 DOI: 10.1007/s13194-023-00515-y
Abstract: In this paper I shall defend the idea that there is an abstract and general core meaning of objectivity, and that what are seen as a variety of concepts or conceptions of objectivity are in fact criteria of, or means to achieve, objectivity. I shall then discuss the ideal of value-free science and its relation to the objectivity of science; its status can at best be that of a criterion of, or means for, objectivity. Given this analysis, we can then turn to the problem of inductive risk. Do the value judgements regarding inductive risk really pose a threat to the objectivity of science? I claim that this is not the case because they do not lower the thresholds scientifically postulated for objectivity. I shall conclude the paper with a discussion of under-appreciated influences of values on science, which indeed pose a serious threat to the objectivity of some scientific disciplines. PubDate: 2023-03-07 DOI: 10.1007/s13194-023-00518-9
Abstract: This paper develops an approach to the scientific realism debate that has three main features. First, our approach admits multiple criteria of reality, i.e., criteria that, if satisfied, warrant belief in the reality of hypothetical entities. Second, our approach is experiment-based in the sense that it focuses on criteria that are satisfied by experiments as opposed to theories. Third, our approach is local in the sense that it focuses on the reality of particular kinds of entities. We apply this approach to a case that many philosophers have debated, namely, Jean Perrin’s work on atoms and molecules. We provide a novel account by arguing that Perrin’s work warranted a minimal belief in the reality of atoms and molecules as unobservable, discrete particles by satisfying a criterion of reality that we call experimental determination of number per unit. By doing so, he confirmed Avogadro’s hypothesis, but he did not confirm other key constituents of the atomic theories involved. We argue that our account of Perrin’s work is preferable to several other accounts, and we use this as a reason in support of our approach to the realism debate more generally. PubDate: 2023-03-01 DOI: 10.1007/s13194-023-00517-w
Abstract: This paper aims to distinguish two main types of coarse graining and to reveal the relationship between the notions of coarse graining and emergence. In physics, some forms of coarse graining seem to be indispensable for exhibiting a physical property, while others merely change our descriptions of the system. To clarify the notion of coarse graining, this article investigates the cases of the renormalization group method and irreversibility, both of which have been important topics in philosophy of science, and the case of the rigid body in classical mechanics, which is an elementary case involving coarse graining. The case studies reveal the distinction between substantial and mere coarse graining. This distinction clarifies the relationships between the notions of coarse graining and emergence and further provides some implications for issues about emergence. PubDate: 2023-02-22 DOI: 10.1007/s13194-023-00513-0
Abstract: In the context of astrophysical modeling at the solar system scale, we investigate the modalities implied by taking into account the different levels of detail at which phenomena can be considered. In particular, by framing the analysis in terms of the how-possibly/how-actually distinction, we address the debated question of whether the degree of plausibility is tightly linked to the degree of detail. On the basis of concrete examples, we argue that, in the astrophysical context examined as well, this is not necessarily the case. PubDate: 2023-02-21 DOI: 10.1007/s13194-023-00511-2
Abstract: Reactivity in qualitative data collection occurs when a researcher generates data about a situation with reactivity, that is, a situation in which the ongoing research affects the research participants such that they, say, diverge from their routines when the researcher is present, or tell the researcher what they think she wants to hear. In qualitative research, there are two basic approaches to reactivity. The traditional position maintains that data should ideally be collected in situations without any reactivity. In other words, good data are reactivity free. By contrast, the more recent view holds that data from situations with reactivity are fine as long as the researcher is aware of the occurring reactivity so that she can take it into account when interpreting her data. In this fashion, good data are reactivity transparent. In this paper, I first spell out and defend the more recent approach to reactivity. I argue that qualitative data are reactivity transparent when conjoined with true reactivity assumptions and that, thus supplemented, data are informative about social life independently of its being studied. Next, I examine various issues raised by the requirement to put forth true reactivity assumptions. Lastly, I use my discussion of reactivity transparency as a basis for providing a framework for thinking about good qualitative data. PubDate: 2023-02-21 DOI: 10.1007/s13194-023-00514-z
Abstract: There has been a lively debate in the philosophy of science over predictivism: the thesis that successfully predicting a given body of data provides stronger evidence for a theory than merely accommodating the same body of data. I argue for a very strong version of the thesis using statistical results on the so-called “model selection” problem. This is the problem of finding the optimal model (family of hypotheses) given a body of data. The key idea that I will borrow from the statistical literature is that the level of support a hypothesis, H, receives from a body of data, D, is inversely related to the number of adjustable parameters of the model from which H was constructed. I will argue that when D is not essential to the design of H (i.e., when it is predicted), the model to which H belongs has fewer adjustable parameters than when D is essential to the design of H (when it is accommodated). This, I argue, provides us with an argument for a very strong version of predictivism. PubDate: 2023-02-21 DOI: 10.1007/s13194-023-00512-1
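A hedged illustration (not part of the abstract above, which does not name a specific model-selection criterion): Akaike's information criterion (AIC) is one standard statistical result that makes the stated idea precise, namely that estimated support decreases as the number of adjustable parameters grows. For a model $M$ with $k_M$ adjustable parameters and maximized likelihood $\hat{L}_M$ for the data $D$,
\[ \mathrm{AIC}(M) = 2k_M - 2\ln \hat{L}_M , \]
where lower AIC indicates greater estimated support. On this reading, when $D$ is essential to the design of $H$ (accommodation), $H$ is drawn from a family with a larger $k_M$ and so incurs a larger penalty than when $D$ is merely predicted.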
Abstract: In this paper, I argue that analyzing pursuitworthiness in science requires that we study possible futures of science. The merits of different criteria of pursuitworthiness need to be assessed against scenarios of the future of science. Different criteria recognize and ignore different scenarios. As a consequence, different criteria enable us to manage different future possibilities. While it might be impossible to predict the future of science, there are still many interesting things we can say about the possible futures of science. We can construct scenarios of the future of science on the basis of philosophical accounts of science. I point out that the topic of pursuitworthiness is not the only topic that can be approached by connecting philosophy of science with the construction of scenarios. PubDate: 2023-01-21 DOI: 10.1007/s13194-023-00510-3
Abstract: The notion of natural kinds has been widely criticized in philosophy of science but also appears indispensable for philosophical engagement with classificatory practices. Rather than addressing this tension through a new definition of “natural kind”, this article suggests materiality as a substitute for naturalness in philosophical debates about scientific classification. It is argued that a theory of material kinds provides an alternative and more inclusive entry point for analyzing classificatory practices, which is specified through an account of “restricted malleability” of materiality and further analyzed as (1) gradual, (2) multi-dimensional, (3) scalable, (4) interactive, and (5) purpose sensitive. PubDate: 2023-01-21 DOI: 10.1007/s13194-023-00509-w
Abstract: Philosophers of science typically focus on the epistemic performance of scientific models when evaluating them. Analysing the effects that models may have on the world has typically been the purview of sociologists of science. We argue that the reactive (or “performative”) effects of models should also figure in model evaluations by philosophers of science. We provide a detailed analysis of how models in financial economics created the impetus for the growing importance of the phenomenon of “passive investing” in financial markets. Considering this case motivates the position that we call contextualism about model evaluation, or model contextualism for short. Model contextualism encompasses standard analyses of the epistemic performance of the model, but also includes their reactive aspects. It entails identifying the epistemic and contextual import of the model, the ways in which a model can engender change in the world (which we call the channels of transmission), and the interactions between the epistemic and reactive import of a model. PubDate: 2023-01-21 DOI: 10.1007/s13194-022-00506-5
Abstract: The main purpose of this paper is to refute the ‘methodological continuity’ argument supporting epistemic realism in metaphysics. This argument aims to show that scientific realists have to accept that metaphysics is as rationally justified as science given that they both employ inference to the best explanation, i.e. that metaphysics and science are methodologically continuous. I argue that the reasons given by scientific realists as to why inference to the best explanation (IBE) is reliable in science do not constitute a reason to believe that it is reliable in metaphysics. The justification of IBE in science and the justification of IBE in metaphysics are two distinct issues with only superficial similarities, and one cannot rely on one for the other. This becomes especially clear when one analyses the debate about the legitimacy of IBE that has taken place between realists and empiricists. The metaphysician seeking to piggyback on the realist defense of IBE in science via the methodological continuity argument presupposes that the defense is straightforwardly applicable to metaphysics. I will argue that it is, in fact, not. The favored defenses of IBE by scientific realists make extensive use of empirical considerations, predictive power and inductive evidence, all of which are paradigmatically absent in the metaphysical context. Furthermore, even if the realist were to concede the methodological continuity argument, I argue that the metaphysician fails to offer any agreed-upon conclusions resulting from its application in metaphysics. PubDate: 2023-01-11 DOI: 10.1007/s13194-022-00505-6