The British Journal for the Philosophy of Science
Table of Contents: Vol. 71, No. 4 (2020)
- Editorial
Pages: 1155–1156
Abstract: Back in the day, when I was working on my PhD in HPS at Chelsea College, I shared an office with Don Gillies during his time as Editor of the journal. Those were the days of hard-copy submissions that would arrive by post and pile up on Don's desk in brown envelopes, to then be copied again and mailed out to referees. Don would occasionally ask me to look over the physics-based papers, just to check whether they were kosher or crazy, and I had an up-close look at what editing a journal actually involved. That didn't put me off applying to be Co-Editor-in-Chief with Michela Massimi ten years ago, but the 'shape' of the job had clearly changed by then. First of all, Oxford University Press had recently installed ScholarOne, the electronic 'workflow management system', so no more dodgy photocopies in brown envelopes. Everything now, from submission to selection of referees to the final decision, was to be handled online. Fortunately, we managed to hire Beth Hannon, who not only had an HPS background but had also worked for OUP, and she has done an absolutely outstanding job in managing the journal on a day-to-day basis, wrestling with The System and keeping everything on track, including the other editors. More than that, she has also expanded the reach of the journal, not only through the ubiquitous social media forums we've collaborated on, but also by means of new developments such as our blog, 'Auxiliary Hypotheses', our separate 'BJPS Reviews of Books' webpage and, recently, 'Short Reads', where authors summarize their soon-to-be-published papers.
PubDate: Mon, 17 Aug 2020 00:00:00 GMT
DOI: 10.1093/bjps/axaa010
Issue No: Vol. 71, No. 4 (2020)
- Understanding Cultural Fidelity
Authors: Charbonneau M.
Pages: 1209–1233
Abstract: A leading idea of cultural evolutionary theory is that for human cultures to undergo evolutionary change, cultural transmission must generally serve as a high-fidelity copying process. In analogy to genetic inheritance, the high fidelity of human cultural transmission would act as a safeguard against the transformation and loss of cultural information, thus ensuring both the stability and longevity of cultural traditions. Cultural fidelity would also serve as the key difference-maker between human cumulative cultures and non-human non-cumulative traditions, explaining why only we humans, with our uniquely high-fidelity transmission capabilities, are capable of evolving and sustaining complex traditions. But what does it mean for cultural transmission to be more or less faithful? This article has two objectives. The first is to clarify the meaning and uses of the concept of cultural fidelity and to evaluate their explanatory import. I argue that cultural evolutionists use several fidelity concepts (episodic, propensity, and generalized fidelity), concepts that I will define and clarify. The second objective is to challenge the claim that a general notion of fidelity (generalized fidelity) is both explanatorily meaningful and operationalizable. I conclude that if fidelity is to serve as an explanation of the key differences between human cumulative cultures and non-human non-cumulative traditions, then the concept will have to be redesigned and rely on different assumptions.
Contents:
1 Introduction
2 Generalized Fidelity
  2.1 The need for an explanatory concept of fidelity
  2.2 Three conditions for a general notion of cultural fidelity
3 Episodic Fidelity
  3.1 Measuring episodic fidelity
  3.2 The relevance problem
  3.3 Explanatory pluralism
  3.4 The incommensurability of fidelity metrics
4 Propensity Fidelity
5 Fidelity as an Explanatory Concept
  5.1 Explaining cultural stability
  5.2 Explaining human distinctiveness
6 Conclusion
PubDate: Sun, 14 Apr 2019 00:00:00 GMT
DOI: 10.1093/bjps/axy052
Issue No: Vol. 71, No. 4 (2019)
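The 'incommensurability of fidelity metrics' flagged in Section 3.4 can be made vivid with a toy computation. The sketch below is an editorial illustration, not the article's: two generic similarity metrics (character-level similarity via Python's difflib, and word-set overlap) rank the same two transmission episodes in opposite orders, so neither delivers a metric-neutral verdict about which copy was more faithful.

```python
# Two toy fidelity metrics for a transmitted utterance (illustrative only;
# the article's fidelity concepts are richer than any single metric).
from difflib import SequenceMatcher

def char_fidelity(model, copy):
    # Character-level similarity: sensitive to word order and spelling.
    return SequenceMatcher(None, model, copy).ratio()

def word_fidelity(model, copy):
    # Word-set overlap (Jaccard): blind to order, sensitive to vocabulary.
    a, b = set(model.split()), set(copy.split())
    return len(a & b) / len(a | b)

model = "the quick brown fox jumps"
episode1 = "the quick brown fox jumped"   # tiny spelling change
episode2 = "fox jumps the quick brown"    # same words, reordered

for name, metric in [("char", char_fidelity), ("word", word_fidelity)]:
    print(name, round(metric(model, episode1), 2), round(metric(model, episode2), 2))
# The two metrics disagree about which episode is the higher-fidelity copy.
```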
- Representing the World with Inconsistent Mathematics
Authors: McCullough-Benner C.
Pages: 1331–1358
Abstract: According to standard accounts of mathematical representations of physical phenomena, positing structure-preserving mappings between a physical target system and the structure(s) picked out by a mathematical theory is essential to such representations. In this paper, I argue that these accounts fail to give a satisfactory explanation of scientific representations that make use of inconsistent mathematical theories and present an alternative, robustly inferential account of mathematical representation that provides not just a better explanation of applications of inconsistent mathematics, but also a compelling explanation of mathematical representations of physical phenomena in general.
Contents:
1 Inconsistent Mathematics and the Problem of Representation
2 The Early Calculus
3 Mapping Accounts and the Early Calculus
  3.1 Partial structures
  3.2 Inconsistent structures
  3.3 Related total consistent structures
4 A Robustly Inferential Account of the Early Calculus in Applications
  4.1 The robustly inferential conception of mathematical representation
  4.2 The robustly inferential conception and inconsistent mathematics
  4.3 The robustly inferential conception and mapping accounts
5 Beyond Inconsistent Mathematics
PubDate: Sat, 24 Aug 2019 00:00:00 GMT
DOI: 10.1093/bjps/axz001
Issue No: Vol. 71, No. 4 (2019)
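For readers wondering why the early calculus counts as inconsistent mathematics, the textbook illustration (added here for context, not drawn from the article) is the Leibnizian derivative computation, which treats the infinitesimal dx as nonzero when dividing and as zero when discarding:

```latex
\frac{(x + dx)^2 - x^2}{dx}
  = \frac{2x\,dx + (dx)^2}{dx}   % division step: requires dx \neq 0
  = 2x + dx
  = 2x                           % discarding step: sets dx = 0
```

Despite resting on the jointly inconsistent assumptions dx ≠ 0 and dx = 0, reasoning of this form reliably supported correct physical applications, which is the kind of case the article argues mapping accounts struggle to capture.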
- Robustness and Idealizations in Agent-Based Models of Scientific Interaction
Authors: Frey D.; Šešelja D.
Pages: 1411–1437
Abstract: The article presents an agent-based model (ABM) of scientific interaction aimed at examining how different degrees of connectedness of scientists impact their efficiency in knowledge acquisition. The model is built on the basis of Zollman's ([2010]) ABM by changing some of its idealizing assumptions that concern the representation of the central notions underlying the model: the epistemic success of the rivalling scientific theories, scientific interaction, and the assessment in view of which scientists choose theories to work on. Our results suggest that whether and to what extent the degree of connectedness of a scientific community impacts its efficiency is a highly context-dependent matter, since different conditions yield strikingly different results. More generally, we argue that the simplicity of ABMs may come at a price: the requirement to run extensive robustness analysis before we can specify the adequate target phenomenon of the model.
Contents:
1 Introduction
2 Zollman's 2010 Model
3 Static versus Dynamic Epistemic Success
  3.1 Introducing the notion of dynamic epistemic success
  3.2 Implementation and results for the basic setup
4 Critical Interaction
  4.1 Introducing critique
  4.2 Implementation and results
5 Inertia of Inquiry
  5.1 Introducing rational inertia
  5.2 Implementation and results
6 Threshold Below Which Theories Are Equally Promising
  6.1 An inquiry that is even more difficult
  6.2 Implementation and results
7 Discussion
8 Conclusion
PubDate: Tue, 09 Jul 2019 00:00:00 GMT
DOI: 10.1093/bjps/axy039
Issue No: Vol. 71, No. 4 (2019)
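Since the article builds on Zollman's ([2010]) ABM, a minimal runnable sketch of that baseline setup may help orient readers. Everything below is an assumption-laden reconstruction of the standard model (Beta-Bernoulli learners, a two-armed bandit with a known baseline, cycle versus complete communication networks), not the authors' implementation or parameter choices:

```python
# Zollman-style network bandit model: agents test a new theory, share
# results with network neighbours, and Beta-update their credences.
import random

def neighbours(i, n, complete):
    # Who agent i shares results with: everyone, or self plus ring neighbours.
    return range(n) if complete else [(i - 1) % n, i, (i + 1) % n]

def run(n=10, p_old=0.5, p_new=0.55, pulls=50, rounds=500, complete=False):
    # Each agent holds Beta(a, b) credences about the new theory's unknown
    # success rate; the old theory's rate p_old is treated as known.
    a = [random.uniform(1, 4) for _ in range(n)]
    b = [random.uniform(1, 4) for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            if a[i] / (a[i] + b[i]) > p_old:          # work on the new theory
                s = sum(random.random() < p_new for _ in range(pulls))
                results.append((i, s))
        for i, s in results:                          # evidence travels along edges
            for j in neighbours(i, n, complete):
                a[j] += s
                b[j] += pulls - s
    return sum(ai / (ai + bi) > p_old for ai, bi in zip(a, b))

random.seed(3)
for complete in (False, True):
    k = run(complete=complete)
    print("complete" if complete else "cycle   ", f"{k}/10 agents favour the better theory")
```

The article's interventions (dynamic epistemic success, critical interaction, rational inertia) would replace pieces of this skeleton, which is precisely why its robustness results matter.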
- How to Beat Science and Influence People: Policymakers and Propaganda in Epistemic Networks
Authors: Weatherall J.; O'Connor C.; Bruner J.
Pages: 1157–1186
Abstract: In their recent book, Oreskes and Conway ([2010]) describe the 'tobacco strategy', which was used by the tobacco industry to influence policymakers regarding the health risks of tobacco products. The strategy involved two parts, consisting of (i) promoting and sharing independent research supporting the industry's preferred position and (ii) funding additional research, but selectively publishing the results. We introduce a model of the tobacco strategy, and use it to argue that both prongs of the strategy can be extremely effective—even when policymakers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the tobacco strategy is particularly successful. In addition, we show how journalists engaged in 'fair' reporting can inadvertently mimic the effects of industry on public belief.
Contents:
1 Introduction
2 Epistemic Network Models
3 Selective Sharing
4 Biased Production
5 Journalists as Unwitting Propagandists
6 Conclusion
Appendix
PubDate: Thu, 23 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy062
Issue No: Vol. 71, No. 4 (2018)
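The 'selective sharing' prong (Section 3) lends itself to a very small simulation. The sketch below is an editorial toy under assumed numbers, not the authors' model: every industry study is honestly run, but only favourable results are forwarded, and a policymaker who Bayes-updates on the forwarded studies as if they were unselected ends up confidently wrong.

```python
# Toy model of selective sharing (assumed parameters, not the article's):
# the true success rate of the industry-favoured hypothesis is 0.45, each
# study runs 50 trials, and only studies with >= 25 successes are shared.
import random

def study(p=0.45, trials=50):
    return sum(random.random() < p for _ in range(trials))

random.seed(0)
a, b = 1, 1                      # policymaker's uniform Beta prior
shared = 0
for _ in range(200):             # 200 honestly conducted industry studies
    s = study()
    if s >= 25:                  # ...of which only the favourable are shared
        a, b = a + s, b + 50 - s
        shared += 1
print(f"{shared}/200 studies shared; "
      f"policymaker's estimate {a / (a + b):.3f} vs true rate 0.45")
```

No data are fabricated anywhere in the toy; the bias comes entirely from the filter on what gets communicated, which is the point of the prong.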
- Defending a Risk Account of Scientific Objectivity
Authors: Koskinen I.
Pages: 1187–1207
Abstract: When discussing scientific objectivity, many philosophers of science have recently focused on accounts that can be applied in practice when assessing the objectivity of something. It has become clear that in different contexts, objectivity is realized in different ways, and the many senses of objectivity recognized in the recent literature seem to be conceptually distinct. I argue that these diverse 'applicable' senses of scientific objectivity have more in common than has thus far been recognized. I combine arguments from philosophical discussions of trust, from negative accounts of objectivity, and from the recent literature on epistemic risks. When we call X objective, we endorse it: we say that we rely on X, and that others should do so too. But the word 'objective' is reserved for a specific type of reliance: it is based on the belief that important epistemic risks arising from our imperfections as epistemic agents have been effectively averted. All the positive senses of objectivity identify either some risk of this type, or some efficient strategy for averting one or more such risks.
Contents:
1 Introduction
2 Applicable Notions of Objectivity
3 Reliance Instead of Trust
4 Epistemic Risks Arising from Our Imperfections as Epistemic Agents Rather Than Subjectivity
5 The Risk Account of Scientific Objectivity
6 Is This Useful?
7 Conclusion
PubDate: Fri, 03 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy053
Issue No: Vol. 71, No. 4 (2018)
- Whence the Effectiveness of Effective Field Theories?
Authors: Franklin A.
Pages: 1235–1259
Abstract: Effective quantum field theories (EFTs) are effective insofar as they apply within a prescribed range of length-scales, but within that range they predict and describe with extremely high accuracy and precision. The effectiveness of EFTs is explained by identifying the features—the scaling behaviour of the parameters—that lead to effectiveness. The explanation relies on distinguishing autonomy with respect to changes in microstates (autonomy_ms) from autonomy with respect to changes in microlaws (autonomy_ml), and relating these, respectively, to renormalizability and naturalness. It is claimed that the effectiveness of EFTs is a consequence of each theory's autonomy_ms rather than its autonomy_ml.
Contents:
1 Introduction
2 Renormalizability
  2.1 Explaining renormalizability
3 Naturalness
  3.1 An unnatural but renormalizable theory
4 Two Kinds of Autonomy
5 The Effectiveness of Effective Quantum Field Theories
6 Conclusion
PubDate: Fri, 03 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy050
Issue No: Vol. 71, No. 4 (2018)
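The 'scaling behaviour of the parameters' at issue can be recalled in one line of standard EFT dimensional analysis (textbook background, not the article's own derivation). In four spacetime dimensions, an operator of mass dimension Δ_i enters the effective Lagrangian suppressed by the cutoff Λ:

```latex
\mathcal{L}_{\text{eff}}
  = \sum_i \frac{c_i}{\Lambda^{\Delta_i - 4}}\,\mathcal{O}_i ,
\qquad
\text{size of } \mathcal{O}_i\text{'s effects at energy } E
  \;\sim\; c_i \left(\frac{E}{\Lambda}\right)^{\Delta_i - 4} .
```

Operators with Δ_i > 4 are thus suppressed by powers of E/Λ at low energies, which is the standard sense in which low-energy physics is insensitive to microphysical detail; how that insensitivity divides between renormalizability and naturalness is what the article's two autonomy notions disentangle.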
- Beyond Explanation: Understanding as Dependency Modelling
Authors: Dellsén F.
Pages: 1261–1286
Abstract: This article presents and argues for an account of objectual understanding that aims to do justice to the full range of cases of scientific understanding, including cases in which one does not have an explanation of the understood phenomenon. According to the proposed account, one understands a phenomenon just in case one grasps a sufficiently accurate and comprehensive model of the ways in which it or its features are situated within a network of dependence relations; one's degree of understanding is proportional to the comprehensiveness and accuracy of such a model. I compare this account with accounts of scientific understanding that explicate understanding in terms of having an explanation of the understood phenomenon. I discuss three distinct types of cases in which scientific understanding does not amount to possessing an explanation of any kind and argue that the proposed model-based account can accommodate these cases while still retaining a strong link between understanding and explanation.
Contents:
1 Introduction
2 Objectual Understanding
3 Understanding as Dependency Modelling
4 Explanatory Accounts
5 Explanatory Bruteness
6 Explanatory Targetedness
7 Explanatory Disconnectedness
8 Conclusion
PubDate: Thu, 02 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy058
Issue No: Vol. 71, No. 4 (2018)
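The account's quantitative gloss, that one's degree of understanding is proportional to the comprehensiveness and accuracy of one's dependency model, invites a toy formalization. The sketch below is entirely an editorial illustration, not the article's proposal: dependence relations are modelled as directed edges, accuracy as precision, and comprehensiveness as recall against the true network.

```python
# Toy formalization (illustrative, not the article's): compare an agent's
# grasped dependency model against the true network of dependence relations.
TRUE = {("altitude", "air_pressure"),
        ("air_pressure", "boiling_point"),
        ("solute_concentration", "boiling_point")}

GRASPED = {("altitude", "air_pressure"),
           ("air_pressure", "boiling_point"),
           ("altitude", "boiling_point")}   # one spurious edge, one missing

correct = TRUE & GRASPED
accuracy = len(correct) / len(GRASPED)           # share of the model that is right
comprehensiveness = len(correct) / len(TRUE)     # share of the network it captures
print(f"accuracy={accuracy:.2f}, comprehensiveness={comprehensiveness:.2f}, "
      f"degree of understanding ~ {accuracy * comprehensiveness:.2f}")
```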
- Clocks and Chronogeometry: Rotating Spacetimes and the Relativistic Null Hypothesis
Authors: Menon T.; Linnemann N.; Read J.
Pages: 1287–1317
Abstract: Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays in general do not traverse null geodesics. Having presented this result, we discuss its philosophical significance, both for the clock hypothesis (and, in particular, a recent purported proof thereof for light clocks), and for the operational meaning of the metric field.
Contents:
1 Introduction
2 Fletcher's Theorem
  2.1 Maudlin on the clock hypothesis in special relativity
  2.2 Fletcher's result in special relativity
  2.3 Fletcher's theorem in general relativity
3 Electromagnetism and the Geometrical-Optical Limit
  3.1 Maxwell's equations in curved spacetime
  3.2 The geometrical-optical limit
  3.3 Rotating spacetimes
  3.4 Aren't Gödel spacetimes unphysical?
4 The Clock Hypothesis and Chronogeometry
  4.1 Natural and mathematical observations
  4.2 Clock registry discord
  4.3 Chronogeometry
5 Conclusion
PubDate: Fri, 03 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy055
Issue No: Vol. 71, No. 4 (2018)
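As background for the geometrical-optical limit the paper examines (a textbook sketch, not the authors' derivation): write the electromagnetic potential as a rapidly oscillating wave and expand Maxwell's vacuum equations in the small parameter ε,

```latex
A_\mu = \mathrm{Re}\!\left[\left(a_\mu + \varepsilon\, b_\mu + \cdots\right)
        e^{\,i\theta/\varepsilon}\right],
\qquad k_\mu := \nabla_\mu \theta .
```

At leading order in ε one finds

```latex
k^\mu k_\mu = 0 , \qquad k^\nu \nabla_\nu k^\mu = 0 ,
```

that is, wave vectors are null and geodesic. The point the paper presses is that this holds only to leading order: in the rotating spacetimes it considers, the higher-order corrections do not vanish, so physical light rays in general fail to traverse null geodesics.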
- Unmixing for Causal Inference: Thoughts on McCaffrey and Danks
Authors: Zhang K.; Glymour M.
Pages: 1319–1330
Abstract: McCaffrey and Danks have posed the challenge of discovering causal relations in data drawn from a mixture of distributions, presenting it as an impossibility result for functional magnetic resonance imaging (fMRI). We give an algorithm that addresses this problem for the distributions commonly assumed in fMRI studies and find that, in testing, it can accurately separate data from mixed distributions. As with other obstacles to automated search, the problem of mixed distributions is not an impossible one, but rather a challenge.
Contents:
1 Introduction
2 Background
3 Addressing the Problem
4 Discussion
PubDate: Fri, 10 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy040
Issue No: Vol. 71, No. 4 (2018)
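The abstract gives no algorithmic detail, so the sketch below is only a generic illustration of the unmixing idea under an assumed toy setup, not the authors' method: data pooled from two linear-Gaussian regimes in which X affects Y with opposite signs look causally uninformative, until a Gaussian mixture model separates the regimes and per-regime analysis can proceed.

```python
# Unmixing illustration (assumed setup, not the article's algorithm):
# pooled, the X-Y relation washes out; separated, each regime's
# coefficient is recovered.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def regime(coef, intercept, n=500):
    # One linear-Gaussian regime: X -> Y with the given coefficient.
    x = rng.normal(0.0, 1.0, n)
    y = coef * x + intercept + 0.3 * rng.normal(size=n)
    return np.column_stack([x, y])

data = np.vstack([regime(+2.0, 0.0), regime(-2.0, 10.0)])
print("pooled slope:", round(np.polyfit(data[:, 0], data[:, 1], 1)[0], 2))  # ~0

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(data)
for k in (0, 1):
    x, y = data[labels == k, 0], data[labels == k, 1]
    print(f"component {k}: slope {np.polyfit(x, y, 1)[0]:+.2f} (n={len(x)})")
```

In a real fMRI pipeline the per-component step would be a proper causal discovery procedure rather than a single regression; the toy only shows why separating the mixture first changes what search can find.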
- A Verisimilitude Framework for Inductive Inference, with an Application to Phylogenetics
Authors: Vassend O.
Pages: 1359–1383
Abstract: Bayesianism and likelihoodism are two of the most important frameworks philosophers of science use to analyse scientific methodology. However, both frameworks face a serious objection: much scientific inquiry takes place in highly idealized frameworks where all the hypotheses are known to be false. Yet, both Bayesianism and likelihoodism seem to be based on the assumption that the goal of scientific inquiry is always truth rather than closeness to the truth. Here, I argue in favour of a verisimilitude framework for inductive inference. In the verisimilitude framework, scientific inquiry is conceived of, in part, as a process where inference methods ought to be calibrated to appropriate measures of closeness to the truth. To illustrate the verisimilitude framework, I offer a reconstruction of parsimony evaluations of scientific theories, and I give a reconstruction and extended analysis of the use of parsimony inference in phylogenetics. By recasting phylogenetic inference in the verisimilitude framework, it becomes possible to both raise and address objections to phylogenetic methods that rely on parsimony.
Contents:
1 Introduction
2 Problems with the Law of Likelihood
3 Introducing Verisimilitude-Based Inference
4 Examples of Verisimilitude-Based Inference Procedures
  4.1 Parsimony inference over theories
  4.2 Parsimony inference in phylogenetics
5 Conclusion
PubDate: Mon, 06 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy054
Issue No: Vol. 71, No. 4 (2018)
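As concrete background for Section 4.2 (an editorial illustration, not the article's reconstruction): the parsimony score that phylogenetic methods minimize can be computed with Fitch's classic small-parsimony algorithm, which counts the minimum number of character-state changes a tree topology requires.

```python
# Fitch's small-parsimony algorithm: score one character on a binary tree.
def fitch(tree, states):
    """Return (state set, parsimony cost) for a node.

    tree: nested tuples for internal nodes, leaf names (strings) at tips.
    states: dict mapping leaf name -> observed character state.
    """
    if isinstance(tree, str):                     # leaf: observed state, zero cost
        return {states[tree]}, 0
    (ls, lc), (rs, rc) = fitch(tree[0], states), fitch(tree[1], states)
    common = ls & rs
    if common:                                    # agreement: no extra change needed
        return common, lc + rc
    return ls | rs, lc + rc + 1                   # disagreement: one state change

# One character scored on two rival topologies for four taxa.
states = {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"}
tree1 = (("human", "chimp"), ("mouse", "rat"))
tree2 = (("human", "mouse"), ("chimp", "rat"))
for t in (tree1, tree2):
    print(t, "-> parsimony cost", fitch(t, states)[1])
```

Here tree1 needs one change and tree2 needs two, so parsimony prefers tree1; the article's question is how such preferences should be understood when no candidate tree model is strictly true.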
- Mechanistic Causation and Constraints: Perspectival Parts and Powers, Non-perspectival Modal Patterns
Authors: Winning J.
Pages: 1385–1409
Abstract: Any successful account of the metaphysics of mechanistic causation must satisfy at least five key desiderata. In this article, I lay out these five desiderata and explain why existing accounts of the metaphysics of mechanistic causation fail to satisfy them. I then present an alternative account that does satisfy the five desiderata. According to this alternative account, we must resort to a type of ontological entity that is new to metaphysics, but not to science: constraints. Finally, I explain how a constraints-based metaphysics fits best with the emerging consensus on the nature of mechanistic explanation.
Contents:
1 Introduction
2 The Five Desiderata
  2.1 The first two desiderata: Intrinsicness and productivity
  2.2 The third desideratum: Scientific validity or non-mysteriousness
  2.3 The fourth desideratum: Directionality
  2.4 The fifth desideratum: Perspectival nature of mechanisms
3 Constraints and Causation
  3.1 Multi-perspectival realism and causal structure
  3.2 Causal structure as laws
  3.3 Causal structures in analytical mechanics: Constraints
  3.4 A metaphysics inspired by analytical mechanics: Constraints as ontologically primitive modal structures
4 Constraints and Mechanistic Causal Powers
  4.1 Inter- versus intra-perspectival categories
  4.2 Mechanistic causal powers are grounded by constraints
  4.3 Intrinsicness and constraints
  4.4 Constraints and productiveness
  4.5 Constraints and directionality
5 Conclusion
PubDate: Tue, 17 Jul 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy042
Issue No: Vol. 71, No. 4 (2018)
- Counterfactuals and Explanatory Pluralism
Authors: Khalifa K.; Doble G.; Millson J.
Pages: 1439–1460
Abstract: Recent literature on non-causal explanation raises the question as to whether explanatory monism, the thesis that all explanations submit to the same analysis, is true. The leading monist proposal holds that all explanations support change-relating counterfactuals. We provide several objections to this monist position.
Contents:
1 Introduction
2 Change-Relating Monism's Three Problems
3 Dependency and Monism: Unhappy Together
4 Another Challenge: Counterfactual Incidentalism
  4.1 High-grade necessity
  4.2 Unity in diversity
5 Conclusion
PubDate: Sat, 04 Aug 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy048
Issue No: Vol. 71, No. 4 (2018)
- Deterministic Convergence and Strong Regularity
Authors: Nielsen M.
Pages: 1461–1491
Abstract: Bayesians since Savage ([1972]) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgements will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour ([1980]), Earman ([1992]), and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting that the theorems are too weak to underwrite notions of scientific objectivity and intersubjective agreement. I investigate, in a very general framework, the conditions under which updated probabilities actually converge to a settled opinion and the conditions under which the updated probabilities of two agents actually converge to the same settled opinion. I call this mode of convergence 'deterministic', and derive results that extend theorems in Huttegger ([2015b]). The results here lead to a simple characterization of deterministic convergence for Bayesian learners and give rise to an interesting argument for what I call 'strong regularity', the view that probabilities of non-empty events should be bounded away from zero.
Contents:
1 Introduction
2 Preliminaries
3 Deterministic Convergence
4 Consensus in the Limit
5 Strong Regularity
6 Conclusion
Appendix
PubDate: Thu, 28 Jun 2018 00:00:00 GMT
DOI: 10.1093/bjps/axy038
Issue No: Vol. 71, No. 4 (2018)
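A toy illustration of the consensus-in-the-limit idea (an editorial example, far simpler than the article's general framework): two agents with sharply opposed Beta priors, each assigning positive probability across the whole parameter range, update on a shared stream of coin flips. The gap between their posterior estimates shrinks here at the rate 8/(10+n) no matter how the flips come out, which is the 'deterministic' flavour of convergence at issue.

```python
# Two Bayesian agents, same evidence stream, different Beta priors:
# their posterior means merge as data accumulate.
import random

random.seed(42)
counts = [[1, 9], [9, 1]]            # opposed priors, both nonzero everywhere
true_p, n = 0.7, 0
for checkpoint in (10, 100, 1000, 10000):
    while n < checkpoint:
        flip = random.random() < true_p
        for c in counts:              # both agents see the very same flip
            c[0] += flip
            c[1] += 1 - flip
        n += 1
    e1, e2 = (c[0] / (c[0] + c[1]) for c in counts)
    print(f"n={n:>5}  agent1={e1:.3f}  agent2={e2:.3f}  gap={abs(e1 - e2):.4f}")
```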