Hybrid journal (may contain Open Access articles). ISSN (Print) 0007-0882; ISSN (Online) 1464-3537. Published by Oxford University Press.

Authors: Griffiths P; Matthewson J.
Pages: 301–327
Abstract: Some ‘naturalist’ accounts of disease employ a biostatistical account of dysfunction, whilst others use a ‘selected effect’ account. Several recent authors have argued that the biostatistical theory (BST) offers the best hope for a naturalist account of disease. We show that the selected effect account survives the criticisms levelled by these authors relatively unscathed, and has significant advantages over the BST. Moreover, unlike the BST, it has a strong theoretical rationale and can provide substantive reasons to decide difficult cases. This is illustrated by showing how life-history theory clarifies the status of so-called diseases of old age. The selected effect account of function deserves a more prominent place in the philosophy of medicine than it currently occupies.
1 Introduction
2 Biostatistical and Selected Effect Accounts of Function
3 Objections to the Selected Effect Account
  3.1 Boorse
  3.2 Kingma
  3.3 Hausman
  3.4 Murphy and Woolfolk
4 Problems for the Biostatistical Account
  4.1 Schwartz
5 Analysis versus Explication
6 Explicating Dysfunction: Life History Theory and Senescence
7 Conclusion
PubDate: Sat, 08 Oct 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw021
Issue No: Vol. 69, No. 2 (2016)

Authors: Weatherall J.
Pages: 329–350
Abstract: I argue that the hole argument is based on a misleading use of the mathematical formalism of general relativity. If one is attentive to mathematical practice, I will argue, the hole argument is blocked.
1. Introduction
2. A Warmup Exercise
3. The Hole Argument
4. An Argument from Classical Spacetime Theory
5. The Hole Argument Revisited
PubDate: Tue, 16 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw012
Issue No: Vol. 69, No. 2 (2016)

Authors: Steele K; Werndl C.
Pages: 351–375
Abstract: This article argues that two common intuitions are too crude: (a) that ‘use-novel’ data are special for confirmation, and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
1 Introduction
2 A Climate Case Study
3 The Bayesian Method vis-à-vis Intuitions
4 Classical Tests vis-à-vis Intuitions
5 Classical Model-Selection Methods vis-à-vis Intuitions
  5.1 Introducing classical model-selection methods
  5.2 Two cases
6 Re-examining Our Case Study
7 Conclusion
PubDate: Tue, 30 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw024
Issue No: Vol. 69, No. 2 (2016)
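As an illustrative aside (a toy example with made-up numbers, not the article's climate case study): in a simple Bayesian setting, the very data used to calibrate a model's free parameter can simultaneously confirm that model over a rival, which is one way claim (b) can fail on the Bayesian account.

```python
# Hedged sketch: M0 is a fair coin (no free parameter); M1 is a biased coin
# whose parameter p must be calibrated from the data (p in {0.2, 0.8},
# uniform prior). The same data set both calibrates p and confirms M1.
from math import comb

def binom_lik(p, k, n):
    """Probability of k heads in n tosses for heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 15, 20                      # observed: 15 heads in 20 tosses
prior_m0, prior_m1 = 0.5, 0.5

lik_m0 = binom_lik(0.5, k, n)
# Marginal likelihood of M1 averages over its parameter prior
lik_m1 = 0.5 * binom_lik(0.2, k, n) + 0.5 * binom_lik(0.8, k, n)
post_m1 = prior_m1 * lik_m1 / (prior_m0 * lik_m0 + prior_m1 * lik_m1)

# Calibration: the same data concentrate M1's parameter posterior on p = 0.8
post_p08 = binom_lik(0.8, k, n) / (binom_lik(0.2, k, n) + binom_lik(0.8, k, n))

print(post_m1 > prior_m1)   # True: the calibration data also confirm M1
print(post_p08 > 0.99)      # True: ...while fixing M1's free parameter
```

Because Bayesian updating conditions on the total evidence, there is no formal slot in which data could calibrate without also bearing on the model posterior; that is the sense in which the no-double-counting rule is not generally true in this framework.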

Authors: Ladyman J; Presnell S.
Pages: 377–420
Abstract: Homotopy Type Theory (HoTT) is a putative new foundation for mathematics grounded in constructive intensional type theory that offers an alternative to the foundations provided by ZFC set theory and category theory. This article explains and motivates an account of how to define, justify, and think about HoTT in a way that is self-contained, and argues that, so construed, it is a candidate for being an autonomous foundation for mathematics. We first consider various questions that a foundation for mathematics might be expected to answer, and find that many of them are not answered by the standard formulation of HoTT as presented in the ‘HoTT Book’. More importantly, the presentation of HoTT given in the HoTT Book is not autonomous since it explicitly depends upon other fields of mathematics, in particular homotopy theory. We give an alternative presentation of HoTT that does not depend upon ideas from other parts of mathematics, and in particular makes no reference to homotopy theory (but is compatible with the homotopy interpretation), and argue that it is a candidate autonomous foundation for mathematics. Our elaboration of HoTT is based on a new interpretation of types as mathematical concepts, which accords with the intensional nature of the type theory.
1 Introduction
2 What Is a Foundation for Mathematics?
  2.1 A characterization of a foundation for mathematics
  2.2 Autonomy
3 The Basic Features of Homotopy Type Theory
  3.1 The rules
  3.2 The basic ways to construct types
  3.3 Types as propositions and propositions as types
  3.4 Identity
  3.5 The homotopy interpretation
4 Autonomy of the Standard Presentation?
5 The Interpretation of Tokens and Types
  5.1 Tokens as mathematical objects?
  5.2 Tokens and types as concepts
6 Justifying the Elimination Rule for Identity
7 The Foundations of Homotopy Type Theory without Homotopy
  7.1 Framework
  7.2 Semantics
  7.3 Metaphysics
  7.4 Epistemology
  7.5 Methodology
8 Possible Objections to this Account
  8.1 A constructive foundation for mathematics?
  8.2 What are concepts?
  8.3 Isn’t this just Brouwerian intuitionism?
  8.4 Duplicated objects
  8.5 Intensionality and substitution salva veritate
9 Conclusion
  9.1 Advantages of this foundation
PubDate: Thu, 22 Sep 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw006
Issue No: Vol. 69, No. 2 (2016)

Authors: Alexandrova A.
Pages: 421–445
Abstract: Well-being, health, and freedom are some of the many phenomena of interest to science whose definitions rely on a normative standard. Empirical generalizations about them thus present a special case of value-ladenness. I propose the notion of a ‘mixed claim’ to denote such generalizations. Against the prevailing wisdom, I argue that we should not seek to eliminate them from science. Rather, we need to develop principles for their legitimate use. Philosophers of science have already reconciled values with objectivity in several ways, but none of the existing proposals are suitable for mixed claims. Using the example of the science of well-being, I articulate a conception of objectivity for this science and for mixed claims in general.
1 Introduction
2 What Are Mixed Claims?
3 Mixed Claims Are Different
  3.1 Values as reasons to pursue science
  3.2 Values as agenda-setters
  3.3 Values as ethical constraints on research protocols
  3.4 Values as arbiters between underdetermined theories
  3.5 Values as determinants of standards of confirmation
  3.6 Values as sources of wishful thinking and fraud
4 Mixed Claims Should Stay
  4.1 Against Nagel
5 The Dangers of Mixed Claims
6 The Existing Accounts of Objectivity
  6.1 The perils of impartiality
7 Objectivity for Mixed Claims
8 Three Rules
  8.1 Unearth the value presuppositions in methods and measures
  8.2 Check if value presuppositions are invariant to disagreements
  8.3 Consult the relevant parties
9 Conclusion
PubDate: Tue, 16 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw027
Issue No: Vol. 69, No. 2 (2016)

Authors: Curiel E.
Pages: 447–483
Abstract: I examine the debate between substantivalists and relationalists about the ontological character of spacetime and conclude it is not well posed. I argue that the hole argument does not bear on the debate, because it provides no clear criterion to distinguish the positions. I propose two such precise criteria and construct separate arguments based on each to yield contrary conclusions, one supportive of something like relationalism and the other of something like substantivalism. The lesson is that one must fix an investigative context in order to make such criteria precise, but different investigative contexts yield inconsistent results. I examine questions of existence about spacetime structures other than the spacetime manifold itself to argue that it is more fruitful to focus on pragmatic issues of physicality, a notion that lends itself to several different explications, all of philosophical interest, none privileged a priori over any of the others. I conclude by suggesting an extension of the lessons of my arguments to the broader debate between realists and instrumentalists.
1 Introduction
2 The Hole Argument
3 Limits of Spacetimes
4 Pointless Constructions
5 The Debate between Substantivalists and Relationalists
6 Existence and Physicality: An Embarrassment of Spacetime Structures
7 Valedictory Remarks on Realism and Instrumentalism, and the Structure of Our Knowledge of Physics
PubDate: Wed, 17 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw014
Issue No: Vol. 69, No. 2 (2016)

Authors: Andersen H.
Pages: 485–508
Abstract: A finer-grained delineation of a given explanandum reveals a nexus of closely related causal and non-causal explanations, complementing one another in ways that yield further explanatory traction on the phenomenon in question. By taking a narrower construal of what counts as a causal explanation, a new class of distinctively mathematical explanations pops into focus; Lange’s ([2013]) characterization of distinctively mathematical explanations can be extended to cover these. This new class of distinctively mathematical explanations is illustrated with the Lotka–Volterra equations. There are at least two distinct ways those equations might hold of a system, one of which yields straightforwardly causal explanations, and another that yields explanations that are distinctively mathematical in terms of nomological strength. In the first case, one first picks out a system or class of systems, and finds that the equations hold in a causal–explanatory way. In the second case, one starts with the equations and the explanations that must apply to any system of which the equations hold, and only then turns to the world to see of what systems, if any, they do in fact hold. Using this new way in which a model might hold of a system, I highlight four specific avenues by which causal and non-causal explanations can complement one another.
1. Introduction
2. Delineating the Boundaries of Causal Explanation
  2.1. Why construe causal explanation narrowly? The land of explanation versus grain-focusing
  2.2. Reasons to narrow the scope of causal explanation
3. Broadening the Scope of Mathematical Explanation
4. Lotka–Volterra: Same Model, Different Explanation Types
  4.1. General biocide in the Lotka–Volterra model
  4.2. Two ways a model can hold, yielding causal versus mathematical explanations
5. Four Complementary Relationships between Mathematical and Causal Explanation
  5.1. Slight reformulations of explananda
  5.2. Causal distortion of idealized mathematical models
  5.3. Partial explanations requiring supplementation
  5.4. Explanatory dimensionality
6. Conclusion
PubDate: Wed, 17 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw023
Issue No: Vol. 69, No. 2 (2016)
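The general-biocide example (Section 4.1 above) has a purely mathematical core, often called the Volterra principle: for any system of which the Lotka–Volterra equations hold, adding equal extra mortality to both species raises the prey equilibrium and lowers the predator equilibrium, regardless of the system's causal details. A minimal sketch with illustrative (made-up) parameter values:

```python
# Lotka-Volterra model:
#   dx/dt = a*x - b*x*y   (prey x)
#   dy/dt = d*x*y - g*y   (predator y)
# Interior equilibrium: x* = g/d, y* = a/b.
# A general biocide adds mortality m to both species: a -> a - m, g -> g + m,
# so x* rises and y* falls for ANY admissible parameters (0 < m < a).

def equilibrium(a, b, d, g, biocide=0.0):
    """Return the interior equilibrium (prey*, predator*) under a biocide."""
    a_eff, g_eff = a - biocide, g + biocide
    return g_eff / d, a_eff / b

# Illustrative values only; the conclusion does not depend on them.
x0, y0 = equilibrium(a=1.0, b=0.1, d=0.05, g=0.5)
x1, y1 = equilibrium(a=1.0, b=0.1, d=0.05, g=0.5, biocide=0.2)

print(x1 > x0 and y1 < y0)  # True: prey equilibrium up, predator down
```

The point in the abstract's terms: when one starts from the equations rather than from a causally specified system, this conclusion holds with a stronger-than-nomological necessity of anything the equations describe.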

Authors: Benci V; Horsten L; Wenmackers S.
Pages: 509–552
Abstract: Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability (NAP) theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general.
1 Introduction
2 The Limits of Classical Probability Theory
  2.1 Classical probability functions
  2.2 Limitations
  2.3 Infinitesimals to the rescue?
3 NAP Theory
  3.1 First four axioms of NAP
  3.2 Continuity and conditional probability
  3.3 The final axiom of NAP
  3.4 Infinite sums
  3.5 Definition of NAP functions via infinite sums
  3.6 Relation to numerosity theory
4 Objections and Replies
  4.1 Cantor and the Archimedean property
  4.2 Ticket missing from an infinite lottery
  4.3 Williamson’s infinite sequence of coin tosses
  4.4 Point sets on a circle
  4.5 Easwaran and Pruss
5 Dividends
  5.1 Measure and utility
  5.2 Regularity and uniformity
  5.3 Credence and chance
  5.4 Conditional probability
6 General Considerations
  6.1 Non-uniqueness
  6.2 Invariance
Appendix
PubDate: Thu, 11 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw013
Issue No: Vol. 69, No. 2 (2016)
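The motivation can be illustrated with the standard fair infinite lottery (a textbook example, not the article's own axiomatization): no real-valued probability function on countably many tickets can be both regular and countably additive.

```latex
\[
P(\{n\}) = \varepsilon \ \text{for all tickets } n \in \mathbb{N}
\quad\Longrightarrow\quad
\begin{cases}
\varepsilon > 0: & \displaystyle\sum_{n=1}^{\infty} P(\{n\}) = \infty \neq 1
  \quad \text{(countable additivity fails)},\\[4pt]
\varepsilon = 0: & P(\{n\}) = 0 \ \text{for a possible outcome}
  \quad \text{(regularity fails)}.
\end{cases}
\]
\]A non-Archimedean function escapes the dilemma by assigning each ticket an
infinitesimal probability $P(\{n\}) = 1/\alpha$, where $\alpha$ is an infinite
number, so that $0 < 1/\alpha < r$ for every real $r > 0$.
```

This is the dilemma that Sections 2.2 and 2.3 address, and which the NAP axioms are designed to dissolve.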

Authors: Lupher T.
Pages: 553–576
Abstract: Some physicists and philosophers argue that unitarily inequivalent representations (UIRs) in quantum field theory (QFT) are mathematical surplus structure. Support for that view, sometimes called ‘algebraic imperialism’, relies on Fell’s theorem and its deployment in the algebraic approach to QFT. The algebraic imperialist uses Fell’s theorem to argue that UIRs are ‘physically equivalent’ to each other. The mathematical, conceptual, and dynamical aspects of Fell’s theorem are examined. Its use as a criterion for physical equivalence is scrutinized in detail, and it is proven that Fell’s theorem does not apply to the vast number of representations used in the algebraic approach. UIRs are not another case of theoretical underdetermination, because they make different predictions about ‘classical’ operators. These results are applied to the Unruh effect, where there is a continuum of UIRs to which Fell’s theorem does not apply.
1 Introduction
2 Weak Equivalence and Physical Equivalence
3 Mathematical Overview of Algebraic Quantum Field Theory
4 Fell’s Theorem and Philosophical Responses to Weak Equivalence
5 Weak Equivalence in C*-Algebras and W*-Algebras
6 Classical Equivalence and Weak Equivalence
7 Interlude: Is Weak Equivalence Really Physical Equivalence?
8 The Unruh Effect
9 Time Evolution and Symmetries
10 Conclusions
Appendix
PubDate: Thu, 04 Aug 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw017
Issue No: Vol. 69, No. 2 (2016)

Authors: Roche W; Shogenji T.
Pages: 577–604
Abstract: This article proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI: by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater the amount of information we acquire, the better our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it faces the problem of measure sensitivity and fails to justify the use of MI in giving definitive answers to questions of information. We propose a fourth interpretation of MI, by reduction in expected inaccuracy, where inaccuracy is measured by a strictly proper monotonic scoring rule. It is shown that the answers to questions of information given by MI are definitive whenever this interpretation is appropriate, and that it is appropriate in a wide range of applications with epistemic implications.
1 Introduction
2 Formal Analyses of the Three Interpretations
  2.1 Reduction in doubt
  2.2 Reduction in uncertainty
  2.3 Divergence
3 Inconsistency with Epistemic Value of Information
4 Problem of Measure Sensitivity
5 Reduction in Expected Inaccuracy
6 Resolution of the Problem of Measure Sensitivity
  6.1 Alternative measures of inaccuracy
  6.2 Resolution by strict propriety
  6.3 Range of applications
7 Global Scoring Rules
8 Conclusion
PubDate: Sat, 22 Oct 2016 00:00:00 GMT
DOI: 10.1093/bjps/axw025
Issue No: Vol. 69, No. 2 (2016)
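A concrete instance of the reduction-in-expected-inaccuracy idea (a standard identity, sketched here on a made-up toy distribution rather than the authors' examples): when inaccuracy is measured by the logarithmic scoring rule, the expected reduction in inaccuracy from learning Y equals the mutual information I(X;Y).

```python
# Toy joint distribution over binary X, Y; compare MI computed by the
# divergence formula with the expected drop in log-score inaccuracy.
from math import log2, isclose

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Mutual information via the divergence interpretation:
#   I(X;Y) = sum_xy p(x,y) log [ p(x,y) / (p(x) p(y)) ]
mi = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

# Expected inaccuracy under the log score, before and after learning Y:
#   E[-log p(X)]  versus  E[-log p(X|Y)], with p(x|y) = p(x,y)/p(y)
prior_inacc = sum(p * -log2(px[x]) for (x, y), p in joint.items())
post_inacc = sum(p * -log2(p / py[y]) for (x, y), p in joint.items())

print(isclose(mi, prior_inacc - post_inacc))  # True: the two coincide
```

The log score is strictly proper, so on this toy distribution the divergence reading and the expected-inaccuracy reading agree; the article's contribution concerns which interpretation justifies MI's verdicts across the full class of such scoring rules.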