Stat
   Hybrid journal (may contain Open Access articles)
   ISSN (Online) 2049-1573
   Published by John Wiley and Sons  [1597 journals]
  • A mutual information approach to calculating nonlinearity
    • Authors: Reginald Smith
      Abstract: A new method to measure nonlinear dependence between two variables is described using mutual information to analyse the separate linear and nonlinear components of dependence. This technique, which gives an exact value for the proportion of linear dependence, is then compared with another common test for linearity, the Brock, Dechert and Scheinkman test. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-11-24T21:17:27.580302-05:00
      DOI: 10.1002/sta4.96
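
A minimal numerical sketch of the idea in this abstract (not the authors' estimator): estimate total mutual information from a two-dimensional histogram and compare it with the mutual information implied by the linear correlation alone, via the bivariate-Gaussian identity I = -0.5 log(1 - rho^2). The simulated relationship and the bin count are arbitrary choices.

```python
import numpy as np

def mutual_info_hist(x, y, bins=20):
    """Histogram-based estimate of mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.6 * x + 0.4 * x**2 + rng.normal(size=5000)   # linear + nonlinear dependence

total_mi = mutual_info_hist(x, y)
rho = np.corrcoef(x, y)[0, 1]
linear_mi = -0.5 * np.log(1 - rho**2)   # MI of a bivariate Gaussian with this rho
print(f"total MI ~ {total_mi:.3f}, linear share ~ {linear_mi / total_mi:.2%}")
```
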
  • The role of regimes in short‐term wind speed forecasting at multiple wind farms
    • Authors: Karen Kazor; Amanda S. Hering
      Abstract: Large‐scale integration of wind energy into electric utility systems requires accurate short‐term wind speed forecasts. At these horizons, statistical models that account for spatial and temporal information have demonstrated improved accuracy over both physical models and statistical models that ignore spatial information. Off‐site information can be incorporated by modelling wind speeds conditional on a set of regimes that capture the predominant wind patterns within a geographic region. Identifying these regimes is a crucial model‐building step. Herein, we propose a new forecasting method that relies on regimes identified by fitting a Gaussian mixture model (GMM) to the wind vector, and we build regimes based on a single site, a local average of sites, and a region‐wide average. We compare the performance of the models with GMM‐identified regimes against three state‐of‐the‐art reference models that each account for wind regimes differently. The models are evaluated at 30‐minute, 1‐hour, and 2‐hour ahead horizons at ten sites across the Pacific Northwest. GMM regimes based on local information produce the best forecasts and have a significantly improved accuracy at a region‐wide level over the state‐of‐the‐art models. Even greater improvements are achieved when an average of the forecasts produced by each method is constructed. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-10-26T23:42:29.207529-05:00
      DOI: 10.1002/sta4.91
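
A sketch of the regime-identification step only, using scikit-learn's GaussianMixture on simulated wind vectors; the two planted regimes and the crude regime-conditional summary at the end are illustrations rather than the paper's forecasting models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical hourly wind vectors (u, v components) at a reference site.
uv = np.vstack([rng.normal([6, 1], 1.5, (500, 2)),     # e.g. a westerly regime
                rng.normal([-2, 5], 1.5, (500, 2))])   # e.g. a southerly regime

gmm = GaussianMixture(n_components=2, random_state=0).fit(uv)
regime = gmm.predict(uv)

speed = np.hypot(uv[:, 0], uv[:, 1])
# Regime-conditional summary; the paper fits separate forecast models per regime.
for k in range(gmm.n_components):
    print(f"regime {k}: mean speed {speed[regime == k].mean():.2f}")
```
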
  • Spatio‐temporal change of support with application to American Community Survey multi‐year period estimates
    • Authors: Jonathan R. Bradley; Christopher K. Wikle, Scott H. Holan
      Abstract: We present hierarchical Bayesian methodology to perform spatio‐temporal change of support (COS) for survey data with Gaussian sampling errors. This methodology is motivated by the American Community Survey (ACS), which is an ongoing survey administered by the US Census Bureau that provides timely information on several key demographic variables. The ACS has published 1‐year, 3‐year, and 5‐year period estimates, and margins of error, for demographic and socio‐economic variables recorded over predefined geographies. The spatio‐temporal COS methodology considered here provides data users with a way to estimate ACS variables on customized geographies and time periods while accounting for sampling errors. Additionally, 3‐year ACS period estimates are to be discontinued, and this methodology can provide predictions of ACS variables for 3‐year periods given the available period estimates. The methodology is based on a spatio‐temporal mixed‐effects model with a low‐dimensional spatio‐temporal basis function representation, which provides multi‐resolution estimates through basis function aggregation in space and time. This methodology includes a novel parameterization that uses a target dynamical process and recently proposed parsimonious Moran's I propagator structures. Our approach is demonstrated through two applications using public‐use ACS estimates and is shown to produce good predictions on a hold‐out set of 3‐year period estimates. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-10-06T21:33:58.461836-05:00
      DOI: 10.1002/sta4.94
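
A toy, one-dimensional illustration of the basis-function mechanism behind change of support, without the survey errors or the Bayesian machinery: because the field is linear in the random effects, averaging the basis functions over a custom region yields that region's estimate directly. The grid, knots, and kernel below are all assumptions.

```python
import numpy as np

grid = np.linspace(0, 1, 200)                        # fine "source support"
knots = np.linspace(0, 1, 10)
B = np.exp(-0.5 * ((grid[:, None] - knots[None, :]) / 0.1) ** 2)  # Gaussian bases

rng = np.random.default_rng(2)
eta = rng.normal(size=knots.size)                    # low-dimensional random effects
field = B @ eta

in_region = (grid >= 0.3) & (grid < 0.55)            # a customized geography
B_region = B[in_region].mean(axis=0)                 # aggregated basis functions
print(np.isclose(B_region @ eta, field[in_region].mean()))  # True: same estimate
```
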
  • Examining statistical disclosure issues involving digital images of ROC curves
    • Authors: Gregory J. Matthews; Ofer Harel
      Abstract: It has been established that knowing the true values of the empirical receiver operating characteristic (ROC) curve (i.e. false‐positive and true‐positive rate pairs for all thresholds) along with a subset of the full data set consisting of n − 1 observations can cause unwanted disclosures. Here, we explore a similar problem with two main extensions. First, rather than knowledge of the true values of the empirical ROC curve, we start only with an image of the empirical ROC curve. Second, rather than considering only subsets of size n − 1, we look at several differently sized subsets. Given this information (i.e. empirical ROC image and a subset of the full data set), we experimentally act as a data snooper and explore what can be learned about unobserved portions of the full data set. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-10-01T22:39:51.116117-05:00
      DOI: 10.1002/sta4.93
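
A small sketch of why an exact empirical ROC curve is disclosive (not the authors' snooping procedure): with untied scores, the staircase's vertical and horizontal steps reproduce the case/control sequence of the score-ordered data, which is much of what a snooper needs.

```python
import numpy as np

rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(1, 1, 8), rng.normal(0, 1, 8)])
labels = np.array([1] * 8 + [0] * 8)

# Empirical ROC: sweep the threshold down through the sorted scores.
order = np.argsort(-scores)
tpr = np.cumsum(labels[order]) / labels.sum()

# A snooper with the exact ROC vertices can read off the label sequence:
# each vertical step is a case, each horizontal step a control.
recovered = (np.diff(np.concatenate([[0], tpr])) > 0).astype(int)
print(np.array_equal(recovered, labels[order]))      # True
```
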
  • Zeros and ones: a case for suppressing zeros in sensitive count data with an application to stroke mortality
    • Authors: Harrison Quick; Scott H. Holan, Christopher K. Wikle
      Abstract: In the current era of global internet connectivity, privacy concerns are of the utmost importance. When official statistical agencies collect spatially referenced, confidential data that they intend to release as public‐use files, the suppression of small counts is a common measure that agencies take to protect the confidentiality of the data‐subjects from ill‐intentioned users. The goal of this paper is to demonstrate that an interval suppression criterion that does not suppress zeros can fail to protect regions with a single occurrence. We illustrate the difference in disclosure risk between an interval suppression criterion and a one‐sided suppression criterion by considering a US county‐level dataset composed of the number of deaths due to stroke in White men. Here, we illustrate that an interval suppression criterion leads to a twofold increase in the disclosure risk when compared with a one‐sided suppression criterion for regions with a single incidence among a population of less than 600. We conclude with an extension of these findings beyond stroke mortality and by offering general guidelines for data suppression. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-09-21T21:52:47.016667-05:00
      DOI: 10.1002/sta4.92
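
A schematic of the two suppression rules being contrasted, with a hypothetical cutoff of 10. The interval rule publishes zeros, so an intruder who sees a suppressed neighbouring cell knows it contains at least one case; the one-sided rule removes that signal.

```python
import numpy as np

counts = np.array([0, 1, 3, 7, 12, 25])   # hypothetical small-area death counts

# Interval rule: suppress counts in [1, 9]; zeros are published.
interval_suppressed = (counts >= 1) & (counts <= 9)
# One-sided rule: suppress everything below 10, zeros included.
one_sided_suppressed = counts <= 9

print(interval_suppressed)    # [False  True  True  True False False]
print(one_sided_suppressed)   # [ True  True  True  True False False]
```
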
  • Figures of merit for simultaneous inference and comparisons in simulation experiments
    • Authors: Noel Cressie; Sandy Burden
      Abstract: This article considers the traditional figures of merit, namely, bias and mean squared (prediction) error, which are typically used to evaluate simulation experiments. We propose functions of them that account for different variables' units; these alternative figures of merit are closely tied to simultaneous multivariate inference on an unknown parameter vector or unknown state vector. Their usefulness is illustrated in a simulation experiment, where the goal is to determine the statistical properties associated with prediction of a multivariate state. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-08-06T22:39:50.972665-05:00
      DOI: 10.1002/sta4.88
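
A sketch of the traditional figures of merit and one assumed way of making them unit-free by scaling each component; the paper's actual proposal is tied to simultaneous multivariate inference and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.array([10.0, 0.5])                          # state vector, mixed units
preds = truth + rng.normal(0, [2.0, 0.1], (1000, 2))   # simulated predictions

bias = preds.mean(axis=0) - truth
mspe = ((preds - truth) ** 2).mean(axis=0)
# One assumed unit-free alternative: scale each component by its own spread.
scaled_mspe = mspe / preds.var(axis=0)
print(bias, mspe, scaled_mspe)
```
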
  • Accelerated non‐parametrics for cascades of Poisson processes
    • Authors: Chris J. Oates
      Abstract: Cascades of Poisson processes are probabilistic models for spatio‐temporal phenomena in which (i) previous events may trigger subsequent events and (ii) both the background and triggering processes are conditionally Poisson. Such phenomena are typically “data rich but knowledge poor,” in the sense that large datasets are available, yet a mechanistic understanding of the background and triggering processes that generate the data is unavailable. In these settings, non‐parametric estimation plays a central role. However, existing non‐parametric estimators have computational and storage complexity O(N²), precluding their application on large datasets. Here, by assuming the triggering process acts only locally, we derive non‐parametric estimators with computational complexity O(N log N) and storage complexity O(N). Our approach automatically learns the domain of the triggering process from data and is essentially free from hyperparameters. The methodology is applied to a large seismic dataset where estimation under existing algorithms would be infeasible. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-08-06T12:32:10.741911-05:00
      DOI: 10.1002/sta4.87
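
A sketch of the locality device only, with an assumed window w: searchsorted finds each event's local neighbourhood in O(N log N) total, and a triggering density can then be estimated from within-window gaps. The full estimator's alternation with a background-process estimate is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
times = np.sort(rng.uniform(0, 2000, 5000))   # hypothetical event times
w = 5.0                                       # assumed local triggering support

# Locate each event's window start in O(N log N) total.
left = np.searchsorted(times, times - w, side="left")
# Gaps to earlier events inside the window; pairs outside it are never formed.
gaps = np.concatenate([times[i] - times[left[i]:i] for i in range(len(times))])
dens, edges = np.histogram(gaps, bins=25, range=(0, w), density=True)
print(f"{gaps.size} local pairs instead of {len(times) * (len(times) - 1) // 2}")
```
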
  • Random effects model for bias estimation: higher‐order asymptotics
    • Authors: Andrew L. Rukhin
      Abstract: A common issue in physical, chemical and biometrical applications is to validate a laboratory's method. For that purpose, a lab performs measurements on a certified reference material with a given coverage interval. These reference materials are a major tool for assuring quality and reliability of results obtained by a lab in analysis and testing. Assuming that the measurand is random with a normal distribution whose parameters are obtained from the reference material certificate, new remarkably accurate confidence intervals for the bias are derived. These procedures are based on modern higher‐order asymptotic statistical methods. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
      PubDate: 2015-05-31T21:02:51.971205-05:00
      DOI: 10.1002/sta4.82
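
A first-order sketch of a bias confidence interval from a certified reference material; the paper's contribution is a higher-order refinement of exactly this kind of interval, which is not reproduced here, and the degrees-of-freedom choice below is a simplification.

```python
import numpy as np
from scipy import stats

x = np.array([10.12, 10.07, 10.15, 10.09, 10.11])   # hypothetical lab results
mu_ref, u_ref = 10.00, 0.02        # certificate value and its standard uncertainty

n = len(x)
bias = x.mean() - mu_ref
se = np.sqrt(x.var(ddof=1) / n + u_ref**2)   # combine lab and reference spread
half = stats.t.ppf(0.975, df=n - 1) * se     # simplified df choice
print(f"bias = {bias:.3f} +/- {half:.3f}")
```
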
  • Multivariate spatial hierarchical Bayesian empirical likelihood methods for small area estimation
    • Authors: Aaron T. Porter; Scott H. Holan, Christopher K. Wikle
      Abstract: Recent advances in small area estimation incorporating both explicit spatial autocorrelation and empirical likelihood techniques have produced estimates with greater precision. Furthermore, the multivariate Fay–Herriot models take advantage of within‐location correlation between multiple outcomes for a set of small areas. We extend the Fay–Herriot model to the spatially explicit multivariate setting by utilizing empirical likelihood techniques. We then model the five‐year period estimates from the American Community Survey (2006–10) of percent of unemployed individuals and percent of families in poverty for the counties of Missouri. We demonstrate bivariate reduction in leave‐one‐out median absolute deviation over an approximately equivalently specified parametric model. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-05-04T22:50:51.111952-05:00
      DOI: 10.1002/sta4.81
  • A new weighted likelihood approach
    • Authors: Adhidev Biswas; Tania Roy, Suman Majumder, Ayanendranath Basu
      Abstract: In this paper, we propose a new weighted likelihood procedure. Here, the weights are suitably calibrated functions of appropriately described residuals at each data point. The residuals describe the match (or mismatch) between the empirical distribution function and the model distribution function. If the match is high, the observation is considered to be a regular observation. But for large (in magnitude) residuals, there is a mismatch, and the corresponding likelihood score function may require downweighting in order to obtain a robust solution. As there is little or no downweighting for observations where there is no evidence of mismatch, asymptotically, we expect that there will be no downweighting under the pure model leading to highly efficient estimators. On the other hand, properly calibrated weight functions that penalize the observations with large residuals will lead to highly robust solutions under model misspecification and the presence of outliers. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-04-21T02:13:36.208644-05:00
      DOI: 10.1002/sta4.80
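
A sketch in the spirit of weighted likelihood estimation for a normal mean. The Pearson-type density residual used here is one assumed stand-in for the paper's distribution-function-based residuals, and the weight curve is invented; the point is that well-matched observations keep weight near one while outliers are downweighted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])  # 5% gross outliers

f_emp = stats.gaussian_kde(x)(x)   # smoothed empirical density at the data points
mu = np.median(x)                  # robust starting value
for _ in range(50):
    delta = f_emp / stats.norm.pdf(x, loc=mu) - 1.0      # Pearson-type residual
    w = 1.0 / (1.0 + np.maximum(delta, 0.0))             # assumed weight curve
    mu = np.average(x, weights=w)                        # weighted score equation
print(f"weighted-likelihood mean {mu:.2f} vs contaminated mean {x.mean():.2f}")
```
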
  • Optimal sample planning for system state analysis with partial data
    • Authors: Martin Heller; Jan Hannig, Malcolm R. Leadbetter
      Abstract: We develop optimal and computationally practical procedures to minimize uncertainty concerning the presence of dangerous levels of a contaminant within a building when neither replication nor complete data collection is feasible. More generally, we address inference about the state of a finite system when the state is related to information collected over components of the system when only partial data collection is feasible. When there is no correlation between sample locations, a simple random sample or maximum a priori trait presence would provide optimal sampling choices. When complicated probability models describe trait manifestation, the need to collect only partial data precludes a full fitting of complicated models, and one must rely heavily on prior information naturally leading to a Bayesian approach. Herein, we introduce a computationally efficient heuristic algorithm to simultaneously find optimal sample locations and decision rule parameterizations and then show that it drastically outperforms both random selection and maximum a priori methods. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-03-27T04:04:11.619384-05:00
      DOI: 10.1002/sta4.79
  • Non‐parametric Bayes to infer playing strategies adopted in a population of mobile gamers
    • Authors: Seppo Virtanen; Mattias Rost, Matthew Higgs, Alistair Morrison, Matthew Chalmers, Mark Girolami
      Abstract: Analysis of trace logs recording how a heterogeneous and diverse population of consumers interacts with software on mobile devices provides unprecedented possibilities for understanding how software is actually used and for finding recurring patterns of software usage over the population that are exhibited to a greater or lesser degree in each individual user. In this work, we consider an elementary mobile game played by a population of mobile gamers and collect pieces of game sessions over an extended period, resulting in a collection of users' trace logs for multiple sessions. We develop a simple, yet flexible, non‐parametric Bayes approach to infer playing strategies adopted in the population from the logged traces of game interactions. We demonstrate that our approach finds interpretable strategies and provides good predictive performance compared with alternative modelling assumptions using a non‐parametric Bayes framework. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-03-04T03:59:11.414744-05:00
      DOI: 10.1002/sta4.75
  • Unbiased regression estimation under correlated linkage errors
    • Authors: Gunky Kim; Raymond Chambers
      Abstract: Linkage errors can occur when probability‐based methods are used to link records from two or more distinct data sets corresponding to the same target population. Recent research on allowing for these errors when carrying out regression analysis based on linked data assumes that the linkage errors are independent when more than two data sets are used to generate these data. In this paper, we extend these results to accommodate the more realistic scenario of dependent linkage errors. Our simulation results show that an incorrect assumption of independent linkage errors can lead to insufficient linkage error bias correction, while an approach that allows for correlated linkage errors appears to overcome this problem. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-03-02T06:49:03.886206-05:00
      DOI: 10.1002/sta4.76
  • Spanifold: spanning tree flattening onto lower dimension
    • Authors: Shoja'eddin Chenouri; Petr Kobelevskiy, Christopher G. Small
      Abstract: Dimensionality reduction and manifold learning techniques attempt to recover a lower‐dimensional submanifold from the data as encoded in high dimensions. Many techniques, linear or non‐linear, have been introduced in the literature. Standard methods, such as Isomap and local linear embedding, map the high‐dimensional data points into a low dimension so as to globally minimize a so‐called energy function, which measures the mismatch between the precise geometry in high dimensions and the approximate geometry in low dimensions. However, the local effects of such minimizations are often unpredictable, because the energy minimization algorithms are global in nature. In contrast to these methods, the Spanifold algorithm of this paper constructs a tree on the manifold and flattens the manifold in such a way as to approximately preserve pairwise distance relationships within the tree. The vertices of this tree are the data points, and the edges of the tree form a subset of the edges of the nearest‐neighbour graph on the data. In addition, the pairwise distances between data points close to the root of the tree undergo minimal distortion as the data are flattened. This allows the user to design the flattening algorithm so as to approximately preserve neighbour relationships in any chosen local region of the data. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-02-23T04:25:55.709429-05:00
      DOI: 10.1002/sta4.74
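
A sketch of the tree-construction step only (the flattening itself is the paper's contribution and is not attempted): a spanning tree whose edges lie inside the nearest-neighbour graph, here taken to be the minimum spanning tree for concreteness.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(11)
t = rng.uniform(0, 3 * np.pi, 300)
X = np.column_stack([t * np.cos(t),              # a swiss-roll-like 2D manifold
                     rng.uniform(0, 5, 300),     # embedded in 3D
                     t * np.sin(t)])

# Spanning tree inside the kNN graph: its edge lengths are the pairwise
# distances that the flattening then tries to preserve.
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")
tree = minimum_spanning_tree(knn)
print(tree.nnz, "tree edges for", X.shape[0], "points")  # n - 1 when connected
```
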
  • Issue Information
    • Abstract: No abstract is available for this article.
      PubDate: 2015-02-16T02:36:03.549675-05:00
      DOI: 10.1002/sta4.63
  • Correcting for non‐ignorable missingness in smoking trends
      Abstract: Data missing not at random (MNAR) are a major challenge in survey sampling. We propose an approach based on registry data to deal with non‐ignorable missingness in health examination surveys. The approach relies on follow‐up data available from administrative registers several years after the survey. For illustration, we use data on smoking prevalence in the Finnish National FINRISK study conducted in 1972–97. The data consist of measured survey information including missingness indicators, register‐based background information and register‐based time‐to‐disease survival data. The parameters of the missingness mechanism are estimable with these data although the original survey data are MNAR. The underlying data generation process is modelled by a Bayesian model. The results indicate that the estimated smoking prevalence rates in Finland may be significantly affected by missing data. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-01-29T22:52:50.683143-05:00
      DOI: 10.1002/sta4.73
  • On sparse representation for optimal individualized treatment selection with penalized outcome weighted learning
    • Authors: Rui Song; Michael Kosorok, Donglin Zeng, Yingqi Zhao, Eric Laber, Ming Yuan
      Pages: 59 - 68
      Abstract: As a new strategy for treatment, which takes individual heterogeneity into consideration, personalized medicine is of growing interest. Discovering individualized treatment rules for patients who have heterogeneous responses to treatment is one of the important areas in developing personalized medicine. As more and more information per individual is being collected in clinical studies and not all of the information is relevant for treatment discovery, variable selection becomes increasingly important in discovering individualized treatment rules. In this article, we develop a variable selection method based on penalized outcome weighted learning through which an optimal treatment rule is considered as a classification problem where each subject is weighted in proportion to his or her clinical outcome. We show that the resulting estimator of the treatment rule is consistent and establish variable selection consistency and the asymptotic distribution of the estimators. The performance of the proposed approach is demonstrated via simulation studies and an analysis of chronic depression data. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-03-06T01:43:09.313932-05:00
      DOI: 10.1002/sta4.78
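
A sketch of the outcome-weighted-learning idea with an L1-penalized linear support vector classifier standing in for the paper's estimator; the data-generating rule, the propensity of 0.5, and the shift that makes outcomes positive are all assumptions of this toy.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n, p = 500, 5
X = rng.normal(size=(n, p))
A = rng.choice([-1, 1], size=n)                       # randomized treatment
R = 1 + A * np.sign(X[:, 0]) + rng.normal(0, 0.5, n)  # helps iff X[:, 0] > 0
R = R - R.min() + 0.1                                 # OWL weights must be positive

# Classify the received treatment, weighting each subject by outcome/propensity;
# the L1 penalty does the variable selection discussed in the abstract.
clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=10000)
clf.fit(X, A, sample_weight=R / 0.5)
rule = np.sign(clf.decision_function(X))
print((rule == np.sign(X[:, 0])).mean(), np.round(clf.coef_, 2))
```
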
  • Visuanimation in statistics
    • Pages: 81 - 96
      Abstract: This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online supplemental material. We present results from statistics research projects using a variety of visuanimations, ranging from exploratory data analysis of image data sets to spatio‐temporal extreme event modelling; these include a multiscale analysis of classification methods, the study of the effects of a simulated explosive volcanic eruption and an emulation of climate model output. This paper serves as an illustration of visuanimation for future publications in Stat. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-04-14T02:35:06.1195-05:00
      DOI: 10.1002/sta4.77
  • A family of likelihood functions to make inferences about the reliability parameter for many stress‐strength distributions
    • Pages: 117 - 129
      Abstract: Many research papers in the statistical literature address the estimation of the reliability parameter in stress‐strength models, considering different types of distributions for stress and for strength. We have found that for many of these distributions, their corresponding profile likelihood functions of the reliability parameter can be grouped in a family of likelihood functions, with a simple algebraic structure that facilitates making inferences about this parameter. The novel family of likelihood functions proposed here, together with maximum likelihood estimation procedures and suitable reparameterizations, was used to obtain a simple closed‐form expression for the likelihood confidence interval of the reliability parameter. This new approach is particularly useful when small and/or unequal sample sizes are involved. Simulation studies for some distributions were carried out to illustrate the performance of the likelihood confidence intervals for the reliability parameter, and adequate coverage frequencies were obtained. The simplicity of our unifying proposal is shown here using three stress‐strength distributions that have been analysed individually in the statistical literature. However, there are many distributions for which inferences about the reliability parameter could be easily obtained using the proposed family. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-05-27T02:16:01.221211-05:00
      DOI: 10.1002/sta4.83
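
A sketch of the estimand R = P(X < Y) for one of the stress-strength distributions covered (exponential); the unified profile-likelihood family and its closed-form confidence interval are the paper's contribution and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
stress = rng.exponential(scale=1.0, size=40)     # X: stress
strength = rng.exponential(scale=3.0, size=40)   # Y: strength

# Nonparametric estimate of R = P(X < Y) (a Mann-Whitney statistic).
R_np = (stress[:, None] < strength[None, :]).mean()

# Exponential stress-strength has a closed form via the MLEs of the means:
R_mle = strength.mean() / (stress.mean() + strength.mean())
print(f"R nonparametric {R_np:.2f}, exponential MLE {R_mle:.2f}")
```
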
  • Modelling space–time varying ENSO teleconnections to droughts in North America
    • Authors: InKyung Choi; Bo Li, Hao Zhang, Yun Li
      Pages: 140 - 156
      Abstract: Teleconnection in atmospheric science refers to a significant correlation between climate anomalies in widely separated regions (typically thousands of kilometres), and it is often considered to be responsible for extreme weather conditions occurring simultaneously over large distances. In this paper, we study the influence of El Niño‐Southern Oscillation teleconnection on meteorological droughts represented by the Palmer severity drought index across North America from 1870 to 1990. We develop a flexible statistical framework based on spatial random effects to model the covariance (teleconnection) between winter (October–March) sea surface temperature in the tropical Pacific and summer (June–August) droughts in North America. Our model allows us to analyse the dynamic pattern of teleconnection over space and time, and results indicate that the influence of El Niño‐Southern Oscillation teleconnections on droughts varies spatially and temporally across North America. We further provide the time‐varying teleconnection estimates with their uncertainties for 12 subregions in North America. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-06-09T19:30:07.090421-05:00
      DOI: 10.1002/sta4.85
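
A toy version of a teleconnection estimate that reduces the paper's space-time covariance model to site-wise correlations between a winter SST index and summer drought indices; all series below are simulated with planted couplings.

```python
import numpy as np

rng = np.random.default_rng(9)
years = 120
nino = rng.normal(size=years)     # stand-in for a winter tropical-Pacific SST index
# Hypothetical summer drought indices at 3 sites with varying ENSO coupling.
pdsi = np.array([0.6, 0.0, -0.4])[:, None] * nino + rng.normal(0, 1, (3, years))

telecon = [np.corrcoef(nino, site)[0, 1] for site in pdsi]
print(np.round(telecon, 2))       # recovers the planted positive/zero/negative links
```
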
  • Preconditioning for classical relationships: a note relating ridge regression and OLS p‐values to preconditioned sparse penalized regression
    • Authors: Karl Rohe
      Pages: 157 - 166
      Abstract: When the design matrix has orthonormal columns, “soft thresholding” the ordinary least squares solution produces the Lasso solution. If one uses the Puffer preconditioned Lasso, then this result generalizes from orthonormal designs to full rank designs (Theorem 1). Theorem 2 refines the Puffer preconditioner to make the Lasso select the same model as removing the elements of the ordinary least squares solution with the largest p‐values. Using a generalized Puffer preconditioner, Theorem 3 relates ridge regression to the preconditioned Lasso; this result is for the high‐dimensional setting, p > n. Where the standard Lasso is akin to forward selection, Theorems 1, 2, and 3 suggest that the preconditioned Lasso is more akin to backward elimination. These results extend beyond the Lasso penalty: for a broad class of sparse and non‐convex techniques (e.g. SCAD and MC+), the results hold for all local minima. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-06-09T19:30:52.330174-05:00
      DOI: 10.1002/sta4.86
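
A quick check of the classical fact behind Theorem 1, in the orthonormal (rather than preconditioned full-rank) case: soft-thresholding the OLS solution reproduces the Lasso. The threshold is n·alpha because scikit-learn scales the squared-error loss by 1/(2n).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(10)
n, p, alpha = 100, 5, 0.005
X, _ = np.linalg.qr(rng.normal(size=(n, p)))       # orthonormal columns: X'X = I
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + 0.3 * rng.normal(size=n)

b_ols = X.T @ y                                    # OLS solution, since X'X = I
b_soft = np.sign(b_ols) * np.maximum(np.abs(b_ols) - n * alpha, 0.0)
b_lasso = Lasso(alpha=alpha, fit_intercept=False, tol=1e-12).fit(X, y).coef_
print(np.allclose(b_soft, b_lasso, atol=1e-6))     # True
```
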
  • Covariance models on the surface of a sphere: when does it matter?
    • Authors: Jaehong Jeong; Mikyoung Jun
      Pages: 167 - 182
      Abstract: There is a growing interest in developing covariance functions for processes on the surface of a sphere because of the wide availability of data on the globe. Utilizing the one‐to‐one mapping between the Euclidean distance and the great circle distance, isotropic and positive definite functions in a Euclidean space can be used as covariance functions on the surface of a sphere. This approach, however, may result in physically unrealistic distortion on the sphere especially for large distances. We consider several classes of parametric covariance functions on the surface of a sphere, defined with either the great circle distance or the Euclidean distance, and investigate their impact upon spatial prediction. We fit several isotropic covariance models to simulated data as well as real data from the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis on the sphere. We demonstrate that covariance functions originally defined with the Euclidean distance may not be adequate for some global data. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-06-10T20:22:26.94822-05:00
      DOI: 10.1002/sta4.84
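
A sketch of the two metrics under comparison, with an exponential covariance and an assumed range parameter; the chordal (Euclidean) distance understates the great-circle distance, and hence inflates the covariance, at large separations.

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km (haversine formula)."""
    p1, p2 = np.radians([lat1, lat2])
    dlon = np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def chordal(lat1, lon1, lat2, lon2, R=6371.0):
    """Straight-line (Euclidean) distance through the sphere, in km."""
    gc = great_circle(lat1, lon1, lat2, lon2, R)
    return 2 * R * np.sin(gc / (2 * R))

# Exponential covariance under the two metrics for nearly antipodal points:
d_gc, d_ch = great_circle(0, 0, 0, 170), chordal(0, 0, 0, 170)
rho = 5000.0                                        # assumed range parameter, km
print(np.exp(-d_gc / rho), np.exp(-d_ch / rho))     # distortion grows with distance
```
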
  • Longitudinal functional data analysis
    • Pages: 212 - 226
      Abstract: We consider dependent functional data that are correlated because of a longitudinal‐based design: each subject is observed at repeated times and at each time, a functional observation (curve) is recorded. We propose a novel parsimonious modelling framework for repeatedly observed functional observations that allows one to extract low‐dimensional features. The proposed methodology accounts for the longitudinal design, is designed to study the dynamic behaviour of the underlying process, allows prediction of the full future trajectory and is computationally fast. Theoretical properties of this framework are studied, and numerical investigations confirm excellent behaviour in finite samples. The proposed method is motivated by and applied to a diffusion tensor imaging study of multiple sclerosis. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-08-24T21:53:57.461007-05:00
      DOI: 10.1002/sta4.89
  • The perils of quasi‐likelihood information criteria
    • Authors: Yishu Wang; Orla Murphy, Maxime Turgeon, ZhuoYu Wang, Sahir R. Bhatnagar, Juliana Schulz, Erica E. M. Moodie
      Pages: 246 - 254
      Abstract: In this paper, we consider some potential pitfalls of the growing use of quasi‐likelihood‐based information criteria for longitudinal data to select a working correlation structure in a generalized estimating equation framework. In particular, we examine settings where the fully conditional mean does not equal the marginal mean as well as hypothesis testing following selection of the working correlation matrix. Our results suggest that the use of any information criterion for selection of the working correlation matrix is inappropriate when the conditional mean model assumption is violated. We also find that the type I error differs from the nominal level in moderate sample sizes following selection of the form of the working correlation, but improves as the sample size increases, because the selection then concentrates on a single correlation structure. Our results serve to underline the potential dangers that can arise when using information criteria to select correlation structure in routine data analysis. Copyright © 2015 John Wiley & Sons, Ltd.
      PubDate: 2015-10-04T21:37:34.138179-05:00
      DOI: 10.1002/sta4.95
  • Wiley‐Blackwell Announces Launch of Stat – The ISI's Journal for the Rapid Dissemination of Statistics Research
      PubDate: 2012-04-17T04:34:14.600281-05:00
      DOI: 10.1002/sta4.1
JournalTOCs © 2009-2015