Stat
   Hybrid journal (can contain Open Access articles)
   ISSN (Online): 2049-1573
   Published by John Wiley and Sons
  • Hidden Gibbs random fields model selection using Block Likelihood Information Criterion
    • Abstract: Performing model selection between Gibbs random fields is a very challenging task. Indeed, because of the Markovian dependence structure, the normalizing constant of the fields cannot be computed using standard analytical or numerical methods. Furthermore, such unobserved fields cannot be integrated out, and the likelihood evaluation is a doubly intractable problem. This is a central obstacle to selecting the model that best fits observed data. We introduce a new approximate version of the Bayesian Information Criterion. We partition the lattice into contiguous rectangular blocks, and we approximate the probability measure of the hidden Gibbs field by the product of some Gibbs distributions over the blocks. On that basis, we estimate the likelihood and derive the Block Likelihood Information Criterion (BLIC) that answers model choice questions such as the selection of the dependence structure or the number of latent states. We study the performance of BLIC on these questions. In addition, we present a comparison with ABC algorithms to point out that the novel criterion offers a better trade‐off between time efficiency and reliable results. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-04-21T22:06:41.645376-05:
      DOI: 10.1002/sta4.112
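The block approximation behind BLIC can be pictured with a toy example: partition a small binary lattice into 2×2 blocks, compute each block's likelihood exactly by enumerating its configurations (so the intractable global normalizing constant is replaced by cheap per-block constants), and subtract a BIC-style penalty. This is only a hedged sketch of the idea, not the authors' implementation; the random field, the single-parameter two-state model and all function names are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
field = rng.integers(0, 2, size=(8, 8))   # toy binary lattice

def block_loglik(block, beta):
    """Exact log-likelihood of one small block under a two-state Potts-type
    model; the per-block normalizing constant is computed by brute force."""
    h, w = block.shape
    def energy(b):
        # number of equal horizontal + vertical neighbour pairs in the block
        return int(np.sum(b[:, :-1] == b[:, 1:]) + np.sum(b[:-1, :] == b[1:, :]))
    logZ = np.log(sum(np.exp(beta * energy(np.array(c).reshape(h, w)))
                      for c in itertools.product([0, 1], repeat=h * w)))
    return beta * energy(block) - logZ

def blic(field, beta, bh=2, bw=2):
    """Sum of block log-likelihoods minus a BIC-style penalty
    (one free parameter here, beta); higher is better."""
    ll = sum(block_loglik(field[i:i + bh, j:j + bw], beta)
             for i in range(0, field.shape[0], bh)
             for j in range(0, field.shape[1], bw))
    return 2 * ll - np.log(field.size)

# model choice: pick the interaction strength with the highest score
scores = {beta: blic(field, beta) for beta in (0.0, 0.5, 1.0)}
best_beta = max(scores, key=scores.get)
```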
  • Point pattern analysis on a region of a sphere
    • Authors: Thomas Lawrence; Adrian Baddeley, Robin K. Milne, Gopalan Nair
      Abstract: We develop statistical methods for analysing a pattern of points on a region of the sphere, including intensity modelling and estimation, summary functions such as the K function, point process models, and model‐fitting techniques. The methods are demonstrated by analysing a dataset giving the sky positions of galaxies. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-04-13T20:50:46.325464-05:
      DOI: 10.1002/sta4.108
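A minimal sketch of one of the summary functions mentioned above, assuming a Ripley-style empirical K function based on great-circle distances on the full sphere; the uniform toy data (a stand-in for galaxy sky positions) and all variable names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# uniform points on the unit sphere
n = 200
xyz = rng.normal(size=(n, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

# great-circle (angular) distances between all pairs
cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)
ang = np.arccos(cosang)

def k_sphere(r):
    """Empirical K function on the sphere: mean number of further points
    within angular distance r, divided by the intensity n / (4*pi)."""
    intensity = n / (4 * np.pi)
    counts = (ang <= r).sum(axis=1) - 1    # exclude the point itself
    return counts.mean() / intensity

radii = np.linspace(0.1, 1.0, 10)
k_hat = np.array([k_sphere(r) for r in radii])

# benchmark: for a homogeneous Poisson process, K(r) = 2*pi*(1 - cos r)
k_pois = 2 * np.pi * (1 - np.cos(radii))
```

Comparing `k_hat` against `k_pois` is the usual diagnostic for clustering or regularity relative to complete spatial randomness.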
  • Generalized Tikhonov regularization in estimation of ordinary differential equations models
    • Abstract: We consider estimation of parameters in models defined by systems of ordinary differential equations (ODEs). This problem is important because many processes in different fields of science are modelled by systems of ODEs. Various estimation methods based on smoothing have been suggested to bypass numerical integration of the ODE system. In this paper, we do not propose another method based on smoothing but show how some of the existing ones can be brought together under one unifying framework. The framework is based on generalized Tikhonov regularization and extremum estimation. We define an approximation of the ODE solution by viewing the system of ODEs as an operator equation and exploiting the connection with regularization theory. Combining the introduced regularized solution with an extremum criterion function provides a general framework for estimating parameters in ODEs, which can handle partially observed systems. If the extremum criterion function is the negative log‐likelihood, then suitable regularized solutions yield estimators that are consistent and asymptotically efficient. The well‐known generalized profiling procedure fits into the proposed framework. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-04-07T00:21:12.77646-05:0
      DOI: 10.1002/sta4.111
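The smoothing idea that bypasses numerical integration of the ODE system can be sketched as a penalized fit: estimate the trajectory values and the parameter jointly, trading off data fit against violation of the ODE. The toy model x' = −θx, the penalty weight and the crude finite-difference derivative below are illustrative assumptions, not the paper's regularized solution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# noisy observations of x(t) = exp(-theta_true * t)
theta_true = 1.5
t = np.linspace(0.0, 2.0, 21)
y = np.exp(-theta_true * t) + 0.02 * rng.normal(size=t.size)

lam = 10.0  # regularization weight on the ODE residual

def objective(z):
    theta, x = z[0], z[1:]
    dxdt = np.gradient(x, t)               # crude derivative of grid values
    fit = np.sum((y - x) ** 2)             # data-fit term
    ode = np.sum((dxdt + theta * x) ** 2)  # penalize violating x' = -theta*x
    return fit + lam * ode

z0 = np.concatenate([[1.0], y])            # start from the raw data
res = minimize(objective, z0, method="L-BFGS-B")
theta_hat = res.x[0]
```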
  • Interactive graphics for functional data analyses
    • Authors: Julia Wrobel; So Young Park, Ana Maria Staicu, Jeff Goldsmith
      Abstract: Although there are established graphics that accompany the most common functional data analyses, generating these graphics for each dataset and analysis can be cumbersome and time‐consuming. Often, the barriers to visualization inhibit useful exploratory data analyses and prevent the development of intuition for a method and its application to a particular dataset. The refund.shiny package was developed to address these issues for several of the most common functional data analyses. After conducting an analysis, the plot_shiny() function is used to generate an interactive visualization environment that contains several distinct graphics, many of which are updated in response to user input. These visualizations reduce the burden of exploratory analyses and can serve as a useful tool for the communication of results to non‐statisticians. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-03-31T11:42:13.00238-05:0
      DOI: 10.1002/sta4.109
  • Uncovering smartphone usage patterns with multi‐view mixed membership models
    • Authors: Seppo Virtanen; Mattias Rost, Alistair Morrison, Matthew Chalmers, Mark Girolami
      Abstract: We present a novel class of mixed membership models for combining information from multiple data sources, inferring inter‐view and intra‐view statistical associations. An important contemporary application of this work is the meaningful synthesis of data sources corresponding to smartphone application usage, app developers' descriptions and customer feedback. We demonstrate the ability of the model to infer meaningful, interpretable and informative app usage patterns based on the app usage data augmented with rich text data describing the apps. We provide quantitative model evaluations showing that the model provides significantly better predictive ability than related existing methods. © 2016 The Authors. Stat published by John Wiley & Sons Ltd.
      PubDate: 2016-01-21T19:05:00.383077-05:
      DOI: 10.1002/sta4.103
  • Correlated components
    • Authors: Trevor F. Cox; David S. Arnold
      Abstract: Principal components analysis is a much used and practical technique for analysing multivariate data, finding a particular set of linear compounds of the variables under consideration, such that covariances between all pairs are 0. An alternative view is that when the variables are considered as axes in a Cartesian coordinate system, then principal components analysis is the particular orthogonal rotation of the axes that makes all the pairwise covariances equal to 0. It is this view that is taken here, but instead of finding the rotation that makes all covariances equal to 0, an orthogonal rotation is found that maximizes the sum of the covariances. The rotation is not unique, except for the two or three component case, and so another criterion can be used alongside so that it too can also be optimized. The motivation is that two highly correlated components will tend to measure the same latent variable but with interesting differences because of the orthogonality between them. Theory is given for identifying the correlated components as well as algorithms for finding them. Two illustrative examples are provided, one involving gene expression data and the other consumer questionnaire data. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-01-17T21:11:35.251747-05:
      DOI: 10.1002/sta4.99
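For the two-component case described above there is a closed form: starting from uncorrelated principal component scores with variances v1 and v2, the covariance after an orthogonal rotation by angle θ is sin θ cos θ (v2 − v1), maximized in absolute value at 45°. A hedged numerical check on toy data (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy data: 100 observations on 2 variables with unequal variances
X = rng.normal(size=(100, 2)) * np.array([3.0, 1.0])
X -= X.mean(axis=0)

# principal components: rotate so the pairwise covariance is exactly zero
S = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(S)
Z = X @ eigvec                       # component scores, cov(Z1, Z2) ~ 0

def cov_after_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.cov(Z @ R, rowvar=False)[0, 1]

# covariance after rotation is sin(theta)*cos(theta)*(v2 - v1),
# so the maximizing angle should be 45 degrees
thetas = np.linspace(0.0, np.pi / 2, 91)
covs = np.array([cov_after_rotation(th) for th in thetas])
best_theta = thetas[np.argmax(np.abs(covs))]
```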
  • A genome-wide association study of multiple longitudinal traits with related subjects
    • Authors: Yubin Sung; Zeny Feng, Sanjeena Subedi
    • Abstract: Pleiotropy is the phenomenon whereby a single gene produces multiple correlated phenotypic effects, often characterized as traits, across multiple biological systems. We propose a two-stage method to identify pleiotropic effects on multiple longitudinal traits from a family-based data set. The first stage analyses each longitudinal trait via a three-level mixed-effects model. Random effects at the subject level and at the family level measure the subject-specific genetic effects and the between-subjects intraclass correlations within families, respectively. The second stage performs a simultaneous association test between a single nucleotide polymorphism and all subject-specific effects for multiple longitudinal traits. This is performed using a quasi-likelihood scoring method in which the correlation structure among related subjects is adjusted. Two simulation studies for the proposed method are undertaken to assess both the type I error control and the power. Furthermore, we demonstrate the utility of the two-stage method in identifying pleiotropic genes or loci by analysing the Genetic Analysis Workshop 16 Problem 2 cohort data drawn from the Framingham Heart Study and illustrate the kind of complexity in data that can be handled by the proposed approach. We establish that our two-stage method can identify pleiotropic effects while accommodating varying data types in the model. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-01-12T20:24:55.293787-05:
      DOI: 10.1002/sta4.102
  • Estimation, filtering and smoothing in the stochastic conditional duration model: an estimating function approach
    • Authors: Ramanathan Thekke; Anuj Mishra, Bovas Abraham
    • Abstract: Stochastic conditional duration models are widely used in the financial econometrics literature to model the duration between transactions in a financial market. Even though there have been developments on the modelling side, estimation, filtering and smoothing are still being investigated by researchers in this area. Almost all the existing procedures are highly computationally intensive because of the complexity of the likelihood function. In this paper, we suggest a new procedure for estimation, filtering and smoothing in stochastic conditional duration models, based on estimating functions. Simulation studies indicate that the suggested procedure performs well and is also computationally fast. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-01-12T19:21:32.593463-05:
      DOI: 10.1002/sta4.101
  • Confidence bands for smoothness in nonparametric regression
    • Authors: Julian Faraway
      Abstract: The choice of the smoothing parameter in nonparametric regression is critical to the form of the estimated curve and any inference that follows. Many methods are available that will generate a single choice for this parameter. Here, we argue that the considerable uncertainty in this choice should be explicitly represented. The construction of standard simultaneous confidence bands in nonparametric regression often requires difficult mathematical arguments. We question their practical utility, presenting several deficiencies. We propose a new kind of confidence band that reflects the uncertainty regarding the smoothness of the estimate. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-01-10T23:24:47.684078-05:
      DOI: 10.1002/sta4.100
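One way to picture a band that reflects smoothing-parameter uncertainty (an illustrative reading of the abstract, not necessarily the paper's exact construction) is to refit a kernel smoother over a range of plausible bandwidths and take the envelope of the fits:

```python
import numpy as np

rng = np.random.default_rng(4)

# noisy observations of a smooth curve
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=x.size)

def nw_smooth(x0, h):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

# refit over a range of plausible bandwidths; the envelope of the fits
# shows how much the estimated curve depends on the smoothing parameter
bandwidths = np.linspace(0.02, 0.2, 10)
fits = np.array([nw_smooth(x, h) for h in bandwidths])
lower, upper = fits.min(axis=0), fits.max(axis=0)
```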
  • Issue Information
    • Pages: 1 - 3
      Abstract: No abstract is available for this article.
      PubDate: 2016-02-22T07:45:50.590087-05:
      DOI: 10.1002/sta4.90
  • Exploiting the quantile optimality ratio in finding confidence intervals for quantiles
    • Authors: Luke A. Prendergast; Robert G. Staudte
      Pages: 70 - 81
      Abstract: A standard approach to confidence intervals for quantiles requires good estimates of the quantile density. The optimal bandwidth for kernel estimation of the quantile density depends on an underlying location‐scale family only through the quantile optimality ratio (QOR), which is the starting point for our results. While the QOR is not distribution‐free, it turns out that what is optimal for one family often works quite well for families having similar shape. This allows one to rely on a single representative QOR if one has a rough idea of the distributional shape. Another option that we explore assumes the data can be modelled by the highly flexible generalized lambda distribution (GLD), already studied by others, and we show that using the QOR for the estimated GLD can lead to more than competitive intervals. Effective confidence intervals for the difference between quantiles from independent populations are a byproduct. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-02-28T22:43:17.215023-05:
      DOI: 10.1002/sta4.105
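The role of the quantile density q(p) = Q'(p) in such intervals can be sketched with the standard asymptotic interval Q̂(p) ± z √(p(1−p)/n) · q̂(p). Here q̂ uses a crude symmetric difference of sample quantiles rather than the kernel estimate with a QOR-driven bandwidth that the paper studies; all names and the step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=500)
n, p = data.size, 0.5

q_hat = np.quantile(data, p)

# crude quantile-density estimate q(p) = Q'(p) via a symmetric difference
eps = 0.05
qd_hat = (np.quantile(data, p + eps) - np.quantile(data, p - eps)) / (2 * eps)

# asymptotic normal interval: Q(p) +/- z * sqrt(p*(1-p)/n) * q(p)
z = 1.96
half = z * np.sqrt(p * (1 - p) / n) * qd_hat
ci = (q_hat - half, q_hat + half)
```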
  • A note on automatic data transformation
    • Authors: Qing Feng; Jan Hannig, J. S. Marron
      Pages: 82 - 87
      Abstract: Modern data analysis frequently involves variables with highly non‐Gaussian marginal distributions. However, commonly used analysis methods are most effective with roughly Gaussian data. This paper introduces an automatic transformation that improves the closeness of distributions to normality. For each variable, a new family of parametrizations of the shifted logarithm transformation is proposed, which is unique in treating the data as real valued and in allowing transformation for both left and right skewness within the single family. This also allows an automatic selection of the parameter value (which is crucial for high‐dimensional data with many variables to transform) by minimizing the Anderson–Darling test statistic of the transformed data. An application to image features extracted from melanoma microscopy slides demonstrates the utility of the proposed transformation in addressing data with excessive skewness, heteroscedasticity and influential observations. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-03-01T22:21:34.435768-05:
      DOI: 10.1002/sta4.104
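The automatic selection step can be sketched as follows: apply a shifted logarithm and choose the shift that minimizes the Anderson–Darling statistic of the transformed data. This simplified one-sided version (right skew only) and the search grid are my assumptions; the paper's family also accommodates left skewness within the same parametrization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.lognormal(mean=0.0, sigma=1.0, size=300)   # right-skewed sample

def shifted_log(x, delta):
    """Shifted logarithm for right-skewed data (simplified, one-sided)."""
    return np.log(x - x.min() + delta)

def ad_stat(x):
    """Anderson-Darling statistic against the normal family."""
    return stats.anderson(x, dist="norm").statistic

# pick the shift that makes the transformed data most nearly Gaussian
deltas = np.geomspace(1e-3, 10.0, 40)
ad = np.array([ad_stat(shifted_log(data, d)) for d in deltas])
best_delta = deltas[np.argmin(ad)]

ad_before = ad_stat(data)
ad_after = ad.min()
```

For this toy sample the transformed data should be far closer to normality than the raw data.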
  • Variable selection in function‐on‐scalar regression
    • Authors: Yakuan Chen; Jeff Goldsmith, R. Todd Ogden
      Pages: 88 - 101
      Abstract: For regression models with functional responses and scalar predictors, it is common for the number of predictors to be large. Despite this, few methods for variable selection exist for function‐on‐scalar models, and none account for the inherent correlation of residual curves in such models. By expanding the coefficient functions using a B‐spline basis, we pose the function‐on‐scalar model as a multivariate regression problem. Spline coefficients are grouped within each coefficient function, and a group minimax concave penalty is used for variable selection. We adapt techniques from generalized least squares to account for residual covariance by “pre‐whitening” using an estimate of the covariance matrix and establish theoretical properties for the resulting estimator. We further develop an iterative algorithm that alternately updates the spline coefficients and covariance; simulation results indicate that this iterative algorithm often performs as well as pre‐whitening using the true covariance and substantially outperforms methods that neglect the covariance structure. We apply our method to two‐dimensional planar reaching motions in a study of the effects of stroke severity on motor control and find that our method provides lower prediction errors than competing methods. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-03-02T20:33:49.341623-05:
      DOI: 10.1002/sta4.106
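The basis-expansion step that turns the function-on-scalar model into multivariate regression can be sketched as below; a polynomial basis stands in for the paper's B-spline basis, and the group penalty and pre-whitening steps are omitted (all names and dimensions are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(8)

# Y = X @ beta(t) + noise: n curves observed on a common grid of T points
n, T, p, K = 60, 50, 3, 5
t = np.linspace(0.0, 1.0, T)
Phi = np.vander(t, K, increasing=True)    # polynomial basis as a stand-in
                                          # for a B-spline basis
X = rng.normal(size=(n, p))
beta = np.vstack([np.sin(2 * np.pi * t), t ** 2, np.zeros(T)])  # 3rd is null
Y = X @ beta + 0.1 * rng.normal(size=(n, T))

# expanding each coefficient function in the basis gives Y ~ X @ B @ Phi.T,
# vectorized as vec(Y) = (Phi kron X) vec(B): a multivariate regression
# in the basis coefficients B (p x K)
b_vec, *_ = np.linalg.lstsq(np.kron(Phi, X), Y.reshape(-1, order="F"),
                            rcond=None)
B = b_vec.reshape(p, K, order="F")
beta_hat = B @ Phi.T                      # estimated coefficient functions
```

A selection penalty would then act on each row of `B` as a group, zeroing out whole coefficient functions.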
  • On the smallest eigenvalues of covariance matrices of multivariate spatial processes
    • Pages: 102 - 107
      Abstract: There has been a growing interest in providing models for multivariate spatial processes. A majority of these models specify a parametric matrix covariance function. Based on observations, the parameters are estimated by maximum likelihood or variants thereof. While the asymptotic properties of maximum likelihood estimators for univariate spatial processes have been analyzed in detail, maximum likelihood estimators for multivariate spatial processes have not yet received the attention they deserve. In this article, we consider the classical increasing‐domain asymptotic setting restricting the minimum distance between the locations. Then, one of the main components to be studied from a theoretical point of view is the asymptotic positive definiteness of the underlying covariance matrix. Based on very weak assumptions on the matrix covariance function, we show that the smallest eigenvalue of the covariance matrix is asymptotically bounded away from zero. Several practical implications are discussed as well. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-03-02T20:41:38.13238-05:0
      DOI: 10.1002/sta4.107
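The phenomenon can be checked numerically: with a minimum separation between locations (the increasing-domain setting), the smallest eigenvalue of, say, an exponential covariance matrix stays bounded away from zero as the number of locations grows. A toy one-dimensional check, where the kernel and spacings are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def min_eigenvalue(n, min_dist):
    """Smallest eigenvalue of an exponential covariance matrix on n points
    placed on a line with spacing at least min_dist."""
    locs = np.cumsum(min_dist + rng.random(n))     # gaps >= min_dist
    D = np.abs(locs[:, None] - locs[None, :])
    C = np.exp(-D)                                  # exponential covariance
    return np.linalg.eigvalsh(C).min()

# the smallest eigenvalue should not collapse toward zero with more points
lams = [min_eigenvalue(n, min_dist=1.0) for n in (10, 50, 200)]
```

Dropping the minimum-separation restriction (letting points cluster arbitrarily) is what drives the smallest eigenvalue toward zero and breaks the asymptotics.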
  • Multinomial probit Bayesian additive regression trees
    • Pages: 119 - 131
      Abstract: This article proposes multinomial probit Bayesian additive regression trees (MPBART) as a multinomial probit extension of Bayesian additive regression trees. MPBART is flexible enough to allow the inclusion of predictors that describe the observed units as well as the available choice alternatives. Through two simulation studies and four real data examples, we show that MPBART exhibits very good predictive performance in comparison with other discrete choice and multiclass classification methods. To implement MPBART, the R package mpbart is freely available from CRAN repositories. Copyright © 2016 John Wiley & Sons, Ltd.
      PubDate: 2016-04-04T14:40:44.051215-05:
      DOI: 10.1002/sta4.110
  • Wiley-Blackwell Announces Launch of Stat – The ISI's Journal for the Rapid Dissemination of Statistics Research
    • PubDate: 2012-04-17T04:34:14.600281-05:
      DOI: 10.1002/sta4.1
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
JournalTOCs © 2009-2015