Stat
  Hybrid journal (can contain Open Access articles)
     ISSN (Online) 2049-1573
     Published by John Wiley and Sons
  • Surface boxplots
    • Authors: Marc G. Genton; Christopher Johnson, Kristin Potter, Georgiy Stenchikov, Ying Sun
      Pages: n/a - n/a
      Abstract: In this paper, we introduce a surface boxplot as a tool for visualization and exploratory analysis of samples of images. First, we use the notion of volume depth to order the images viewed as surfaces. In particular, we define the median image. We use an exact and fast algorithm for the ranking of the images. This allows us to detect potential outlying images that often contain interesting features not present in most of the images. Second, we build a graphical tool to visualize the surface boxplot and its various characteristics. A graph and histogram of the volume depth values allow us to identify images of interest. The code is available in the supporting information of this paper. We apply our surface boxplot to a sample of brain images and to a sample of climate model outputs. Copyright © 2014 John Wiley & Sons Ltd.
      PubDate: 2014-01-22T21:06:22.569842-05:
      DOI: 10.1002/sta4.39
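The abstract's volume-depth ordering can be sketched in a few lines. The toy below is not the authors' algorithm or code (their code is in the paper's supporting information); it ranks tiny simulated 4×4 "surfaces" with a simplified band-depth analogue, under which a constant outlying surface receives depth zero:

```python
import random

def volume_depth(surfaces, i):
    """Simplified band-depth analogue: for surface i, average over all
    pairs (j, k) of other surfaces the fraction of grid points where
    surface i lies inside the band spanned by surfaces j and k."""
    n = len(surfaces)
    pts = [(r, c) for r in range(len(surfaces[0]))
                  for c in range(len(surfaces[0][0]))]
    total, pairs = 0.0, 0
    for j in range(n):
        for k in range(j + 1, n):
            if i in (j, k):
                continue
            inside = sum(
                min(surfaces[j][r][c], surfaces[k][r][c])
                <= surfaces[i][r][c]
                <= max(surfaces[j][r][c], surfaces[k][r][c])
                for r, c in pts)
            total += inside / len(pts)
            pairs += 1
    return total / pairs

random.seed(0)
# four noisy 4x4 surfaces plus one obvious outlier
surfaces = [[[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
            for _ in range(4)]
surfaces.append([[10.0] * 4 for _ in range(4)])  # outlying surface

depths = [volume_depth(surfaces, i) for i in range(len(surfaces))]
median_surface = max(range(len(surfaces)), key=depths.__getitem__)
outlier = min(range(len(surfaces)), key=depths.__getitem__)
```

The deepest surface plays the role of the median image; the shallowest is flagged as a potential outlier.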
       
  • Ricean over Gaussian modelling in magnitude fMRI analysis—added
           complexity with negligible practical benefits
    • Authors: Daniel W. Adrian; Ranjan Maitra, Daniel B. Rowe
      Pages: n/a - n/a
Abstract: It is well known that Gaussian modelling of functional magnetic resonance imaging (fMRI) magnitude time‐course data, which are truly Rice distributed, constitutes an approximation, especially at low signal‐to‐noise ratios (SNRs). Based on this fact, previous work has argued that Rice‐based activation tests show superior performance over their Gaussian‐based counterparts at low SNRs and should be preferred in spite of the attendant additional computational and estimation burden. Here, we revisit these past studies and, after identifying and removing their underlying limiting assumptions and approximations, provide a more comprehensive comparison. Our experimental evaluations using Receiver Operating Characteristic (ROC) curve methodology show that tests derived using Ricean modelling are substantially superior over the Gaussian‐based activation tests only for SNRs below 0.6, that is, SNR values far lower than those encountered in fMRI as currently practiced. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-12-08T22:31:39.657847-05:
      DOI: 10.1002/sta4.34
       
  • Space–time clustering and the permutation moments of quadratic forms
    • Authors: Yi‐Hui Zhou; Gregory Mayhew, Zhibin Sun, Xiaolin Xu, Fei Zou, Fred A. Wright
      Pages: n/a - n/a
      Abstract: The Mantel and Knox space–time clustering statistics are popular tools to establish transmissibility of a disease and detect outbreaks. The most commonly used null distributional approximations may provide poor fits, and researchers often resort to direct sampling from the permutation distribution. However, the exact first four moments for these statistics are available, and Pearson distributional approximations are often effective. Thus, our first goals are to clarify the literature and make these tools more widely available. In addition, by rewriting terms in the statistics, we obtain the exact first four permutation moments for the most commonly used quadratic form statistics, which need not be positive definite. The extension of this work to quadratic forms greatly expands the utility of density approximations for these problems, including for high‐dimensional applications, where the statistics must be extreme in order to exceed stringent testing thresholds. We demonstrate the methods using examples from the investigation of disease transmission in cattle, the association of a gene expression pathway with breast cancer survival, regional genetic association with cystic fibrosis lung disease and hypothesis testing for smoothed local linear regression. © The
      Authors. Stat published by John Wiley & Sons Ltd.
      PubDate: 2013-11-29T01:57:56.107128-05:
      DOI: 10.1002/sta4.37
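As a rough illustration of the direct-sampling permutation approach that the abstract contrasts with exact moments, here is a naive Monte Carlo permutation test for a Knox-type space-time statistic; the one-dimensional locations, thresholds and data are all invented for the example:

```python
import random

def knox_statistic(xs, ts, ds, dt):
    """Knox-type statistic: number of case pairs within spatial
    distance ds and temporal distance dt of each other."""
    n = len(xs)
    return sum(abs(xs[i] - xs[j]) <= ds and abs(ts[i] - ts[j]) <= dt
               for i in range(n) for j in range(i + 1, n))

def knox_permutation_pvalue(xs, ts, ds, dt, n_perm=999, seed=1):
    """Permutation p-value: shuffling the times breaks any space-time
    interaction while preserving both marginal patterns."""
    rng = random.Random(seed)
    observed = knox_statistic(xs, ts, ds, dt)
    ge = sum(
        knox_statistic(xs, rng.sample(ts, len(ts)), ds, dt) >= observed
        for _ in range(n_perm))
    return observed, (1 + ge) / (1 + n_perm)

# toy outbreak: cases near location 0 occur early, cases near 10 late
xs = [0.1, 0.2, 0.3, 0.4, 10.1, 10.2, 10.3, 10.4]
ts = [1, 2, 1, 2, 20, 21, 20, 21]
obs, p = knox_permutation_pvalue(xs, ts, ds=1.0, dt=2.0)
```

Direct sampling like this is exactly what the exact-moment Pearson approximations discussed in the paper let one avoid.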
       
  • A scale space multiresolution method for extraction of time series
           features
    • Authors: Leena Pasanen; Ilkka Launonen, Lasse Holmström
      Pages: n/a - n/a
      Abstract: A scale space multiresolution feature extraction method is proposed for time series data. The method detects intervals where time series features differ from their surroundings, and it produces a multiresolution analysis of the series as a sum of scale‐dependent components. These components are obtained from differences of smooths. The relevant sequence of smoothing levels is determined using derivatives of smooths with respect to the logarithm of the smoothing parameter. As time series are usually noisy, the method uses Bayesian inference to establish the credibility of the components. © The
      Authors. Stat published by John Wiley & Sons Ltd.
      PubDate: 2013-11-28T01:38:42.261887-05:
      DOI: 10.1002/sta4.35
       
  • Stat–The First Year
    • Authors: Nicholas I Fisher
      Pages: n/a - n/a
      Abstract: Stat, a new statistical journal designed for rapid communication of interesting and novel research, was launched in August 2012. During its first year of operation, 21 articles were published on a wide variety of topics and with theory inspired by a diverse range of applications. Additionally, an associated blog, StatBlog, was established with its own panel of Associate Editors to enable rapid and timely discussion of published articles. This report and an associated post on StatBlog reflect briefly on the establishment of the journal, and on current and emerging issues relating to publication of statistical research. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-11-25T21:27:59.220607-05:
      DOI: 10.1002/sta4.36
       
  • Variable selection for non‐parametric quantile regression via
           smoothing spline analysis of variance
    • Authors: Chen‐Yen Lin; Howard Bondell, Hao Helen Zhang, Hui Zou
      Pages: n/a - n/a
      Abstract: Quantile regression provides a more thorough view of the effect of covariates on a response. Non‐parametric quantile regression has become a viable alternative to avoid restrictive parametric assumption. The problem of variable selection for quantile regression is challenging, as important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline analysis of variance models. The proposed sparse non‐parametric quantile regression can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-11-12T06:24:00.520588-05:
      DOI: 10.1002/sta4.33
       
  • A new class of semiparametric transformation models based on first hitting
           times by latent degradation processes
    • Authors: Sangbum Choi; Kjell A. Doksum
      Pages: n/a - n/a
Abstract: In many failure mechanisms, most subjects under study deteriorate physically over time, and thus a depreciation in health may precede failure. A latent stochastic process, called degradation process, may be assumed for modeling such depreciation whereby an event occurs when the process first crosses a threshold. A class of survival regression models can be constructed from the first hitting time of a latent accelerating degradation process, which turns out to be a transformation model in the literature. To characterize these models, we propose to use first‐hitting‐time models for the baseline distribution, specifically inverse Gaussian, Birnbaum–Saunders and gamma distributions, among others. The proposed models have many desirable features, such as a wide variety of shapes of hazard rates, analytical tractability and, most of all, their motivation from a plausible stochastic setting for failure. We estimate the model parameters by the non‐parametric maximum likelihood approach. The estimators are shown to be consistent, asymptotically normal and asymptotically efficient. Simple and stable numerical algorithms are provided to calculate the parameter estimators and estimate their variances. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is illustrated with two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-10-10T03:52:06.108229-05:
      DOI: 10.1002/sta4.31
       
  • A permutation test to identify important attributes for linking crimes of
           serial offenders
    • Authors: Amanda S. Hering; Karen Kazor
      Pages: n/a - n/a
      Abstract: The modus operandi (MO) of a crime describes the unique characteristics that an offender imparts to it. Although in some instances, a serial offender's behavior is circumstantial, some MO behaviors may be consistent from one crime to the next. By investigating these behaviors, similar crimes can be linked to the same individual, but some attributes describing a crime may be more important for linking than others. Two strategies have historically been used to link crimes. One relies on expert criminal judgment, which may require manually sifting through thousands of crime records. In the second, similar attributes are grouped, and logistic regression is applied to coarse frequency summaries of those groupings. In this work, we introduce an intuitive statistical permutation test for assessing the importance of individual attributes in linking crimes, and we show how the results can be used to weight each one's importance to link crimes. By using the serial offenses of four sets of residential burglaries and six sets of robberies identified in Tempe, Arizona, the test is illustrated, and differences among attribute importance of these two types of crimes are highlighted. We demonstrate greater success in linking crimes when the test results are incorporated into a linking analysis. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-09-20T22:45:24.730649-05:
      DOI: 10.1002/sta4.30
       
  • Improved nonparametric inference for multiple correlated periodic
           sequences
    • Authors: Ying Sun; Jeffrey D. Hart, Marc G. Genton
      Pages: n/a - n/a
      Abstract: This paper proposes a cross‐validation method for estimating the period as well as the values of multiple correlated periodic sequences when data are observed at evenly spaced time points. The period of interest is estimated conditional on the other correlated sequences. An alternative method for period estimation based on Akaike's information criterion is also discussed. The improvement of the period estimation performance is investigated both theoretically and by simulation. We apply the multivariate cross‐validation method to the temperature data obtained from multiple ice cores, investigating the periodicity of the El Niño effect. Our methodology is also illustrated by estimating patients’ cardiac cycle from different physiological signals, including arterial blood pressure, electrocardiography, and fingertip plethysmograph. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-08-26T21:47:02.791625-05:
      DOI: 10.1002/sta4.28
       
  • Distribution‐free confidence intervals for the standardized median
    • Authors: Robert G. Staudte
      Pages: n/a - n/a
      Abstract: Assuming that data can be modeled by an unknown location‐scale family of continuous distributions, the aim is to robustly estimate an effect size defined as the median divided by an interquantile range, where the symmetric quantiles are fixed and to be chosen. It is shown that the sample version of this effect size can be variance stabilized, assuming only that the location family density is continuous and positive at the median and the quantiles defining the interquantile range. Confidence intervals for this effect size, which do not require knowledge of the underlying population, are derived and assessed for coverage. These methods are highly resistant to outliers and simple to implement on freely available software. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-08-23T23:27:53.754185-05:
      DOI: 10.1002/sta4.29
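A minimal sketch of the effect size in question, the sample median divided by an interquantile range, on simulated data. The paper derives variance-stabilized, distribution-free intervals; this toy instead uses a plain percentile bootstrap, purely for illustration:

```python
import random
import statistics

def standardized_median(data, p=0.25):
    """Effect size: sample median divided by the (p, 1-p)
    interquantile range (here the interquartile range by default)."""
    xs = sorted(data)
    n = len(xs)
    lo_q = xs[int(p * (n - 1))]
    hi_q = xs[int((1 - p) * (n - 1))]
    return statistics.median(xs) / (hi_q - lo_q)

def bootstrap_ci(data, level=0.95, n_boot=2000, seed=2):
    """Percentile-bootstrap interval for the standardized median
    (a stand-in for the paper's variance-stabilized intervals)."""
    rng = random.Random(seed)
    stats = sorted(standardized_median(rng.choices(data, k=len(data)))
                   for _ in range(n_boot))
    alpha = (1 - level) / 2
    return stats[int(alpha * n_boot)], stats[int((1 - alpha) * n_boot) - 1]

random.seed(2)
data = [random.gauss(5, 2) for _ in range(200)]  # simulated sample
est = standardized_median(data)
lo, hi = bootstrap_ci(data)
```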
       
  • A location‐scale model for non‐crossing expectile curves
    • Authors: Sabine K. Schnabel; Paul H.C. Eilers
      Pages: n/a - n/a
      Abstract: In quantile smoothing, crossing of the estimated curves is a common nuisance, in particular with small data sets and dense sets of quantiles. Similar problems arise in expectile smoothing. We propose a novel method to avoid crossings. It is based on a location‐scale model for expectiles and estimates all expectile curves simultaneously in a bundle using iterative least asymmetrically weighted squares. In addition, we show how to estimate a density non‐parametrically from a set of expectiles. The model is applied to two data sets. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-08-14T23:51:03.301756-05:
      DOI: 10.1002/sta4.27
       
  • Gaussian process modeling for engineered surfaces with applications to Si
           wafer production
    • Authors: Matthew Plumlee; Ran Jin, V. Roshan Joseph, Jianjun Shi
      Pages: n/a - n/a
      Abstract: When producing engineered surfaces, the stochastic portion of the processing greatly affects the overall output quality. We propose a Gaussian process model that accounts for the impact of control variables on the stochastic elements of the produced surfaces. An optimization algorithm is outlined to find the maximum likelihood estimates of the model parameters. A case study involving the thickness surfaces of semiconductor wafers is examined that demonstrates the need for the proposed approach. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-08-12T01:14:42.714481-05:
      DOI: 10.1002/sta4.26
       
  • Calibration diagnostics for point process models via the probability
           integral transform
    • Authors: Thordis L. Thorarinsdottir
      Pages: n/a - n/a
      Abstract: We propose the use of the probability integral transform (PIT) for model validation in point process models. The simple PIT diagnostic tools assess the calibration of the model and can detect inconsistencies in both the intensity and the interaction structure. For the Poisson model, the PIT diagnostics can be calculated explicitly. Generally, the calibration may be assessed empirically based on random draws from the model, and the method applies to processes of any dimension. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-07-21T21:07:06.956382-05:
      DOI: 10.1002/sta4.25
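The randomized PIT idea for discrete observations can be sketched as follows. This is a generic illustration on plain Poisson counts with an assumed rate, not the point-process diagnostics developed in the paper: if the model is correctly calibrated, the randomized PIT values are uniform on (0, 1).

```python
import math
import random

def poisson_cdf(k, lam):
    """Poisson(lam) CDF at k, by direct summation of the pmf."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def randomized_pit(count, lam, rng):
    """Randomized PIT for a discrete count: draw uniformly between
    F(count - 1) and F(count); uniform on (0, 1) iff calibrated."""
    lo = poisson_cdf(count - 1, lam) if count > 0 else 0.0
    hi = poisson_cdf(count, lam)
    return lo + rng.random() * (hi - lo)

rng = random.Random(3)
lam = 4.0
# approximately Poisson(4) counts via Binomial(1000, 0.004)
counts = [sum(rng.random() < lam / 1000 for _ in range(1000))
          for _ in range(2000)]
pits = [randomized_pit(c, lam, rng) for c in counts]
mean_pit = sum(pits) / len(pits)  # should sit near 0.5
```

Systematic departures of the PIT histogram from uniformity would flag miscalibration of the assumed model.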
       
  • Practical marginalized multilevel models
    • Authors: Michael E. Griswold; Bruce J. Swihart, Brian S. Caffo, Scott L. Zeger
      Pages: n/a - n/a
      Abstract: Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster‐dependent random variation in an association model. Marginalized multilevel models embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-06-17T21:17:17.28608-05:0
      DOI: 10.1002/sta4.22
       
  • A direct sampler for G‐Wishart variates
    • Authors: Alex Lenkoski
      Pages: n/a - n/a
      Abstract: The G‐Wishart distribution is the conjugate prior for precision matrices that encode the conditional independence of a Gaussian graphical model. Although the distribution has received considerable attention, posterior inference has proven computationally challenging, in part owing to the lack of a direct sampler. In this note, we rectify this situation. The existence of a direct sampler offers a host of new possibilities for the use of G‐Wishart variates. We discuss one such development by outlining a new transdimensional model search algorithm—which we term double reversible jump—that leverages this sampler to avoid normalizing constant calculation when comparing graphical models. We conclude with two short studies meant to investigate our algorithm's validity. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-06-03T22:08:47.56773-05:0
      DOI: 10.1002/sta4.23
       
  • Simultaneous model selection and estimation for mean and association
           structures with clustered binary data
    • Authors: Xin Gao; Grace Y. Yi
      Pages: n/a - n/a
Abstract: This paper investigates the property of the penalized estimating equations when both the mean and association structures are modelled. To select variables for the mean and association structures sequentially, we propose a hierarchical penalized generalized estimating equations (HPGEE2) approach. The first set of penalized estimating equations is solved for the selection of significant mean parameters. Conditional on the selected mean model, the second set of penalized estimating equations is solved for the selection of significant association parameters. The hierarchical approach is designed to accommodate possible model constraints relating the inclusion of covariates into the mean and the association models. This two‐step penalization strategy enjoys a compelling advantage of easing computational burdens compared to solving the two sets of penalized equations simultaneously. HPGEE2 with a smoothly clipped absolute deviation (SCAD) penalty is shown to have the oracle property for the mean and association models. The asymptotic behavior of the penalized estimator under this hierarchical approach is established. An efficient two‐stage penalized weighted least square algorithm is developed to implement the proposed method. The empirical performance of the proposed HPGEE2 is demonstrated through Monte‐Carlo studies and the analysis of a clinical data set. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-05-24T00:46:06.419804-05:
      DOI: 10.1002/sta4.21
       
  • Variable selection in generalized functional linear models
    • Authors: Jan Gertheiss; Arnab Maity, Ana‐Maria Staicu
      Pages: n/a - n/a
Abstract: Modern research data, where a large number of functional predictors is collected on few subjects, are becoming increasingly common. In this paper, we propose a variable selection technique for settings where the predictors are functional and the response is scalar. Our approach is based on adopting a generalized functional linear model framework and using a penalized likelihood method that simultaneously controls the sparsity of the model and the smoothness of the corresponding coefficient functions by adequate penalization. The methodology is characterized by high predictive accuracy and yields interpretable models, while retaining computational efficiency. The proposed method is investigated numerically in finite samples and applied to a diffusion tensor imaging tractography data set and a chemometric data set. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-05-09T00:52:25.956569-05:
      DOI: 10.1002/sta4.20
       
  • Combined analysis of correlated data when data cannot be pooled
    • Authors: Elinor M. Jones; Nuala A. Sheehan, Amadou Gaye, Philippe Laflamme, Paul Burton
      Pages: n/a - n/a
      Abstract: In genetic epidemiology studies, associations between individual genetic variants and phenotypes of interest are generally weak requiring large samples to estimate effects and to address complex statistical questions. Such sample sizes are often only achievable by pooling data from multiple studies; effects of interest can then be investigated through an individual‐level meta‐analysis (ILMA) on the pooled data, or by conducting a conventional study‐level meta‐analysis (SLMA). However, pooling individual‐level research data for an ILMA is not always possible, and researchers may be compelled to conduct an SLMA instead, restricting the sharing to non‐disclosing summary statistics. In certain settings, an individual‐level analysis can be conducted without pooling the data from the different studies. It has already been shown that when data are horizontally partitioned between studies, i.e. data are collected on the same variables in each study but any given study participant appears in one study only, it is possible to fit a generalised linear model in this way. In the present paper, we demonstrate that an individual‐level generalised estimating equations meta‐analysis can be achieved in an analogous manner. This extends the scope of ILMA without data pooling to problems involving correlated and clustered responses. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-04-28T21:59:16.644582-05:
      DOI: 10.1002/sta4.19
       
  • A point process model for tornado report climatology
    • Authors: Dmitriy Karpman; Marco A.R. Ferreira, Christopher K. Wikle
      Pages: 1 - 8
      Abstract: We propose a point process model with multiplicative risk for the study of tornado reports in the United States. In particular, we implement a rigorous statistical procedure to evaluate whether tornado report counts are significantly related to topographic variability. The model we propose also includes flexible nonparametric components for spatial and seasonality effects. We apply the proposed model and methodology to the analysis of tornado report data from 1953 to 2010 in the United States. Our analysis shows that in addition to the spatial and seasonal effects, the topographic variability is an important component of tornado risk. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-01-14T04:01:25.742853-05:
      DOI: 10.1002/sta4.14
       
  • Aligning some Nicholson sheep‐blowfly data sets with system input
           periods
    • Authors: David R. Brillinger
      Pages: 9 - 21
Abstract: During the 1950s, the Australian entomologist Alexander Nicholson studied a sheep pest, Lucilia cuprina (L. cuprina), the sheep‐blowfly. In laboratory experiments, blowfly populations were set up in cages. They were supplied with necessary food and water, and every other day counts were made of the numbers in their various stages of development. The experiments went on for over a year. Various statistical studies have been carried out on their data. Sadly, the bulk of the data appears to be lost. Recently, this author discovered total population counts for ten Nicholson experiments. These data were in a collection of copies of index cards he made during a trip to Australia in 1977. In eight of the experiments, the input food was varied cyclically in sawtooth fashion, each experiment having a different period of application. However, and this is the concern of this article, which data set went with which period of application remains unclear. In the present study, use is made of periodograms, spectrograms and seasonal adjustment to seek a one‐to‐one correspondence between series and period. The estimate constructed is consistent under smoothing and limiting conditions. It is time domain based, but confirmed by periodogram and spectrogram computation. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-02-04T08:17:32.303456-05:
      DOI: 10.1002/sta4.13
       
  • On the assessment of multivariate and multisite measurement systems
    • Authors: Michele Scagliarini
      Pages: 22 - 33
      Abstract: The multivariate Measurement Systems Analysis (MSA) approval criteria proposed in literature are specifically designed for assessing measurement systems made up of a single instrument. Therefore, they may encounter difficulties in assessing multisite measurement systems where there are multiple instruments in parallel. In this work, we propose a method for assessing such complex measurement systems. Since a key assumption in multisite measurement systems is that all instruments are expected to have the same level of precision, we base our proposal on the comparison of the precisions of multivariate instruments by means of a statistical test. A simulation study is performed in order to evaluate the performances of the proposed method. The results show that the illustrated approach is effective for assessing complex measurement systems and can be useful for reducing the costs for performing multivariate MSA. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-01-31T07:31:59.809187-05:
      DOI: 10.1002/sta4.16
       
  • Approximate Bayesian computation via regression density estimation
    • Authors: Yanan Fan; David J. Nott, Scott A. Sisson
      Pages: 34 - 48
      Abstract: Approximate Bayesian computation (ABC) methods, which are applicable when the likelihood is difficult or impossible to calculate, are an active topic of current research. Most current ABC algorithms directly approximate the posterior distribution, but an alternative, less common strategy is to approximate the likelihood function. This has several advantages. First, in some problems, it is easier to approximate the likelihood than to approximate the posterior. Second, an approximation to the likelihood allows reference analyses to be constructed based solely on the likelihood. Third, it is straightforward to perform sensitivity analyses for several different choices of prior once an approximation to the likelihood is constructed, which needs to be done only once. The contribution of the present paper is to consider regression density estimation techniques to approximate the likelihood in the ABC setting. Our likelihood approximations build on recently developed marginal adaptation density estimators by extending them for conditional density estimation. Our approach facilitates reference Bayesian inference, as well as frequentist inference. The method is demonstrated via a challenging problem of inference for stereological extremes, where we perform both frequentist and Bayesian inference. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-02-05T01:02:37.576056-05:
      DOI: 10.1002/sta4.15
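A bare-bones rejection-ABC sketch, illustrating only the general ABC setting the abstract starts from, not the paper's regression density estimation of the likelihood. The prior, summary statistic and tolerance below are all invented for the example:

```python
import random
import statistics

def abc_rejection(observed_mean, n_obs, prior, simulate, eps,
                  n_draws, seed=4):
    """Basic rejection ABC: keep prior draws whose simulated summary
    statistic falls within eps of the observed one."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = prior(rng)
        if abs(simulate(theta, n_obs, rng) - observed_mean) <= eps:
            kept.append(theta)
    return kept

# toy model: data ~ N(theta, 1), flat prior on theta, sample mean
# as the summary statistic
prior = lambda rng: rng.uniform(-5, 5)
simulate = lambda th, n, rng: statistics.fmean(
    rng.gauss(th, 1) for _ in range(n))

posterior = abc_rejection(observed_mean=2.0, n_obs=50, prior=prior,
                          simulate=simulate, eps=0.3, n_draws=5000)
post_mean = statistics.fmean(posterior)  # concentrates near 2.0
```

Approximating the likelihood rather than the posterior, as the paper does, would let the expensive simulation step be done once and then reused across different priors.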
       
  • Metrics for SiZer map comparison
    • Authors: Jan Hannig; Thomas C.M. Lee, Cheolwoo Park
      Pages: 49 - 60
Abstract: SiZer is a powerful visualization tool for uncovering real structures masked in noisy data. It produces a two‐dimensional plot, the so‐called SiZer map, to help the data analyst to carry out this task. Since its first proposal, many different extensions and improvements have been developed, including robust SiZer, quantile SiZer, and various SiZers for time series data, just to name a few. Given these many SiZer variants, one important question is, how can one evaluate the quality of a SiZer map produced by any one of these variants? The primary goal of this article is to answer this question by proposing two metrics for quantifying the discrepancy between any two SiZer maps. With such metrics, one can systematically calculate the distance between a “true” SiZer map and a SiZer map produced by any one of the SiZer variants. Consequently, one can select a “best” SiZer variant for the problem at hand by selecting the variant that produces SiZer maps that are, on average, closest to the “true” SiZer map. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-02-10T21:38:35.408827-05:
      DOI: 10.1002/sta4.17
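For contrast with the article's proposed metrics, a naive baseline comparison of two hypothetical categorical SiZer maps (+1 significantly increasing, −1 significantly decreasing, 0 flat) might look like:

```python
def map_disagreement(map_a, map_b):
    """Naive baseline metric: fraction of (location, scale) cells
    where two categorical SiZer maps disagree."""
    cells = [(i, j) for i in range(len(map_a))
                    for j in range(len(map_a[0]))]
    return sum(map_a[i][j] != map_b[i][j] for i, j in cells) / len(cells)

# hypothetical 2-scale x 4-location maps: a "true" map and an estimate
true_map = [[1, 1, 0, -1],
            [1, 0, 0, -1]]
est_map  = [[1, 0, 0, -1],
            [1, 0, -1, -1]]
d = map_disagreement(true_map, est_map)  # 2 of 8 cells disagree
```

The metrics in the paper are more refined than this cell-wise count, but the usage pattern is the same: score each SiZer variant by its average distance to the "true" map.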
       
  • Variational inference for marginal longitudinal semiparametric regression
    • Authors: Marianne Menictas; Matt P. Wand
      Pages: 61 - 71
      Abstract: We derive a variational inference procedure for approximate Bayesian inference in marginal longitudinal semiparametric regression. Fitting and inference is much faster than existing Markov chain Monte Carlo approaches. Numerical studies indicate that the new methodology is very accurate for the class of models under consideration. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-02-11T02:19:06.447722-05:
      DOI: 10.1002/sta4.18
       
  • A simple multiply robust estimator for missing response problem
    • Authors: Kwun Chuen Gary Chan
      Pages: 143 - 149
Abstract: A multiply robust estimator for a missing response problem was recently proposed that is more robust than doubly robust estimators proposed in the literature. Its formulation is based on empirical likelihood, which solves an implicit Lagrangian equation and often encounters computational problems such as multiple roots or nonconvergence. An alternative multiply robust estimator is proposed, which is computed by least squares and can be implemented easily in practice. We show that this multiply robust estimator is locally semiparametric efficient. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-07-08T04:58:20.116249-05:
      DOI: 10.1002/sta4.24
       
  • Modeling of the learning process in centipede games
    • Authors: Anton H. Westveld; Peter D. Hoff
      Pages: 242 - 254
      Abstract: According to classic game theory, individuals playing a centipede game learn about the subgame perfect Nash equilibrium via repeated play of the game. We employ statistical modeling to evaluate the evidence of such learning processes while accounting for the substantial within‐player correlation observed for the players’ decisions and rates of learning. We determine the probabilities of players’ choices through a quantal response equilibrium. Our statistical approach additionally (i) relaxes the assumption of players’ a priori global knowledge of opponents’ strategies, (ii) incorporates within‐subject dependency through random effects, and (iii) allows players’ decision probabilities to change with repeated play through an explicit covariate. Hence, players’ tendencies to correctly assess the utility of decisions are allowed to evolve over the course of the game, and both adaptive behavior as one accrues experience and the difference in this behavior between players are appropriately reflected by the model. Copyright © 2013 John Wiley & Sons, Ltd.
      PubDate: 2013-11-11T02:59:11.994411-05:
      DOI: 10.1002/sta4.32
       
  • Wiley‐Blackwell Announces Launch of Stat – The ISI's Journal
           for the Rapid Dissemination of Statistics Research
    • Pages: n/a - n/a
      PubDate: 2012-04-17T04:34:14.600281-05:
      DOI: 10.1002/sta4.1
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 
JournalTOCs © 2009-2014