
Publisher: Springer-Verlag (Total: 2355 journals)

 AStA Advances in Statistical Analysis   [SJR: 0.681]   [H-I: 15]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 1863-818X - ISSN (Online) 1863-8171   Published by Springer-Verlag
• Guest editors’ introduction to the special issue on
“Ecological Statistics”
• Authors: Roland Langrock; David L. Borchers
Pages: 345 - 347
PubDate: 2017-10-01
DOI: 10.1007/s10182-017-0307-2
Issue No: Vol. 101, No. 4 (2017)

• Some applications of genetics in statistical ecology
• Authors: R. M. Fewster
Pages: 349 - 379
Abstract: Genetic data are in widespread use in ecological research, and an understanding of this type of data and its uses and interpretations will soon be an imperative for ecological statisticians. Here, we provide an introduction to the subject, intended for statisticians who have no previous knowledge of genetics. Although there are numerous types of genetic data, we restrict attention to multilocus genotype data from microsatellite loci. We look at two application areas in wide use: investigating population structure using genetic assignment and related techniques; and using genotype data in capture–recapture studies for estimating population size and demographic parameters. In each case, we outline the conceptual framework and draw attention to both the strengths and weaknesses of existing approaches to analysis and interpretation.
PubDate: 2017-10-01
DOI: 10.1007/s10182-016-0273-0
Issue No: Vol. 101, No. 4 (2017)

• Species occupancy estimation and imperfect detection: shall surveys
continue after the first detection?
• Authors: Gurutzeta Guillera-Arroita; José J. Lahoz-Monfort
Pages: 381 - 398
Abstract: Species occupancy, the proportion of sites occupied by a species, is a state variable of interest in ecology. One challenge in its estimation is that detection is often imperfect in wildlife surveys. As a consequence, occupancy models that explicitly describe the observation process are becoming widely used in the discipline. These models require data that are informative about species detectability. Such information is often obtained by conducting repeat surveys at sampling sites. One strategy is to survey each site a predefined number of times, regardless of whether the species is detected. Alternatively, one can stop surveying a site once the species is detected and reallocate the effort saved to surveying new sites. In this paper we evaluate the merits of these two general design strategies under a range of realistic conditions. We conclude that continuing surveys after detection is beneficial unless the cumulative probability of detection at occupied sites is close to one, and that the benefits are greater when the sample size is small. Since detectability and sample size tend to be small in ecological applications, our recommendation is to follow a strategy where at least some of the sites continue to be sampled after first detection.
PubDate: 2017-10-01
DOI: 10.1007/s10182-017-0292-5
Issue No: Vol. 101, No. 4 (2017)
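
A minimal sketch of the quantity driving the trade-off this abstract describes: the cumulative probability of detecting the species at an occupied site over k surveys, together with a toy simulator of the fixed-effort design. This is illustrative only, not the authors' code; the function names and the independence assumptions are ours.

```python
import random

def cumulative_detection(p, k):
    """P(at least one detection in k surveys of an occupied site),
    assuming independent surveys with per-survey detectability p."""
    return 1 - (1 - p) ** k

def simulate_fixed_design(psi, p, k, n_sites, seed=0):
    """Detection histories under a fixed design: every site surveyed k times,
    regardless of whether the species has already been detected there."""
    rng = random.Random(seed)
    histories = []
    for _ in range(n_sites):
        occupied = rng.random() < psi  # site is occupied with probability psi
        histories.append([int(occupied and rng.random() < p) for _ in range(k)])
    return histories
```

With p = 0.5 and k = 2 the cumulative detection probability is only 0.75, well short of one, which is the regime in which the paper finds continued surveying after first detection beneficial.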

• From distance sampling to spatial capture–recapture
• Authors: David L. Borchers; Tiago A. Marques
Pages: 475 - 494
Abstract: Distance sampling and capture–recapture are the two most widely used wildlife abundance estimation methods. Capture–recapture methods have only recently incorporated models for spatial distribution, and there is an increasing tendency for distance sampling methods to incorporate spatial models rather than to rely on partly design-based spatial inference. In this overview we show how spatial models are central to modern distance sampling and that spatial capture–recapture models arise as an extension of distance sampling methods. Depending on the type of data recorded, they can be viewed as particular kinds of hierarchical binary regression, Poisson regression, survival or time-to-event models, with individuals’ locations as latent variables and a spatial model as the latent variable distribution. Incorporation of spatial models in these two methods provides new opportunities for drawing explicitly spatial inferences. Areas of likely future development include more sophisticated spatial and spatio-temporal modelling of individuals’ locations and movements, new methods for integrating spatial capture–recapture and other kinds of ecological survey data, and methods for dealing with the recapture uncertainty that often arises when “capture” consists of detection by a remote device such as a camera trap or microphone.
PubDate: 2017-10-01
DOI: 10.1007/s10182-016-0287-7
Issue No: Vol. 101, No. 4 (2017)
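
A common ingredient linking the two methods discussed here is a detection function that decays with distance. The half-normal form below is a standard textbook choice in both distance sampling (distance from the transect) and spatial capture–recapture (distance from a trap); the sketch and its name are illustrative, not taken from the paper.

```python
import math

def halfnormal_detection(d, sigma):
    """Half-normal detection probability at distance d from the observer
    (distance sampling) or trap (spatial capture-recapture); sigma controls
    how quickly detectability falls off with distance."""
    return math.exp(-d * d / (2.0 * sigma * sigma))
```

At d = 0 detection is certain; at d = sigma it has dropped to exp(-1/2), roughly 0.61.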

• A test for the global minimum variance portfolio for small sample and
singular covariance
• Authors: Taras Bodnar; Stepan Mazur; Krzysztof Podgórski
Pages: 253 - 265
Abstract: Recently, a test dealing with the linear hypothesis for the global minimum variance portfolio weights was obtained under the assumption of a non-singular covariance matrix. However, the problem of potential multicollinearity and correlations of assets constitutes a limitation of classical portfolio theory. There is therefore interest in developing the theory in the presence of singularities in the covariance matrix. In this paper, we extend the test by analyzing the portfolio weights in the small-sample case with a singular population covariance matrix. The results are illustrated using actual stock returns, and a discussion of the practical relevance of the model is presented.
PubDate: 2017-07-01
DOI: 10.1007/s10182-016-0282-z
Issue No: Vol. 101, No. 3 (2017)
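
For orientation, the global minimum variance weights are w = Σ⁻¹1 / (1′Σ⁻¹1); the singular setting of this paper replaces the inverse with a generalized (e.g. Moore–Penrose) inverse. The toy sketch below, with an explicit 2×2 inverse and an illustrative function name of our own, covers only the non-singular baseline.

```python
def gmv_weights_2x2(sigma):
    """Global minimum variance weights w = S^{-1} 1 / (1' S^{-1} 1)
    for a non-singular 2x2 covariance matrix S (explicit inverse)."""
    (a, b), (c, d) = sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    sinv1 = [row[0] + row[1] for row in inv]  # S^{-1} applied to the ones vector
    total = sum(sinv1)                        # 1' S^{-1} 1
    return [v / total for v in sinv1]
```

For a diagonal covariance diag(2, 1) the less volatile asset receives twice the weight: w = (1/3, 2/3).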

• Test for model selection using Cramér–von Mises distance in a fixed
design regression setting
• Authors: Hong Chen; Maik Döring; Uwe Jensen
Abstract: In this paper a test for model selection is proposed which extends the usual goodness-of-fit test in several ways. First, it is assumed that the underlying distribution H depends on a covariate value in a fixed design setting. Second, instead of one parametric class we consider two competing classes, one of which may contain the underlying distribution. The test selects whichever of the two equally treated model classes fits the underlying distribution better. Various measures are available to define the distance between distributions; here the Cramér–von Mises distance has been chosen. The null hypothesis that both parametric classes have the same distance to the underlying distribution H can be checked by means of a test statistic, the asymptotic properties of which are shown under a set of suitable conditions. The performance of the test is demonstrated by Monte Carlo simulations. Finally, the procedure is applied to a data set from an endurance test on electric motors.
PubDate: 2017-12-29
DOI: 10.1007/s10182-017-0317-0
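
As a reminder of the distance involved, here is the classical one-sample Cramér–von Mises statistic against a hypothesised CDF; the paper's test compares two such distances, one per parametric class. The helper below is our own illustrative sketch, not the authors' statistic.

```python
def cramer_von_mises(sample, cdf):
    """One-sample Cramér-von Mises statistic: n times the integrated squared
    difference between the empirical CDF of `sample` and the model CDF,
    computed via the standard order-statistics formula."""
    xs = sorted(sample)
    n = len(xs)
    return 1.0 / (12.0 * n) + sum(
        (cdf(x) - (2 * i + 1) / (2.0 * n)) ** 2 for i, x in enumerate(xs))
```

For a sample that matches the hypothesised CDF's quantiles exactly, only the 1/(12n) floor remains.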

• Estimating the hazard functions of two alternating recurrent events in the
presence of covariates
• Authors: Moumita Chatterjee; Sugata Sen Roy
Abstract: The motivation for this paper is a cystic fibrosis data set which records a patient’s times to relapse and times to cure under several recurrences of the disease. The idea is to study the impact of covariates on the hazard rates of two alternately occurring events. The dependence between the times to the two events over the different cycles is modeled through an autoregressive-type setup. The partial likelihood function is then derived and the estimators obtained. The estimators are shown to be consistent and asymptotically normal. The technique is applied to the motivating data. A simulation study is also conducted to corroborate the results.
PubDate: 2017-12-21
DOI: 10.1007/s10182-017-0316-1

• Advances in estimation by the item sum technique using auxiliary
information in complex surveys
• Authors: María del Mar García Rueda; Pier Francesco Perri; Beatriz Rodríguez Cobo
Abstract: To collect sensitive data, survey statisticians have designed many strategies to reduce nonresponse rates and social desirability response bias. In recent years, the item count technique has gained considerable popularity and credibility as an alternative mode of indirect questioning survey, and several variants of this technique have been proposed as new needs and challenges arise. The item sum technique (IST), introduced by Chaudhuri and Christofides (Indirect questioning in sample surveys, Springer-Verlag, Berlin, 2013) and Trappmann et al. (J Surv Stat Methodol 2:58–77, 2014), is one such variant, used to estimate the mean of a sensitive quantitative variable. In this approach, sampled units are asked to respond to one of two lists of items, one of which contains a sensitive question related to the study variable alongside various innocuous, nonsensitive questions. To the best of our knowledge, very few theoretical and applied papers have addressed the IST. In this article, therefore, we present certain methodological advances as a contribution to appraising the use of the IST in real-world surveys. In particular, we employ a generic sampling design to examine the problem of how to improve the estimates of the sensitive mean when auxiliary information on the population under study is available and is used at the design and estimation stages. A Horvitz–Thompson-type estimator and a calibration-type estimator are proposed and their efficiency is evaluated by means of an extensive simulation study. Using simulation experiments, we show that estimates obtained by the IST are nearly equivalent to those obtained using “true data” and that in general they outperform the estimates provided by a competitive randomized response method. Moreover, variance estimation may be considered satisfactory. These results open up new perspectives for academics, researchers and survey practitioners and could justify the use of the IST as a valid alternative to traditional direct questioning survey modes.
PubDate: 2017-12-09
DOI: 10.1007/s10182-017-0315-2
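
The basic IST logic, before the Horvitz–Thompson and calibration refinements this paper develops, is a difference in means: one sample reports the total over the long list (sensitive item plus innocuous items), the other the total over the short list (innocuous items only). A sketch under simple random sampling, with illustrative names of our own:

```python
def item_sum_estimate(long_list_totals, short_list_totals):
    """Difference-in-means IST estimator: the long-list sample reports the sum
    of the sensitive item and the innocuous items, the short-list sample the
    innocuous items only, so the mean difference estimates the sensitive mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(long_list_totals) - mean(short_list_totals)
```

Because each respondent reveals only a total, the sensitive value of any individual stays masked.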

• The estimations under power normalization for the tail index, with
comparison
• Authors: H. M. Barakat; E. M. Nigm; O. M. Khaled; H. A. Alaswed
Abstract: It is well known that the max-stable laws under power normalization attract more distributions than those under linear normalization. In practice, this means that the classical linear model (L-model) may fail to fit given extreme data where the power model (P-model) succeeds. The main objective of this paper is to develop the modeling of extreme values via the P-model by suggesting a simple technique that yields, for every known estimator of the extreme value index (EVI) in the L-model, a parallel estimator in the P-model. An application of this technique yields two classes of moment and moment ratio estimators for the EVI in the P-model. The performances of these estimators are assessed via a simulation study. Moreover, an efficient criterion for comparing the L- and P-models is proposed to choose the better model when both work.
PubDate: 2017-12-05
DOI: 10.1007/s10182-017-0314-3
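
For reference, one of the best-known L-model EVI estimators to which such P-model parallels can be built is the Hill estimator, based on log-spacings of the top order statistics. The sketch below shows only this classical baseline, not the paper's P-model construction.

```python
import math

def hill_estimator(sample, k):
    """Classical Hill estimator of the extreme value index (the L-model
    baseline): average log-excess of the k largest observations over the
    (k+1)-th largest."""
    xs = sorted(sample, reverse=True)  # descending order statistics
    threshold = xs[k]                  # the (k+1)-th largest observation
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k
```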

• An exact method for the multiple comparison of several polynomial
regression models with applications in dose-response study
• Authors: Sanyu Zhou
Abstract: Research on multiple comparisons over the past 60 years or so has focused mainly on the comparison of several population means. Spurrier (J Am Stat Assoc 94:483–488, 1999) and Liu et al. (J Am Stat Assoc 99:395–403, 2004) considered the multiple comparison of several linear regression lines, assuming no functional relationship between the predictor variables. For polynomial regression models, such a functional relationship does exist, and failing to exploit it can have undesirable consequences. In this article we introduce an exact method for the multiple comparison of several polynomial regression models. The method takes full advantage of the structure of the polynomial regression model and can therefore compute the critical constant quickly and accurately. It allows various types of comparisons, including pairwise, many-to-one and successive, and it allows the predictor variable to be either unconstrained or constrained to a finite interval. Examples from dose-response studies are used to illustrate the method. MATLAB programs have been written for easy implementation.
PubDate: 2017-11-30
DOI: 10.1007/s10182-017-0313-4

• Optimal designs for treatment comparisons represented by graphs
• Authors: Samuel Rosa
Abstract: Consider an experiment for comparing a set of treatments: in each trial, one treatment is chosen and its effect determines the mean response of the trial. We examine the optimal approximate designs for the estimation of a system of treatment contrasts under this model. These designs can be used to provide optimal treatment proportions in more general models with nuisance effects. For any system of pairwise treatment comparisons, we propose to represent such a system by a graph. We then represent the designs by the inverses of the vertex weights in the corresponding graph and show that the values of the eigenvalue-based optimality criteria can be expressed using the Laplacians of the vertex-weighted graphs. We provide a graph-theoretic interpretation of D-, A- and E-optimality for estimating sets of pairwise comparisons, and we apply the obtained graph representation to provide optimality results for these criteria as well as for ‘symmetric’ systems of treatment contrasts.
PubDate: 2017-11-11
DOI: 10.1007/s10182-017-0312-5
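
The standard weighted-graph Laplacian L = D − W underlying such eigenvalue criteria can be built in a few lines; the paper works with vertex-weighted variants, whereas the illustrative sketch below shows the basic edge-weighted construction only.

```python
def graph_laplacian(n, weighted_edges):
    """Laplacian L = D - W of a weighted graph on vertices 0..n-1, where
    weighted_edges is a list of (i, j, weight) triples. Diagonal entries
    accumulate incident weights; off-diagonal entries are negated weights."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in weighted_edges:
        L[i][i] += w
        L[j][j] += w
        L[i][j] -= w
        L[j][i] -= w
    return L
```

Every row of a Laplacian sums to zero, which is why treatment contrasts (rather than the means themselves) are the estimable quantities.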

• Minimum phi-divergence estimators for multinomial logistic regression with
complex sample design
• Authors: Elena Castilla; Nirian Martín; Leandro Pardo
Abstract: This article develops the theoretical framework needed to study the multinomial regression model for complex sample designs with pseudo-minimum phi-divergence estimators. The numerical example and the simulation study propose new estimators for the parameters of logistic regression with overdispersed multinomial distributions for the response variables, the pseudo-minimum Cressie–Read divergence estimators, as well as new estimators for the intra-cluster correlation coefficient. The simulation study shows that Binder’s method for the intra-cluster correlation coefficient exhibits excellent performance when the pseudo-minimum Cressie–Read divergence estimator, with $$\lambda =\frac{2}{3}$$ , is plugged in.
PubDate: 2017-10-28
DOI: 10.1007/s10182-017-0311-6

• A penalized likelihood method for nonseparable space–time
• Authors: Ali M. Mosammam; Jorge Mateu
Abstract: In this paper, we study space–time generalized additive models. We apply the penalized likelihood method to fit generalized additive models (GAMs) to nonseparable spatio-temporal correlated data in order to improve the estimation of the response and of the smooth terms of GAMs. The results show that our space–time generalized additive models estimate the response and smooth terms reasonably well and that, in addition, the mean squared error, mean absolute deviation and coverage intervals improve considerably compared to the classic GAM. An application to particulate matter concentration in the North-Italian region of Piemonte is also presented.
PubDate: 2017-10-06
DOI: 10.1007/s10182-017-0309-0

• Uniqueness of characterization of absolutely continuous distributions by
regressions of generalized order statistics
• Authors: Mariusz Bieniek; Krystyna Maciąg
Abstract: We provide a new approach to the problem of the unique identification of distributions with a continuous density by a single regression function of order statistics or record values or, more generally, generalized order statistics. Using their Markov property, we show that the characterization is unique if and only if the corresponding system of differential equations has a unique solution. This result is new even in the particular case of ordinary order statistics. The approach provides a new proof of the characterization of power, exponential and Pareto distributions by linearity of the corresponding regression, and it also yields new examples of characterizations of distributions.
PubDate: 2017-10-06
DOI: 10.1007/s10182-017-0310-7

• Measuring temporal trends in biodiversity
• Authors: S. T. Buckland; Y. Yuan; E. Marcon
Abstract: In 2002, nearly 200 nations signed up to the 2010 target of the Convention on Biological Diversity, ‘to significantly reduce the rate of biodiversity loss by 2010’. To assess whether the target was met, it became necessary to quantify temporal trends in measures of diversity. This resulted in a marked shift in focus for biodiversity measurement. We explore the developments in measuring biodiversity that were prompted by the 2010 target. We consider measures based on species proportions, and explain why a geometric mean of relative abundance estimates was preferred to such measures for assessing progress towards the target. We look at the use of diversity profiles, and consider how species similarity can be incorporated into diversity measures. We also discuss measures of turnover that can be used to quantify shifts in community composition arising, for example, from climate change.
PubDate: 2017-08-12
DOI: 10.1007/s10182-017-0308-1
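
The geometric mean of relative abundance estimates mentioned in the abstract is straightforward to compute; a minimal sketch (illustrative only, assuming each species' abundance has been scaled so the baseline year equals 1):

```python
import math

def geometric_mean_index(relative_abundances):
    """Geometric mean of species' relative abundance estimates, computed on
    the log scale; each abundance is relative to a baseline year (value 1)."""
    n = len(relative_abundances)
    return math.exp(sum(math.log(a) for a in relative_abundances) / n)
```

A useful property for trend assessment: a doubling of one species exactly offsets a halving of another, so the index is insensitive to which species are rare and which are common.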

• Variance estimation for integrated population models
• Authors: Panagiotis Besbeas; Byron J. T. Morgan
Abstract: State-space models are widely used in ecology. However, it is well known that in practice it can be difficult to estimate both the process and observation variances that occur in such models. We consider this issue for integrated population models, which incorporate state-space models for population dynamics. To some extent, the mechanism of integrated population models protects against this problem, but it can still arise; two illustrations are provided, in each of which the observation variance is estimated as zero. In the context of an extended case study involving data on British grey herons, we consider alternative approaches for dealing with the problem when it occurs. In particular, we consider penalised likelihood, a method based on fitting splines and a method of pseudo-replication, which is undertaken via a simple bootstrap procedure. For the case study of the paper, it is shown that, when it occurs, an estimate of zero observation variance is unimportant for inference relating to the model parameters of primary interest. This unexpected finding is supported by a simulation study.
PubDate: 2017-08-07
DOI: 10.1007/s10182-017-0304-5

• First-order random coefficients integer-valued threshold autoregressive
processes
• Authors: Han Li; Kai Yang; Shishun Zhao; Dehui Wang
Abstract: In this paper, we introduce a first-order random coefficient integer-valued threshold autoregressive process based on binomial thinning. Basic probabilistic and statistical properties of this model are discussed. Conditional least squares and conditional maximum likelihood estimators are derived for both cases, where the threshold variable is known and where it is not. The asymptotic properties of the estimators are established, and the forecasting problem is addressed. Finally, some numerical results of the estimates and a real data example are presented.
PubDate: 2017-07-29
DOI: 10.1007/s10182-017-0306-3
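
To make the binomial-thinning mechanism concrete, here is a simplified simulator of a first-order threshold INAR process. It fixes the thinning coefficient within each regime (the paper's model makes the coefficients random), and all names are illustrative rather than taken from the paper.

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson(lam) innovation via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def binomial_thinning(alpha, x, rng):
    """alpha o x: each of the x counts survives independently with
    probability alpha, giving a Binomial(x, alpha) survivor count."""
    return sum(rng.random() < alpha for _ in range(x))

def simulate_threshold_inar(n, alpha1, alpha2, r, lam, seed=0):
    """First-order threshold INAR path: the thinning coefficient switches
    between alpha1 and alpha2 according to whether the previous count
    exceeds the threshold r; innovations are Poisson(lam)."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(n):
        alpha = alpha1 if x <= r else alpha2
        x = binomial_thinning(alpha, x, rng) + poisson_draw(lam, rng)
        path.append(x)
    return path
```

Binomial thinning keeps the state integer-valued, which is the point of INAR-type models for count time series.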

• A distance-based model for spatial prediction using radial basis functions
• Authors: Carlos E. Melo; Oscar O. Melo; Jorge Mateu
Abstract: In the context of local interpolators, radial basis functions (RBFs) are known to reduce computational time by using a subset of the data for prediction. In this paper, we propose a new distance-based spatial RBF method which allows modeling spatially continuous random variables. The trend is incorporated into an RBF according to a detrending procedure with mixed variables, among which there may be categorical variables. In order to evaluate the efficiency of the proposed method, a simulation study is carried out for a variety of practical scenarios for five distinct RBFs, incorporating principal coordinates. Finally, the proposed method is illustrated with an application to the prediction of calcium concentration measured at a depth of 0–20 cm in Brazil, with the smoothing parameter selected by cross-validation.
PubDate: 2017-07-26
DOI: 10.1007/s10182-017-0305-4

• Improving the usability of spatial point process methodology: an
interdisciplinary dialogue between statistics and ecology
• Authors: Janine B. Illian; David F. R. P. Burslem
Abstract: The last few decades have seen an increasing interest and strong development in spatial point process methodology, and freely available software that facilitates model fitting has made these approaches much more accessible to users. However, the ecological user community has been slow to pick up the methodology despite its obvious relevance to the field. This paper reflects on this development, highlighting the mutual benefits of interdisciplinary dialogue for both statistics and ecology. We detail the contribution point process methodology has made to research on biodiversity theory as a result of this dialogue and reflect on reasons for the slow take-up of the methodology. This primarily concerns the current lack of consideration of the usability of the approaches, which we discuss in detail, presenting current discussions as well as indicating future directions.
PubDate: 2017-07-14
DOI: 10.1007/s10182-017-0301-8

• Statistical modelling of individual animal movement: an overview of key
methods and a discussion of practical challenges
• Authors: Toby A. Patterson; Alison Parton; Roland Langrock; Paul G. Blackwell; Len Thomas; Ruth King
Abstract: With the influx of complex and detailed tracking data gathered from electronic tracking devices, the analysis of animal movement data has recently emerged as a cottage industry among biostatisticians, and new approaches of ever greater complexity continue to be added to the literature. In this paper, we review what we believe to be some of the most popular and most useful classes of statistical models used to analyse individual animal movement data. Specifically, we consider discrete-time hidden Markov models, more general state-space models and diffusion processes. We argue that these models should be core components in the toolbox of quantitative researchers working on stochastic modelling of individual animal movement. The paper concludes by offering some general observations on the direction of statistical analysis of animal movement. There is a trend in movement ecology towards arguably overly complex modelling approaches which are inaccessible to ecologists, unwieldy with large data sets or not based on mainstream statistical practice. Additionally, some analysis methods developed within the ecological community ignore fundamental properties of movement data, potentially leading to misleading conclusions about animal movement. Corresponding approaches, e.g. those based on Lévy walk-type models, continue to be popular despite having been largely discredited. We contend that there is a need for an appropriate balance between the extremes of being overly complex and being overly simplistic, whereby the discipline relies on models of intermediate complexity that are usable by general ecologists, grounded in well-developed statistical practice and efficient to fit to large data sets.
PubDate: 2017-07-04
DOI: 10.1007/s10182-017-0302-7
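
The workhorse for fitting the discrete-time hidden Markov models mentioned in this review is the forward algorithm, which evaluates the likelihood of an observation sequence in a single pass. A minimal, generic sketch (our own illustrative code, not the authors'), using scaling to avoid numerical underflow on long tracks:

```python
import math

def hmm_forward_loglik(obs_probs, trans, init):
    """Log-likelihood of an observation sequence under a discrete-time HMM
    via the scaled forward algorithm. obs_probs[t][i] is the density of
    observation t under state i; trans is the state transition matrix;
    init is the initial state distribution."""
    m = len(init)
    alpha = [init[i] * obs_probs[0][i] for i in range(m)]
    loglik = 0.0
    for t in range(1, len(obs_probs)):
        c = sum(alpha)                 # scaling constant at time t-1
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[i] * trans[i][j] for i in range(m)) * obs_probs[t][j]
                 for j in range(m)]
    return loglik + math.log(sum(alpha))
```

In movement applications the states typically represent behavioural modes (e.g. resting vs. travelling) and the observations are step lengths and turning angles; maximising this log-likelihood over the model parameters fits the HMM directly, without data augmentation.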

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327