Authors: N. Balakrishnan; M. V. Koutras; F. S. Milienos
Pages: 717–735
Abstract: In start-up demonstration testing, the performance of the unit on successive start-ups is taken into account, and several different types of decision criteria (most of them inspired by the theory of runs and scans) for accepting or rejecting the unit have been introduced. Although the use of a start-up demonstration test presupposes the existence of units of lower quality, much work remains to be done on estimating the corresponding probability. Therefore, in this paper, we study binary start-up demonstration tests, assuming that we have at hand two different types of units with potentially different probabilities of successful start-up. In this case, the waiting time distributions are expressed as two-component mixture models and their identifiability is discussed. Finally, an estimation method based on the EM algorithm for the model parameters is described and some numerical examples are presented to illustrate the methods developed here.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0569-6
Issue No: Vol. 69, No. 4 (2017)
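The two-component mixture idea can be illustrated with a deliberately simplified stand-in for the paper's model: a mixture of two geometric waiting-time distributions (one per unit type) fitted by EM. All numerical values below (mixing weight 0.7, start-up success probabilities 0.9 and 0.3) are illustrative assumptions, not the paper's example.

```python
import math
import random

def rgeom(p, rng):
    """Number of Bernoulli(p) trials up to and including the first success."""
    return int(math.log(rng.random()) / math.log(1.0 - p)) + 1

def em_two_geometric(xs, n_iter=200):
    """EM for a two-component mixture of geometric distributions on {1, 2, ...}."""
    w, p1, p2 = 0.5, 0.8, 0.2                     # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from component 1
        r = []
        for x in xs:
            f1 = w * p1 * (1.0 - p1) ** (x - 1)
            f2 = (1.0 - w) * p2 * (1.0 - p2) ** (x - 1)
            r.append(f1 / (f1 + f2))
        # M-step: weighted geometric MLEs, p_j = (sum of weights) / (weighted sum of x)
        s1 = sum(r)
        p1 = s1 / sum(ri * x for ri, x in zip(r, xs))
        p2 = (len(xs) - s1) / sum((1.0 - ri) * x for ri, x in zip(r, xs))
        w = s1 / len(xs)
    return w, p1, p2

rng = random.Random(1)
# 70% "good" units with start-up success probability 0.9, 30% "bad" with 0.3
xs = [rgeom(0.9 if rng.random() < 0.7 else 0.3, rng) for _ in range(5000)]
w, p1, p2 = em_two_geometric(xs)
```

With 5000 simulated units the fitted weight and success probabilities land close to the generating values; the actual waiting-time distributions in the paper (run/scan-based acceptance criteria) are more involved, but the E-step/M-step alternation is the same.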

Authors: Darinka Dentcheva; Spiridon Penev; Andrzej Ruszczyński
Pages: 737–760
Abstract: We address the statistical estimation of composite functionals which may be nonlinear in the probability measure. Our study is motivated by the need to estimate coherent measures of risk, which have become increasingly popular in finance, insurance, and other areas associated with optimization under uncertainty and risk. We establish central limit theorems for composite risk functionals. Furthermore, we discuss the asymptotic behavior of optimization problems whose objectives are composite risk functionals, and we establish a central limit formula for their optimal values when an estimator of the risk functional is used. While the mathematical structures accommodate commonly used coherent measures of risk, they are of a more general character, which may be of independent interest.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0559-8
Issue No: Vol. 69, No. 4 (2017)
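The benchmark coherent risk measure in this setting is the Average Value-at-Risk (AVaR, also called CVaR or expected shortfall), whose plug-in estimator is simply the average of the worst α-fraction of observed losses. A minimal sketch of that estimator (data illustrative):

```python
import math

def avar(losses, alpha):
    """Empirical Average Value-at-Risk at level alpha: the mean of the worst
    alpha-fraction of losses (the Rockafellar-Uryasev representation
    evaluated at the empirical distribution)."""
    xs = sorted(losses, reverse=True)
    k = alpha * len(xs)          # (possibly fractional) number of tail observations
    m = int(math.floor(k))
    # sum of the m largest losses plus the fractional piece of the next one
    total = sum(xs[:m]) + (k - m) * (xs[m] if m < len(xs) else 0.0)
    return total / k

losses = [1.0, 2.0, 3.0, 4.0]
print(avar(losses, 0.5))   # mean of the two worst losses: (4 + 3) / 2 = 3.5
```

The paper's results concern what happens when such plug-in estimators are composed and optimized over; the estimator above is just the simplest member of the class.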

Authors: Weihua Zhao; Riquan Zhang; Yazhao Lv; Jicai Liu
Pages: 761–789
Abstract: In this paper, a minimizing average check loss estimation (MACLE) procedure is proposed for the single-index coefficient model (SICM) in the framework of quantile regression (QR). The resulting estimators are asymptotically normal and achieve the optimal convergence rate. Furthermore, a variable selection method is investigated for the QR SICM by combining the MACLE method with the adaptive LASSO penalty, and we also establish the oracle property of the proposed variable selection method. Extensive simulations are conducted to assess the finite sample performance of the proposed estimation and variable selection procedures under various error settings. Finally, we present a real-data application of the proposed approach.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0558-9
Issue No: Vol. 69, No. 4 (2017)
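The check (pinball) loss underlying any "average check loss" criterion is ρ_τ(u) = u(τ − 1{u < 0}), and in the simplest possible case (a constant fit), minimizing the average check loss recovers the sample τ-quantile. A brute-force sketch, exploiting the fact that the minimum is attained at a data point:

```python
def check_loss(u, tau):
    """Quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def argmin_constant(ys, tau):
    """Constant c minimizing the average check loss over the sample;
    the minimum is always attained at one of the observations."""
    return min(ys, key=lambda c: sum(check_loss(y - c, tau) for y in ys))

ys = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]
print(argmin_constant(ys, 0.5))   # the sample median, 3.0
```

The MACLE procedure in the paper minimizes the same loss over the single-index coefficient structure rather than over a constant, but the loss function is this one.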

Authors: Samuel Vaiter; Charles Deledalle; Jalal Fadili; Gabriel Peyré; Charles Dossal
Pages: 791–832
Abstract: We study regularized regression problems where the regularizer is a proper, lower-semicontinuous, convex and partly smooth function relative to a Riemannian submanifold. This encompasses several popular examples including the Lasso, the group Lasso, the max and nuclear norms, as well as their composition with linear operators (e.g., total variation or fused Lasso). Our main sensitivity analysis result shows that the predictor moves locally stably along the same active submanifold as the observations undergo small perturbations. This plays a pivotal role in getting a closed-form expression for the divergence of the predictor w.r.t. the observations. We also show that, for many regularizers, including polyhedral ones or the analysis group Lasso, this divergence formula holds Lebesgue a.e. When the perturbation is random (with an appropriate continuous distribution), this allows us to derive an unbiased estimator of the degrees of freedom and the prediction risk. Our results unify and go beyond those already known in the literature.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0563-z
Issue No: Vol. 69, No. 4 (2017)
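The best-known special case of such a divergence formula is the Lasso, where the divergence of the predictor equals the number of nonzero coefficients. In the simplest setting, an identity design (used here purely for illustration), the Lasso predictor reduces to coordinatewise soft-thresholding and the degrees-of-freedom estimate is just the active-set size:

```python
def soft_threshold(y, lam):
    """Lasso solution for an identity/orthonormal design:
    componentwise soft-thresholding of the observations."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in y]

def df_hat(beta):
    """Divergence of the predictor w.r.t. the observations = size of the
    active set; an unbiased estimate of the degrees of freedom."""
    return sum(1 for b in beta if b != 0.0)

y = [3.0, -0.5, 1.2, 0.1, -2.0]
beta = soft_threshold(y, 1.0)
print(beta, df_hat(beta))   # three coefficients survive thresholding
```

The paper's contribution is that the same "count the dimension of the active piece" logic extends to any partly smooth regularizer, with the active set replaced by the active submanifold.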

Authors: Koji Tsukuda
Pages: 833–864
Abstract: A test procedure based on continuous observation to detect a change in drift parameters of an ergodic diffusion process is proposed. The asymptotic behavior of a random field relating to an estimating equation under the null hypothesis is established using weak convergence theory in separable Hilbert spaces. This result is applied to a change point detection test.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0564-y
Issue No: Vol. 69, No. 4 (2017)

Authors: Diane Donovan; Benjamin Haaland; David J. Nott
Pages: 865–878
Abstract: Sliced Sudoku-based space-filling designs and, more generally, quasi-sliced orthogonal array-based space-filling designs are useful experimental designs in several contexts, including computer experiments with categorical in addition to quantitative inputs and cross-validation. Here, we provide a straightforward construction of doubly orthogonal quasi-Sudoku Latin squares which can be used to generate quasi-sliced orthogonal arrays and, in turn, sliced space-filling designs which achieve uniformity in one- and two-dimensional projections for the full design and uniformity in two-dimensional projections for each slice. These constructions are very practical to implement and yield a spectrum of design sizes and numbers of factors not currently broadly available.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0565-x
Issue No: Vol. 69, No. 4 (2017)
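A much simpler relative of the doubly orthogonal quasi-Sudoku squares constructed in the paper: for prime order p, the cyclic squares L_a[i][j] = (a·i + j) mod p with a ≠ 0 are Latin, and any two with distinct multipliers are mutually orthogonal. That property is cheap to verify directly (p = 5 here is illustrative):

```python
def latin(a, p):
    """Cyclic Latin square of order p: entry (a*i + j) mod p; Latin for any a != 0 mod p."""
    return [[(a * i + j) % p for j in range(p)] for i in range(p)]

def is_latin(L):
    """Every symbol appears exactly once in each row and each column."""
    n, syms = len(L), set(range(len(L)))
    return all(set(row) == syms for row in L) and \
           all({L[i][j] for i in range(n)} == syms for j in range(n))

def are_orthogonal(L, M):
    """Orthogonal iff superimposing the squares yields all n^2 ordered pairs."""
    n = len(L)
    return len({(L[i][j], M[i][j]) for i in range(n) for j in range(n)}) == n * n

A, B = latin(1, 5), latin(2, 5)
print(is_latin(A), is_latin(B), are_orthogonal(A, B))  # True True True
```

Orthogonal pairs like (A, B) are the raw material for orthogonal arrays; the paper's quasi-Sudoku construction additionally imposes the block (Sudoku) structure needed for slicing.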

Authors: Hông Vân Lê
Pages: 879–896
Abstract: We define a mixed topology on the fiber space \(\cup _\mu \oplus ^n L^n(\mu )\) over the space \({\mathcal M}({\Omega })\) of all finite non-negative measures \(\mu \) on a separable metric space \({\Omega }\) equipped with the Borel \(\sigma \)-algebra. We define a notion of strong continuity of a covariant n-tensor field on \({\mathcal M}({\Omega })\). Under the assumption of strong continuity of an information metric, we prove the uniqueness of the Fisher metric as an information metric on statistical models associated with \({\Omega }\). Our proof realizes a suggestion due to Amari and Nagaoka to derive the uniqueness of the Fisher metric from the special case proved by Chentsov by using a special kind of limiting procedure. The result obtained extends the monotonicity characterization of the Fisher metric on statistical models associated with finite sample spaces and complements the uniqueness theorem by Ay–Jost–Lê–Schwachhöfer that characterizes the Fisher metric by its invariance under sufficient statistics.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0562-0
Issue No: Vol. 69, No. 4 (2017)

Authors: Shaogao Lv; Xin He; Junhui Wang
Pages: 897–923
Abstract: This paper focuses on the high-dimensional additive quantile model, allowing for both dimension and sparsity to increase with sample size. We propose a new sparsity-smoothness penalty over a reproducing kernel Hilbert space (RKHS), which includes linear functions and spline-based nonlinear functions as special cases. The combination of sparsity and smoothness is crucial for the asymptotic theory as well as for computational efficiency. Oracle inequalities on the excess risk of the proposed method are established under weaker conditions than most existing results. Furthermore, we develop a majorize-minimization forward splitting iterative algorithm (MMFIA) for efficient computation and investigate its numerical convergence properties. Numerical experiments are conducted on simulated and real data examples, which support the effectiveness of the proposed method.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0566-9
Issue No: Vol. 69, No. 4 (2017)

Authors: Fabian Mies; Stefan Bedbur
Pages: 925–944
Abstract: In parametric statistics, confidence bands for continuous distribution (quantile) functions may be constructed by unifying the graphs of all distribution (quantile) functions corresponding to parameters lying in some confidence region. It is then desirable that the coverage probabilities of both band and region coincide, e.g., to avoid wide and less informative bands or to transfer the property of unbiasedness; this is ensured if the confidence region is exhaustive. Properties and representations of exhaustive confidence regions are presented. In location-scale families, whether a given confidence region is exhaustive depends on the boundedness of the supports of the distributions in the family. For unbounded, one-sided bounded and bounded supports, characterizations of exhaustive confidence regions are derived. The results are useful for deciding whether the trapezoidal confidence regions based on the standard pivotal quantities are exhaustive, and may serve to construct exhaustive confidence regions in (log-)location-scale models.
PubDate: 2017-08-01
DOI: 10.1007/s10463-016-0570-0
Issue No: Vol. 69, No. 4 (2017)

Authors: Yong Kong
Pages: 497–512
Abstract: The distributions of the mth longest runs of multivariate random sequences are considered. For random sequences made up of k kinds of letters, the lengths of the runs are sorted in two ways to give two definitions of run length ordering. In one definition, the lengths of the runs are sorted separately for each letter type. In the second definition, the lengths of all the runs are sorted together. Exact formulas are developed for the distributions of the mth longest runs for both definitions. The derivations are based on a two-step method that is applicable to various other runs-related distributions, such as joint distributions of several letter types and multiple run lengths of a single letter type.
PubDate: 2017-06-01
DOI: 10.1007/s10463-015-0551-8
Issue No: Vol. 69, No. 3 (2017)
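For the simplest special case (k = 2 letters, m = 1, runs of a single letter type), exact run-length probabilities can be checked by brute force: among all 2^10 fair-coin sequences, exactly 520 contain a head-run of length at least 3 (the complement count 504 satisfies the familiar tribonacci recurrence). A sketch:

```python
from itertools import product

def longest_run(seq, letter):
    """Length of the longest run of `letter` in `seq`."""
    best = cur = 0
    for s in seq:
        cur = cur + 1 if s == letter else 0
        best = max(best, cur)
    return best

n = 10
count = sum(1 for seq in product("HT", repeat=n) if longest_run(seq, "H") >= 3)
print(count, 2 ** n)   # 520 out of 1024
```

The same enumeration extends to the paper's mth longest runs by collecting all run lengths of a sequence and sorting them (per letter type, or pooled across letter types, matching the two orderings studied).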

Authors: Michaela Prokešová; Jiří Dvořák; Eva B. Vedel Jensen
Pages: 513–542
Abstract: In the present paper, we discuss and compare several two-step estimation procedures for inhomogeneous shot-noise Cox processes. The intensity function is parametrized by the inhomogeneity parameters while the pair-correlation function is parametrized by the interaction parameters. The suggested procedures are based on a combination of Poisson likelihood estimation of the inhomogeneity parameters in the first step and an adaptation of a method from the homogeneous case for estimation of the interaction parameters in the second step. The adapted methods, based on minimum contrast estimation, composite likelihood and Palm likelihood, are compared both theoretically and by means of a simulation study. The general conclusion from the simulation study is that the three estimation methods have similar performance. Two-step estimation with Palm likelihood has not been considered before and is motivated by the superior performance of the Palm likelihood in the stationary case for estimation of certain parameters of interest. Asymptotic normality of the two-step estimator with Palm likelihood is proved.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0556-y
Issue No: Vol. 69, No. 3 (2017)

Authors: Dominique Fourdrinier; Fatiha Mezoued; William E. Strawderman
Pages: 543–570
Abstract: We consider Bayesian estimation of the location parameter \(\theta \) of a random vector X having a unimodal spherically symmetric density \(f(\Vert x - \theta \Vert ^2)\) for a spherically symmetric prior density \(\pi (\Vert \theta \Vert ^2)\). In particular, we consider minimaxity of the Bayes estimator \(\delta _\pi (X)\) under quadratic loss. When the distribution belongs to the Berger class, we show that minimaxity of \(\delta _\pi (X)\) is linked to the superharmonicity of a power of a marginal associated with a primitive of f. This leads to proper Bayes minimax estimators for certain densities \(f(\Vert x - \theta \Vert ^2)\).
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0553-1
Issue No: Vol. 69, No. 3 (2017)

Authors: Ke-Hai Yuan; Peter M. Bentler
Pages: 571–597
Abstract: In structural equation modeling (SEM), parameter estimates are typically computed by the Fisher-scoring algorithm, which often has difficulty in obtaining converged solutions. Even for simulated data with a correctly specified model, non-converged replications have been repeatedly reported in the literature. In particular, in Monte Carlo studies it has been found that larger factor loadings or smaller error variances in a confirmatory factor model correspond to a higher rate of convergence. However, studies of a ridge method in SEM indicate that adding a diagonal matrix to the sample covariance matrix also increases the rate of convergence for the Fisher-scoring algorithm. This article addresses these two seemingly contradictory phenomena. Using statistical and numerical analyses, the article clarifies why both approaches increase the rate of convergence in SEM. Monte Carlo results confirm the analytical results. Recommendations are provided on how to increase both the speed and rate of convergence in parameter estimation.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0552-2
Issue No: Vol. 69, No. 3 (2017)
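The numerical side of the ridge idea is easy to see: adding a·I to a positive semidefinite sample covariance matrix shifts every eigenvalue up by a, which strictly decreases the condition number and thereby eases iterative fitting. A sketch on an ill-conditioned 2x2 covariance (the matrix and ridge constant are illustrative, not from the article):

```python
import math

def eig_sym2(m):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]], smallest first."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    t, d = a + c, a * c - b * b           # trace and determinant
    root = math.sqrt(max(t * t / 4.0 - d, 0.0))
    return t / 2.0 - root, t / 2.0 + root

def cond(m):
    """Condition number = largest eigenvalue / smallest eigenvalue."""
    lo, hi = eig_sym2(m)
    return hi / lo

S = [[1.0, 0.99], [0.99, 1.0]]          # nearly singular sample covariance
S_ridge = [[1.1, 0.99], [0.99, 1.1]]    # S + 0.1 * I

print(cond(S), cond(S_ridge))  # condition number drops from 199 to 19
```

The ratio (λmax + a)/(λmin + a) is always smaller than λmax/λmin when λmax > λmin, so any positive ridge constant helps conditioning; the article's contribution is explaining how this interacts with loadings and error variances in the SEM likelihood.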

Authors: Shan Luo; Gengsheng Qin
Pages: 599–626
Abstract: The low-income proportion is an important index in describing the inequality of an income distribution. It has been widely used by governments around the world in measuring social stability. Established inferential methods for this index are based on its empirical estimator, which may have poor finite sample performance when the income data are skewed or have outliers. In this paper, based on a smooth estimator of the low-income proportion, we propose a smoothed jackknife empirical likelihood approach for inference on the low-income proportion. A Wilks-type theorem is obtained for the proposed jackknife empirical likelihood ratio statistic. Various confidence intervals based on the smooth estimator are constructed. Extensive simulation studies are conducted to compare the finite sample performance of the proposed intervals with some existing intervals. Finally, the proposed methods are illustrated with a public income dataset for professors in the University System of Georgia.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0554-0
Issue No: Vol. 69, No. 3 (2017)
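The low-income proportion is commonly defined as F(β·m), the fraction of incomes below a fraction β (often 0.5 or 0.6) of the median income m; a smooth estimator replaces the empirical CDF with a kernel-smoothed one. A sketch using an integrated Gaussian kernel (the bandwidth rule, β = 0.5, and the lognormal toy data are all illustrative assumptions, not the paper's choices):

```python
import math
import random
import statistics

def smooth_cdf(t, xs, h):
    """Kernel-smoothed CDF estimate using the integrated Gaussian kernel."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((t - x) / h) for x in xs) / len(xs)

def low_income_proportion(xs, beta=0.5):
    """Smoothed estimate of F(beta * median) with a crude rule-of-thumb bandwidth."""
    h = statistics.stdev(xs) * len(xs) ** (-0.2)
    return smooth_cdf(beta * statistics.median(xs), xs, h)

rng = random.Random(7)
incomes = [math.exp(rng.gauss(10.0, 0.5)) for _ in range(2000)]  # skewed toy incomes
p = low_income_proportion(incomes)
print(round(p, 3))
```

Smoothing makes the estimator differentiable in the data, which is what lets the jackknife pseudo-values and the empirical likelihood machinery behave well on skewed samples like this one.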

Authors: Ping Wu; Xinchao Luo; Peirong Xu; Lixing Zhu
Pages: 627–646
Abstract: In this paper, we consider how to select both the fixed effects and the random effects in linear mixed models. To make variable selection more efficient for such models, in which there are high correlations between covariates associated with fixed and random effects, a novel approach is proposed which orthogonalizes the fixed and random effects so that the two sets of effects can be selected separately with less influence on one another. Also, unlike most existing methods with parametric assumptions, the new method only needs fourth-order moments of the random variables involved. The oracle property is proved, and the performance of our method is examined in a simulation study.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0555-z
Issue No: Vol. 69, No. 3 (2017)

Authors: Elizabeth Gross; Sonja Petrović; Despina Stasi
Pages: 673–704
Abstract: Social networks and other sparse data sets pose significant challenges for statistical inference, since many standard statistical methods for testing model/data fit are not applicable in such settings. Algebraic statistics offers a theoretically justified approach to goodness-of-fit testing that relies on the theory of Markov bases. Most current practices require the computation of the entire basis, which is infeasible in many practical settings. We present a dynamic approach to explore the fiber of a model, which bypasses this issue, and is based on the combinatorics of hypergraphs arising from the toric algebra structure of log-linear models. We demonstrate the approach on the Holland–Leinhardt \(p_1\) model for random directed graphs that allows for reciprocation effects.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0560-2
Issue No: Vol. 69, No. 3 (2017)
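The textbook example of a Markov basis is the independence model for 2x2 contingency tables: the single move ±[[1, −1], [−1, 1]] connects every fiber (all nonnegative tables sharing the given row and column sums), so a random walk using that move explores the fiber without ever leaving it. A sketch (the starting table is illustrative):

```python
import random

def margins(t):
    """Row sums and column sums of a table."""
    return [sum(r) for r in t], [sum(c) for c in zip(*t)]

def fiber_walk(t, steps, rng):
    """Random walk on the fiber of a 2x2 table using the basic Markov move
    (+1/-1 on one diagonal, -1/+1 on the other); proposals that would make
    a cell negative are rejected and the walk stays in place."""
    t = [row[:] for row in t]
    for _ in range(steps):
        eps = rng.choice([1, -1])
        prop = [[t[0][0] + eps, t[0][1] - eps],
                [t[1][0] - eps, t[1][1] + eps]]
        if min(min(row) for row in prop) >= 0:
            t = prop
    return t

start = [[3, 1], [2, 4]]
end = fiber_walk(start, 1000, random.Random(0))
print(end, margins(end) == margins(start))   # margins are preserved along the walk
```

For the \(p_1\) model and other network models the fiber is vastly larger and the full basis is out of reach, which is exactly why the paper generates applicable moves dynamically instead of precomputing the basis.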

Authors: Chang Xuan Mao; Cuiying Yang; Yitong Yang; Wei Zhuang
Pages: 705–716
Abstract: The Rasch model has been used to estimate the unknown size of a population from multi-list data. It can take both list effectiveness and individual heterogeneity into account. Estimating the population size is shown to be equivalent to estimating the odds that an individual is unseen. The odds parameter is nonidentifiable. We propose a sequence of estimable lower bounds, including the greatest one, for the odds parameter. We show that a lower bound can be calculated by linear programming. Estimating a lower bound of the odds leads to an estimator for a lower bound of the population size. A simulation experiment is performed and three real examples are studied.
PubDate: 2017-06-01
DOI: 10.1007/s10463-016-0561-1
Issue No: Vol. 69, No. 3 (2017)

Authors: Tomasz J. Kozubowski; Krzysztof Podgórski
Abstract: The Sibuya distribution arises as the distribution of the waiting time for the first success in Bernoulli trials where the probabilities of success are inversely proportional to the trial number. We study a generalization that can be viewed as the distribution of the excess random variable \(N-k\) given \(N>k\), where N has the Sibuya distribution and k is an integer. We summarize basic facts regarding this distribution and provide several new results and characterizations, shedding more light on its origin and possible applications. In particular, we emphasize the role the Sibuya distribution plays in extreme value theory and point out its invariance property with respect to the random thinning operation.
PubDate: 2017-06-22
DOI: 10.1007/s10463-017-0611-3
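Concretely, with success probability α/n at trial n (0 < α ≤ 1), the waiting-time definition gives the pmf P(N = n) = (α/n) ∏_{k=1}^{n−1} (1 − α/k), which is simple to evaluate directly (α = 0.5 is illustrative):

```python
def sibuya_pmf(alpha, n_max):
    """pmf of the Sibuya(alpha) distribution up to n_max, computed from the
    sequential-trials definition: success probability alpha/k at trial k."""
    pmf, surv = [], 1.0
    for n in range(1, n_max + 1):
        p_succ = alpha / n
        pmf.append(surv * p_succ)       # P(N = n) = P(N > n-1) * alpha/n
        surv *= (1.0 - p_succ)          # update the survival probability P(N > n)
    return pmf

pmf = sibuya_pmf(0.5, 10000)
print(pmf[0], pmf[1], pmf[2])   # 0.5, 0.125, 0.0625
print(sum(pmf))                  # still visibly short of 1 after 10^4 terms
```

The slow accumulation of the partial sums reflects the regularly varying tail P(N > n) ~ n^{−α}/Γ(1−α); the mean is infinite for every α ≤ 1, which is one reason the distribution shows up in extreme value theory.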

Authors: Fuqi Chen; Rogemar Mamon; Sévérien Nkurunziza
Abstract: Determining accurately when regime and structural changes occur in various time-series data is critical in many social and natural sciences. We develop two consistent estimation techniques for locating the change point under the framework of a generalised version of the one-dimensional Ornstein–Uhlenbeck process, and we further show their equivalence. Our methods are based on the least sum of squared errors and the maximum log-likelihood approaches. The case where both the existence and the location of the change point are unknown is investigated, and an informational methodology is employed to address these issues. Numerical illustrations are presented to assess the methods' performance.
PubDate: 2017-06-19
DOI: 10.1007/s10463-017-0610-4
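The least-sum-of-squared-errors idea can be sketched on a discretely observed OU process dX = θ(μ − X)dt + σ dW whose mean-reversion rate θ jumps at an unknown time: fit θ by least squares of the Euler increments separately on each side of a candidate split, and choose the split minimizing the total SSE. A toy version (all parameter values, the Euler discretization, and the grid search are illustrative simplifications of the paper's continuous-time setting):

```python
import random

def simulate_ou(n, dt, theta1, theta2, tau, mu, sigma, rng):
    """Euler scheme for an OU path whose mean-reversion rate jumps at index tau."""
    x, xs = mu, []
    for i in range(n):
        th = theta1 if i < tau else theta2
        x += th * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def seg_sse(xs, lo, hi, dt, mu):
    """Least-squares fit of increments dX on theta*(mu - x)*dt over xs[lo:hi],
    returning the residual sum of squares."""
    num = den = 0.0
    for i in range(lo, hi - 1):
        z = (mu - xs[i]) * dt
        num += z * (xs[i + 1] - xs[i])
        den += z * z
    th = num / den
    return sum((xs[i + 1] - xs[i] - th * (mu - xs[i]) * dt) ** 2
               for i in range(lo, hi - 1))

def lse_change_point(xs, dt, mu, margin=50, step=5):
    """Split index minimizing the two-segment residual sum of squares."""
    n = len(xs)
    return min(range(margin, n - margin, step),
               key=lambda t: seg_sse(xs, 0, t, dt, mu) + seg_sse(xs, t, n, dt, mu))

rng = random.Random(3)
xs = simulate_ou(1200, 0.05, 0.2, 4.0, 600, 0.0, 0.5, rng)
tau_hat = lse_change_point(xs, 0.05, 0.0)
print(tau_hat)   # close to the true change point 600
```

With a pronounced jump in θ the SSE criterion localizes the change sharply; the paper's estimators work with the exact likelihood and least-squares functionals of the continuously observed process and come with consistency guarantees this sketch does not attempt.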