Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.

Abstract: On the basis of a Type-II censored sample, Barakat et al. (Predicting future lifetimes of mixture exponential distribution, Commun Stat Simul Comput, https://doi.org/10.1080/03610918.2020.1715434, 2020) considered the problem of predicting the unobserved censored units from a mixture exponential distribution with known parameters. They then discussed how to use the pivotal quantity to obtain prediction intervals for non-random and random sample sizes when all parameters are known. In this work, we consider the same prediction problem when the model parameters, namely the scale parameters and the mixing proportion, are all unknown. Further, we propose different methods for obtaining prediction intervals for future lifetimes, including likelihood, highest conditional median, and parametric bootstrap methods. In this set-up, two cases are considered: in the first, the sample size is non-random, while in the second, it is assumed to be a random number. Our numerical results show that the parametric bootstrap-based prediction intervals are comparable in terms of coverage probability and very competitive in terms of average length when compared with all other prediction intervals considered in this paper. PubDate: 2022-10-01
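As a purely illustrative sketch of the parametric bootstrap idea, simplified to a single-component exponential model rather than the mixture discussed in the abstract (the helper `bootstrap_pi` is hypothetical, not the paper's procedure), one can bootstrap the next failure time beyond a Type-II censored sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pi(x_obs, n, alpha=0.05, B=2000):
    """Parametric bootstrap PI for the (r+1)-th order statistic, given the
    first r of n exponential failure times (hypothetical illustration)."""
    r = len(x_obs)
    # MLE of the exponential scale under Type-II censoring
    theta_hat = (x_obs.sum() + (n - r) * x_obs[-1]) / r
    future = np.empty(B)
    for b in range(B):
        sample = np.sort(rng.exponential(theta_hat, size=n))
        future[b] = sample[r]  # bootstrap draw of the (r+1)-th order statistic
    return np.quantile(future, [alpha / 2, 1 - alpha / 2])

# observe the first 12 of 20 exponential lifetimes (Type-II censoring)
x = np.sort(rng.exponential(2.0, size=20))[:12]
lo, hi = bootstrap_pi(x, n=20)
```

The mixture case would replace the single-scale MLE by estimates of both scales and the mixing proportion before resampling.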


Abstract: The main goal of this article is to study how auxiliary information can be used to improve the efficiency of two well-known statistical tests: the Z-test and the chi-square test. Many definitions of auxiliary information can be found in the statistical literature. In this article, the notion of auxiliary information is discussed from a very general point of view that depends on the test at hand. The two tests are modified so that this information is taken into account. It is shown in particular that the efficiency of the new tests is improved in the sense of Pitman’s ARE. Some statistical examples illustrate the use of this method. PubDate: 2022-10-01
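The unmodified Z-test that serves as the baseline here is standard; the following minimal sketch (not the auxiliary-information version from the article) computes its statistic and two-sided p-value:

```python
import math
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n):
    """Two-sided Z-test for H0: mu = mu0 with known sigma."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# sample mean 10.4 from n = 100 observations, sigma = 2 known
z, p = z_test(xbar=10.4, mu0=10.0, sigma=2.0, n=100)  # z = 2.0
```

The article's modification would adjust this statistic so that the auxiliary information sharpens its Pitman efficiency.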


Abstract: The Gumbel–Barnett family of bivariate distributions with given marginals is frequently used in theory and applications. This family has been generalized in several ways. We propose and study a broad generalization obtained by using two differentiable functions, derive some of its properties, and describe particular cases. PubDate: 2022-10-01


Abstract: The asymptotic behaviour of the commonly used bootstrap percentile confidence interval is investigated when the parameters are subject to linear inequality constraints. We concentrate on the important one- and two-sample problems with data generated from general distributions in the natural exponential family. The focus of this note is on quantifying the coverage probabilities of the parametric bootstrap percentile confidence intervals, in particular their limiting behaviour near boundaries. We propose using a local asymptotic framework to study this subtle coverage behaviour. Under this framework, we discover that when the true parameters are on, or close to, the restriction boundary, the asymptotic coverage probabilities can always exceed the nominal level in the one-sample case; however, they can be, remarkably, both under and over the nominal level in the two-sample case. Using illustrative examples, we show that the results provide theoretical justification and guidance on applying the bootstrap percentile method to constrained inference problems. PubDate: 2022-10-01
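A minimal sketch of the setting, assuming a one-sample normal mean constrained to be non-negative with known unit variance (a simplification of the paper's exponential-family setup), shows how a parametric bootstrap percentile interval is formed when the truth sits on the boundary:

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_ci(x, alpha=0.05, B=4000):
    """Parametric bootstrap percentile CI for a normal mean under mu >= 0
    (sigma = 1 known); an illustrative boundary-constrained setting."""
    n = len(x)
    mu_hat = max(x.mean(), 0.0)          # constrained estimate
    boot = rng.normal(mu_hat, 1.0, size=(B, n)).mean(axis=1)
    boot = np.maximum(boot, 0.0)         # re-impose the constraint
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

x = rng.normal(0.0, 1.0, size=50)        # true mean sits on the boundary
lo, hi = percentile_ci(x)
```

Repeating this over many simulated samples and recording how often the interval covers 0 would reproduce the over-coverage phenomenon the paper quantifies in the one-sample case.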


Abstract: We discuss the estimation of a change-point \(t_0\) at which the parameter of a (non-stationary) AR(1)-process possibly changes in a gradual way. Making use of the observations \(X_1,\ldots ,X_n\), we shall study the least squares estimator \(\widehat{t}_0\) for \(t_0\), which is obtained by minimizing the sum of squares of residuals with respect to the given parameters. As a first result it can be shown that, under certain regularity and moment assumptions, \(\widehat{t}_0/n\) is a consistent estimator for \(\tau _0\), where \(t_0 =\lfloor n\tau _0\rfloor \), with \(0<\tau _0<1\), i.e., \(\widehat{t}_0/n \,{\mathop {\rightarrow }\limits ^{P}}\,\tau _0\) \((n\rightarrow \infty )\). Based on the rates obtained in the proof of the consistency result, a first, but rough, convergence rate statement can immediately be given. Under somewhat stronger assumptions, a precise rate can be derived via the asymptotic normality of our estimator. Some results from a small simulation study are included to give an idea of the finite sample behaviour of the proposed estimator. PubDate: 2022-10-01
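The least squares idea can be sketched as follows, assuming for brevity an abrupt (rather than gradual) change in the AR(1) coefficient; the `ssr` helper and the grid of candidate change points are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) whose coefficient jumps from 0.2 to 0.8 at t0
n, t0 = 400, 200
x = np.zeros(n)
for t in range(1, n):
    a = 0.2 if t <= t0 else 0.8
    x[t] = a * x[t - 1] + rng.normal()

def ssr(seg_y, seg_x):
    """Residual sum of squares after fitting a no-intercept AR(1) slope."""
    a_hat = seg_x @ seg_y / (seg_x @ seg_x)
    res = seg_y - a_hat * seg_x
    return res @ res

y, xlag = x[1:], x[:-1]
# least squares change-point estimate: minimize the total SSR over candidates
t_hat = min(range(20, n - 20),
            key=lambda k: ssr(y[:k], xlag[:k]) + ssr(y[k:], xlag[k:]))
```

Dividing `t_hat` by `n` gives the empirical counterpart of the consistent estimator \(\widehat{t}_0/n\) for \(\tau _0\).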


Abstract: Consider a periodic, mean-reverting Ornstein–Uhlenbeck process \(X=\{X_t,t\ge 0\}\) of the form \(d X_{t}=\left( L(t)+\alpha X_{t}\right) d t+ dB^H_{t}, \quad t \ge 0\), where \(L(t)=\sum _{i=1}^{p}\mu _i\phi _i (t)\) is a periodic parametric function, and \(\{B^H_t,t\ge 0\}\) is a fractional Brownian motion with Hurst parameter \(\frac{1}{2}\le H<1\). In the “ergodic” case \(\alpha <0\), the parametric estimation of \((\mu _1,\ldots ,\mu _p,\alpha )\) based on continuous-time observation of X has been considered in Dehling et al. (Stat Inference Stoch Process 13:175–192, 2010; Stat Inference Stoch Process 20:1–14, 2016) for \(H=\frac{1}{2}\) and \(\frac{1}{2}<H<1\), respectively. In this paper we consider the “non-ergodic” case \(\alpha >0\), for all \(\frac{1}{2}\le H<1\). We analyze the strong consistency and the asymptotic distribution of the estimator of \((\mu _1,\ldots ,\mu _p,\alpha )\) when the whole trajectory of X is observed. PubDate: 2022-10-01
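A rough illustration for the special case \(H=\frac{1}{2}\) (ordinary Brownian motion) with a single periodic term: since \(X_T\) grows like \(e^{\alpha T}\) when \(\alpha >0\), the quantity \(\log |X_T /T\) is a crude consistent estimator of \(\alpha \). This is only a toy stand-in for the estimators analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

alpha, mu = 0.5, 1.0                     # drift parameters, L(t) = mu*cos(2*pi*t)
T, n = 20.0, 20000
dt = T / n
x, t = 0.0, 0.0
for _ in range(n):                       # Euler scheme for the H = 1/2 case
    x += (mu * np.cos(2 * np.pi * t) + alpha * x) * dt + np.sqrt(dt) * rng.normal()
    t += dt
alpha_hat = np.log(abs(x)) / T           # crude estimator: X_T grows like e^(alpha*T)
```

For \(H>\frac{1}{2}\) the driving noise would have to be simulated as fractional Brownian motion (e.g. by circulant embedding), which this sketch omits.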


Abstract: Minimum projection uniform designs and highly efficient designs are two classes of excellent designs in the design of experiments. In this paper, design efficiency for minimum projection uniform designs with q levels is discussed. First, the uniformity pattern of q-level designs is proposed based on the centered \(L_2\)-discrepancy. Second, an analytical connection between the uniformity pattern and design efficiency is established for q-level orthogonal arrays of strength 2; for orthogonal arrays of strength 3, the minimum projection uniformity criterion is equivalent to the design efficiency criterion. Finally, a tight lower bound on the uniformity pattern is presented, which serves as a benchmark for measuring the uniformity of projection designs. PubDate: 2022-09-19


Abstract: This paper is concerned with estimating the mode of a density for a random field by the kernel method under an \(\alpha \)-mixing condition. The almost sure uniform convergence of the density estimator is proved, and the rate of almost sure uniform convergence of the density gradient estimator is given under mild conditions. The unknown density is assumed to be unimodal, and its mode is estimated by a kernel estimate. The strong consistency of the mode estimate is investigated and its rate of convergence is given. An optimal bandwidth selection procedure is proposed, and a simulation study provides empirical results. PubDate: 2022-09-14
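A one-dimensional stand-in for the kernel mode estimator (the paper's random-field setting and bandwidth-selection machinery are not reproduced) evaluates a Gaussian kernel density estimate on a grid and takes the argmax:

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.normal(5.0, 1.0, size=2000)      # unimodal sample, true mode at 5
h = 1.06 * x.std() * len(x) ** (-0.2)    # Silverman's rule-of-thumb bandwidth
grid = np.linspace(x.min(), x.max(), 500)
# Gaussian kernel density estimate evaluated on the grid
dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).mean(axis=1)
dens /= h * np.sqrt(2 * np.pi)
mode_hat = grid[dens.argmax()]
```

The strong consistency results in the paper say, roughly, that such an argmax converges almost surely to the true mode as the sample grows and the bandwidth shrinks at a suitable rate.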


Abstract: Evaluating the impact of a non-randomized treatment on various health outcomes is difficult in observational studies because of the presence of covariates that may affect both the treatment or exposure received and the outcome of interest. In the present study, we develop a semiparametric multiply robust multiple imputation method for estimating average treatment effects in such studies. Our method combines information from multiple propensity score models and outcome regression models, and is multiply robust in that it produces consistent estimators of the average causal effects if at least one of the models is correctly specified. Our proposed estimators show promising performance even under incorrect models. Compared with existing fully parametric approaches, our proposed method is more robust against model misspecification. Compared with fully non-parametric approaches, it avoids the curse of dimensionality and achieves dimension reduction by combining information from multiple models. In addition, it is less sensitive to extreme propensity score estimates than inverse propensity score weighted estimators and augmented estimators. The asymptotic properties of our method are developed, and the simulation study shows the advantages of our proposed method over some existing methods in terms of balancing efficiency, bias, and coverage probability. Rubin’s variance estimation formula can be used for estimating the variance of our proposed estimators. Finally, we apply our method to the 2009–2010 National Health and Nutrition Examination Survey to examine the effect of exposure to perfluoroalkyl acids on kidney function. PubDate: 2022-09-12
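For intuition, a doubly robust augmented IPW estimator, of which the paper's multiply robust estimator is a multi-model generalization, can be sketched as follows (the true propensity and outcome models are plugged in to keep the example short; in practice both are estimated):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5000
xcov = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-xcov))            # true propensity score
trt = rng.binomial(1, p)
y = 2.0 * trt + xcov + rng.normal(size=n)  # true average treatment effect = 2

m1, m0 = 2.0 + xcov, xcov                  # (true) outcome regression functions
# augmented IPW: consistent if either the propensity or the outcome model is right
ate = (np.mean(trt * (y - m1) / p + m1)
       - np.mean((1 - trt) * (y - m0) / (1 - p) + m0))
```

A multiply robust version would combine several candidate propensity and outcome models so that only one of them needs to be correct.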


Abstract: The proportional hazards model has been well studied in the literature for estimating the effect of covariate data on the failure time hazard rate. This model is routinely applied to right-censored incident cohort failure time data as well as left-truncated right-censored failure time data obtained from a prevalent cohort study with follow-up. In a meta-analysis or complex study design, data from both incident cohort and prevalent cohort studies with follow-up may be available. We compare two partial likelihood estimation approaches for the covariate effects using combined incident and prevalent cohort data under the proportional hazards model. We validate the partial likelihood methods through the concept of ancillarity and utilize simulated cohort data to compare the two procedures. PubDate: 2022-09-09


Abstract: Fourier-cosine models, rooted in the discrete cosine transformation, are widely used in numerous applications in science and engineering. Because the selection of design points where data are collected greatly affects the modeling process, we study the choice of fractional factorial designs for fitting Fourier-cosine models. We propose a new type of generalized resolution and provide a framework for the construction of fractional factorial designs with the maximum generalized resolution. The construction applies level permutations to regular designs with a novel nonlinear transformation. A series of theoretical results are developed to characterize the properties of the level-permuted designs. Based on the theory, we further provide efficient methods for constructing designs with high resolutions without any computer search. Examples are given to show the advantages of the constructed designs over existing ones. PubDate: 2022-09-05


Abstract: Within the Master–Worker distributed framework, this paper proposes a regularized gradient-enhanced loss (GEL) function for high-dimensional, large-scale linear regression with SCAD and adaptive LASSO penalties. The importance and originality of this paper lie in two aspects: (1) computationally, to take full advantage of the computing power of each machine and speed up convergence, the proposed distributed estimation method lets all Workers optimize their corresponding GEL functions in parallel, with the results then aggregated by the Master; (2) in terms of communication, the proposed modified proximal alternating direction method of multipliers (ADMM) algorithm is comparable to the centralized method based on the full sample within a few rounds of communication. Under some mild assumptions, we establish the oracle properties of the SCAD and adaptive LASSO penalized linear regressions. The finite-sample properties of the proposed method are assessed through simulation studies. An application to an HIV drug susceptibility study demonstrates the utility of the proposed method in practice. PubDate: 2022-08-11
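The proximal step associated with the LASSO part of such penalized regressions is the soft-thresholding operator; the following minimal sketch shows it in isolation (in the adaptive-LASSO case the threshold would be scaled per coordinate by data-driven weights, and SCAD uses a different, non-convex thresholding rule):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1 (the LASSO shrinkage step)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

out = soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0)  # -> [-2, 0, 0, 1]
```

Within a proximal ADMM, each Worker would alternate a local gradient/least-squares step with this shrinkage step, communicating only low-dimensional summaries to the Master.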


Abstract: In the linear regression model with possibly autoregressive errors, we construct a family of nonparametric tests for the significance of regression under a nuisance autoregression of the model errors. The tests avoid estimation of nuisance parameters, in contrast to tests proposed in the literature. A simulation study illustrates their good performance. PubDate: 2022-08-03


Abstract: Strong orthogonal arrays (SOAs) have received more and more attention recently since they enjoy more desirable space-filling properties than ordinary orthogonal arrays. Among them, SOAs of strength \(2+\) are the most advisable, as they satisfy the same two-dimensional space-filling property as SOAs of strength 3 while having more columns for given run sizes. In addition, column-orthogonality is a desirable property for designs of computer experiments. Existing column-orthogonal SOAs of strength \(2+\) have only a limited number of columns. In this paper, we propose a new class of space-filling designs, called group SOAs of strength \(2+\), and provide construction methods for such designs. The proposed designs can accommodate more columns than column-orthogonal SOAs of strength \(2+\) for given run sizes while satisfying similar stratifications and retaining a high proportion of column-orthogonal columns. Orthogonal arrays and difference schemes play important roles in the construction. The construction procedures are easy to implement, and a large number of group SOAs with \(s^2\) levels are constructed, where \(s \ge 2\) is a prime power. In addition, the run sizes of the constructed designs are s times those of the orthogonal arrays used in the construction procedure, and hence are relatively flexible. PubDate: 2022-08-01 DOI: 10.1007/s00184-021-00843-0


Abstract: Uniform designs have been widely used in physical and computer experiments due to their robust performance. The level permutation method can efficiently construct uniform designs with both lower discrepancy and less aberration. However, the existing literature has mostly discussed uniform fixed-level designs; the construction of uniform mixed-level designs has received far less attention. In this paper, a novel level permutation method for constructing uniform mixed-level designs is proposed. Our main idea is to perform level permutations on a new class of designs, called minimum average discrepancy designs, rather than on generalized minimum aberration designs as in the fixed-level case. Several theoretical results on design optimality and construction are obtained. Numerical results suggest the good performance of the resulting designs under various popular discrepancies. PubDate: 2022-08-01 DOI: 10.1007/s00184-021-00850-1
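For readers unfamiliar with the discrepancy criteria underlying uniform designs, the centered \(L_2\)-discrepancy has a closed form (Hickernell's formula); the sketch below computes it for two toy two-level designs scaled into the unit square and is independent of the paper's construction:

```python
import numpy as np

def cd2(x):
    """Centered L2-discrepancy of a design with rows in [0, 1]^d (Hickernell)."""
    n, d = x.shape
    a = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1 + 0.5 * a - 0.5 * a ** 2, axis=1).sum()
    diff = np.abs(x[:, None, :] - x[None, :, :])
    prod = np.prod(1 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :] - 0.5 * diff,
                   axis=2)
    term3 = prod.sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

# two-level designs with levels {0, 1} mapped to centered points {0.25, 0.75}
full = (np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) + 0.5) / 2   # full factorial
repeated = (np.array([[0, 0], [0, 0], [1, 1], [1, 1]]) + 0.5) / 2
```

The full factorial spreads its points more evenly, so its discrepancy is smaller; level permutations are attractive precisely because they can lower such discrepancies without changing the aberration pattern.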


Abstract: Fan et al. (Ann Stat 47(6):3009–3031, 2019) constructed a distributed principal component analysis (PCA) algorithm that significantly reduces the communication cost between multiple servers. However, their algorithm’s guarantee holds only for sub-Gaussian data. Spurred by this deficiency, this paper enhances the effectiveness of their distributed PCA algorithm by utilizing the robust covariance matrix estimators of Minsker (Ann Stat 46(6A):2871–2903, 2018) and Ke et al. (Stat Sci 34(3):454–471, 2019) to tame heavy-tailed data. The theoretical results demonstrate that when the sampling distribution is a symmetric innovation with a bounded fourth moment, or asymmetric with a finite sixth moment, the statistical error rate of the final estimator produced by the robust algorithm is similar to that under sub-Gaussian tails. Extensive numerical trials support the theoretical analysis and indicate that our algorithm is robust to heavy-tailed data and outliers. PubDate: 2022-08-01 DOI: 10.1007/s00184-021-00848-9
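The one-round distributed PCA idea (without the robust covariance estimators, which would replace the local sample covariances below) can be sketched as: each server sends its local covariance, the center averages them and extracts the leading eigenvector:

```python
import numpy as np

rng = np.random.default_rng(5)

d, m, n_local = 10, 5, 1000              # dimension, servers, points per server
covs = []
for _ in range(m):
    z = rng.normal(size=(n_local, d))
    z[:, 0] *= 3.0                       # leading direction e_1 (variance 9 vs 1)
    covs.append(z.T @ z / n_local)       # local sample covariance
avg_cov = sum(covs) / m                  # one-round aggregation at the center
vals, vecs = np.linalg.eigh(avg_cov)
v_top = vecs[:, -1]                      # eigenvector of the largest eigenvalue
align = abs(v_top[0])                    # |<v_top, e_1>|
```

For heavy-tailed data, each `z.T @ z / n_local` would be swapped for a robust estimator (e.g. a truncated or median-of-means covariance), which is the modification the paper analyzes.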


Abstract: We prove large (and moderate) deviations for a class of linear combinations of spacings generated by i.i.d. exponentially distributed random variables. We allow a wide class of coefficients which can be expressed in terms of continuous functions defined on [0, 1] which satisfy some suitable conditions. In this way we generalize some recent results by Giuliano et al. (J Statist Plann Inference 157–158:77–89, 2015) which concern the empirical cumulative entropies defined in Di Crescenzo et al. (J Statist Plann Inference 139:4072–4087, 2009a). PubDate: 2022-08-01 DOI: 10.1007/s00184-021-00849-8
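One member of this class of spacing statistics is the empirical cumulative entropy, a linear combination of spacings with coefficients \(-(j/n)\log (j/n)\); the sketch below computes it for exponential data, where the population value should be close to \(\pi ^2/6-1\approx 0.645\) (a standard calculation, stated here for orientation only):

```python
import numpy as np

rng = np.random.default_rng(8)

x = np.sort(rng.exponential(1.0, size=1000))
n = len(x)
j = np.arange(1, n)
coef = -(j / n) * np.log(j / n)          # nonnegative spacing coefficients
ce_emp = np.sum(coef * np.diff(x))       # empirical cumulative entropy
```

The paper's results describe the large-deviation behaviour of such sums when the coefficient function \(-t\log t\) is replaced by a general continuous function on [0, 1].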


Abstract: The purpose of this paper is to provide a general method based on conditional quantile curves to predict record values from preceding records. The predictions are based on conditional median (or median regression) curves, and conditional quantile curves are used to provide confidence bands for these predictions. The method rests on the recently introduced concept of multivariate distorted distributions, which are used instead of copulas to represent the dependence structure and allow the conditional quantile curves to be computed in a simple way. The theoretical findings are illustrated with a non-parametric model (standard uniform), two parametric models (exponential and Pareto), and a non-parametric procedure for the general case. A real data set and a simulated case study in reliability are analysed. PubDate: 2022-08-01 DOI: 10.1007/s00184-021-00847-w
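In the exponential case mentioned above, record increments are again exponential by the memoryless property, so conditional quantile curves have a closed form; this standard fact yields the following illustrative prediction sketch (not the paper's distorted-distribution machinery):

```python
import numpy as np

def predict_next_record(r, theta, alpha=0.10):
    """Median prediction and (1 - alpha) band for the next upper record of an
    Exp(theta) sequence, given current record r (memoryless increments)."""
    median = r + theta * np.log(2.0)
    lo = r - theta * np.log(1.0 - alpha / 2.0)   # alpha/2 quantile of Exp(theta)
    hi = r - theta * np.log(alpha / 2.0)         # 1 - alpha/2 quantile
    return lo, median, hi

lo, med, hi = predict_next_record(r=3.0, theta=1.0)
```

Plotting `med`, `lo`, and `hi` as functions of the current record value r gives exactly the kind of conditional median curve and quantile band the abstract describes.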


Abstract: Li et al. (Comm Statist Theory Methods 49:924–941, 2020) introduced the concept of inverse Yates-order (IYO) designs and showed that most two-level IYO designs have the general minimum lower-order confounding (GMC) property. For this reason, this paper extends two-level IYO designs to the three-level case. We first propose the definition of the \(3^{n-m}\) IYO design \(D_q(n)\) derived from the saturated three-level design \(H_q\). Then, formulas for the lower-order confounding are obtained according to the number of factors of the \(3^{n-m}\) IYO design: (i) \(q<n<3^{q-1}\), and (ii) \(3^{q-1}\le n\le (N-1)/2\), where \(N=3^{n-m}\). Under case (ii), we obtain explicit expressions of the lower-order confounding for four structural types of IYO designs. Some examples are given to illustrate the theoretical results. By comparison with GMC designs, three-level IYO designs with 27 and 81 runs are tabulated to show that some of them have the GMC property in terms of lower-order confounding. PubDate: 2022-07-28


Abstract: We focus on estimating daily integrated volatility (IV) by realized measures based on intraday returns that follow a discrete-time stochastic model with a pronounced intraday periodicity (IP). We demonstrate that neglecting the impact of IP on realized estimators may lead to invalid statistical inference concerning IV for a common finite number of intraday returns. For a given IP functional form, we analytically derive robust IP-correction factors for realized measures of IV as well as their asymptotic distributions. We show, both in Monte Carlo simulations and empirically, that the proposed bias corrections are a robust way to account for IP when computing realized estimators. PubDate: 2022-07-16 DOI: 10.1007/s00184-022-00875-0
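A toy version of the IP problem: give intraday returns a deterministic periodic variance factor \(s_i^2\) (normalized to average 1 over the day); dividing squared returns by \(s_i^2\) mimics the correction-factor idea, though not the paper's exact estimators:

```python
import numpy as np

rng = np.random.default_rng(6)

m = 78                                      # e.g. 5-minute returns in a 6.5-hour day
i = np.arange(m)
s2 = 1.0 + 0.5 * np.cos(2 * np.pi * i / m)  # periodic variance factor, mean 1
iv = 1.0                                    # daily integrated variance in this toy model
r = rng.normal(size=m) * np.sqrt(iv * s2 / m)
rv_plain = np.sum(r ** 2)                   # ordinary realized variance
rv_corrected = np.sum(r ** 2 / s2)          # periodicity-corrected version
```

Both estimators are centered at IV here, but the correction changes the finite-sample distribution, which is the effect the paper's asymptotic results quantify for general realized measures.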