Metrika
Journal Prestige (SJR): 0.848 · Citation Impact (CiteScore): 1 · Number of Followers: 4 · Hybrid journal (it can contain Open Access articles) · ISSN (Print): 0026-1335 · ISSN (Online): 1435-926X · Published by Springer-Verlag
 A novel sequential approach to estimate functions of parameters of two
gamma populations
Abstract: A need often arises to estimate either a certain ratio or the sum of the shape parameters of two independent gamma populations. We tackle this problem through appropriate and novel two-stage sampling strategies. The first part of this paper develops a two-stage methodology to estimate the ratio \(\alpha /(\alpha +\beta )\) for two independent gamma populations with parameters \((\alpha ,1)\) and \((\beta ,1)\), respectively. We assume a weighted squared error loss function and aim to control the associated risk function per unit cost by bounding it from above by a known constant \(\omega \). We also establish first-order properties of our stopping rules. The second part of this paper obtains a two-stage sampling procedure to estimate the sum \(\alpha +\beta \) instead. We also provide extensive simulation analysis and real data analysis, using data from cancer studies, to show the encouraging performance of our proposed stopping strategies.
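The target quantities above can be illustrated with a naive fixed-sample plug-in estimator (a sketch only, not the paper's two-stage stopping rule): a Gamma(shape \(a\), scale 1) population has mean \(a\), so the sample means are moment estimators of the shape parameters. The sample sizes and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 3.0, 5.0          # hypothetical true shape parameters

# A Gamma(shape=a, scale=1) population has mean a, so the sample means
# are moment estimators of the two shape parameters.
x = rng.gamma(alpha, 1.0, size=5000)
y = rng.gamma(beta, 1.0, size=5000)

alpha_hat, beta_hat = x.mean(), y.mean()
ratio_hat = alpha_hat / (alpha_hat + beta_hat)   # estimates alpha/(alpha+beta)
sum_hat = alpha_hat + beta_hat                   # estimates alpha + beta
```

The paper's contribution is to choose the sample sizes adaptively so that the risk per unit cost stays below \(\omega \); the plug-in step above is only the final estimation stage.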
PubDate: 2022-12-04

 Bayesian estimation for an item response tree model for nonresponse
modeling
Abstract: Nonresponse data are common in achievement tests or questionnaires. Chang et al. (Br J Math Stat Psychol 74:487–512, 2021) proposed an item response tree model, namely TR4, for modeling some potential mechanisms underlying nonresponses so that the estimates of parameters of interest are not biased by data missing not at random (Rubin in Biometrika 63:581–592, 1976). TR4 has two notable degenerate cases, both with insightful practical meanings. When TR4 is fitted to data originating from some degenerate cases, model identifiability issues arise, so the existing frequentist inference for the TR4 model is not suitable. In the current study, we propose a Bayesian estimation procedure that incorporates the Markov chain Monte Carlo technique for estimating the TR4 model. We conducted simulation studies to demonstrate the effectiveness of the Bayesian estimation procedure in resolving the model unidentifiability issue. In addition, the TR4 model is further extended in the present study to accommodate the complexity underlying some real data. The advantage of the extended models over TR4 is demonstrated in a real data analysis where we apply our method to the data of a geography test for college admission in Taiwan.
PubDate: 2022-11-01

 Distribution-free specification test for volatility function based on
high-frequency data with microstructure noise
Abstract: In this paper, we propose a two-step test for parametric specification of the volatility function based on high-frequency data with microstructure noise. The latent prices are first recovered at high precision under the assumption that the noise is a parametric function of observable trading information. An asymptotically distribution-free test is then built on the estimated latent prices using the Khmaladze martingale transformation. We establish the asymptotic theory associated with the test under both the null and alternative hypotheses. Moreover, an extension of the proposed method to incorporate the intraday pattern is also formally discussed. Simulation results corroborate our theoretical findings, demonstrating a clear advantage of our method over an existing distribution-free method that does not take microstructure noise into account. We finally apply the test to the high-frequency data of the Standard & Poor’s depository receipt (SPDR) that tracks the S&P 500 index.
PubDate: 2022-11-01

 Non asymptotic expansions of the MME in the case of Poisson observations

Abstract: In this paper the problem of one-dimensional parameter estimation is considered in the case where the observations come from inhomogeneous Poisson processes. The method of moments estimation is studied and its stochastic expansion is obtained. This stochastic expansion is then used to obtain the expansion of the moments of the estimator and the expansion of the distribution function. The stochastic expansion, the expansion of the moments and the expansion of the distribution function are non-asymptotic in nature. Several examples are presented to illustrate the theoretical results.
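As a minimal illustration of moment estimation for an inhomogeneous Poisson process (a sketch only, not the paper's expansion analysis): assume a hypothetical intensity \(\lambda (t)=\theta t\), so the expected count over \([0,T]\) is \(\theta T^2/2\), and match the empirical mean count to it. Counts are drawn directly from the Poisson law of \(N(T)\) rather than simulating event times.

```python
import numpy as np

rng = np.random.default_rng(1)

theta, T = 4.0, 1.0                 # hypothetical parameter and observation window
# For intensity lambda(t) = theta * t on [0, T], E[N(T)] = theta * T**2 / 2,
# so matching the average observed count gives a moment estimator of theta.
mean_count = theta * T**2 / 2

# counts for many independent realizations of the process, drawn directly
# from the Poisson distribution of N(T)
counts = rng.poisson(mean_count, size=20000)

theta_hat = 2 * counts.mean() / T**2
```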
PubDate: 2022-11-01

 Universally optimal balanced block designs for interference model

Abstract: The interference model has been widely used and studied in block designs where the treatment in a particular plot has effects on those in its neighbor plots. There are many experiments, especially in agriculture, in which the block size (k) is greater than the number of treatments (t). If \(k>t\) , universally optimal designs under the interference models are usually difficult to obtain. In this paper, we consider an interference model with equal left- and right-neighbor effects for uncorrelated errors when \(k>t\) . Based on Kushner’s method, we obtain the universally optimal designs within the class of circular block designs. We also present some methods to construct these designs. Then, we generalize our results to one-sided interference models.
PubDate: 2022-11-01

 Consistency of the MLE under a two-parameter Gamma mixture model with a
structural shape parameter
Abstract: Finite Gamma mixture models are often used to describe randomness in income data, insurance data, and data in applications where the response values are intrinsically positive. The popular likelihood approach for model fitting, however, does not work for this model because its likelihood function is unbounded; because of this, the maximum likelihood estimator is not well-defined. Other approaches have been developed to achieve consistent estimation of the mixing distribution, such as placing an upper bound on the shape parameter or adding a penalty to the log-likelihood function. In this paper, we show that if the shape parameter in the finite Gamma mixture model is structural, then the direct maximum likelihood estimator of the mixing distribution is well-defined and strongly consistent. We also present simulation results demonstrating the consistency of the estimator. We illustrate the application of the model with a structural shape parameter to household income data. The fitted mixture distribution leads to several possible subpopulation structures with regard to the level of disposable income.
PubDate: 2022-11-01

 On some stochastic comparisons of arithmetic and geometric mixture models

Abstract: Most studies on reliability analysis have been conducted for homogeneous populations. However, homogeneous populations can rarely be found in the real world; populations characterized by some quantity, such as lifetime, are usually heterogeneous. When populations are heterogeneous, the question arises of whether the different modeling strategies might be appropriate and which of them should be preferred. In this paper, we study mixture models, which are usually effective tools for modeling heterogeneity in populations. Specifically, we carry out a stochastic comparison of two arithmetic (finite) mixture models, using the majorization concept, in the sense of the usual stochastic order, the hazard rate order, the reversed hazard rate order and the dispersive order, both for a general case and for some semiparametric families of distributions. Moreover, we obtain sufficient conditions to compare two geometric mixture models. To illustrate the theoretical findings, some relevant examples and counterexamples are presented.
PubDate: 2022-10-18

 Functional single-index composite quantile regression

Abstract: The functional single-index model is a very flexible semiparametric model for describing the relationship between a scalar response and functional predictors. However, the efficiency of the model may be affected by non-normal errors. So, in this paper, we propose functional single-index composite quantile regression. The unknown slope function and link function are estimated by using B-spline basis functions. The convergence rates of the estimators are established. Some simulation studies and an application to a NIR spectroscopy dataset are presented to illustrate the performance of the proposed methodologies.
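The composite quantile idea (one common slope shared across several quantile levels, each level keeping its own intercept) can be sketched in a toy scalar linear model; the functional single-index and B-spline machinery of the paper is omitted, and the data-generating model below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + rng.standard_t(df=3, size=n)      # heavy-tailed (non-normal) errors

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])       # quantile levels to combine

def composite_loss(params):
    # one common slope b, one intercept per quantile level
    b, intercepts = params[0], params[1:]
    loss = 0.0
    for tau, a in zip(taus, intercepts):
        r = y - (a + b * x)
        # pinball (check) loss at level tau
        loss += np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))
    return loss

res = minimize(composite_loss, x0=np.zeros(1 + len(taus)), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
slope_hat = res.x[0]
```

Averaging the check loss over several levels makes the slope estimate less sensitive to heavy tails than least squares, which is the motivation the abstract cites.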
PubDate: 2022-10-12

 A new light on reliability equivalence factors

Abstract: In reliability theory, the performance of a system can be improved by different methods, such as the redundancy and reduction methods. The redundancy method may not be optimal when restrictions such as volume and weight are crucial. In the reduction method, system reliability is increased by reducing the failure rate of some of its components by a factor \(0<\rho <1\) , which is called the reliability equivalence factor (REF). This article casts a new light on reliability equivalence factors in a coherent system with independent components. A closed form for \(\rho \) is obtained when the reduction method is applied to a single component of the system. Based on this, we also define a new measure of component importance. Various numerical illustrative examples are given to support the new results.
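A minimal numerical sketch of the reduction method, assuming a hypothetical series system of two independent exponential components: solve for the factor \(\rho \) that brings the system reliability at a mission time up to a chosen target. The paper derives a closed form for a general coherent system; here we simply root-find.

```python
import numpy as np
from scipy.optimize import brentq

lam1, lam2, t = 0.5, 0.3, 2.0       # hypothetical failure rates and mission time

def reliability(rho):
    # series system: component 1's failure rate is reduced by the factor rho
    return np.exp(-(rho * lam1 + lam2) * t)

target = 0.35                        # hypothetical desired reliability at time t
# reliability(rho) is strictly decreasing in rho, so the root in (0, 1) is unique
rho = brentq(lambda r: reliability(r) - target, 1e-9, 1.0)
```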
PubDate: 2022-10-03

 Prediction of future censored lifetimes from mixture exponential
distribution
Abstract: On the basis of a Type-II censored sample, Barakat et al. (Predicting future lifetimes of mixture exponential distribution, Commun Stat Simul Comput, https://doi.org/10.1080/03610918.2020.1715434, 2020) considered the problem of predicting the unobserved censored units from a mixture exponential distribution with known parameters. They then discussed how to use a pivotal quantity for obtaining prediction intervals for non-random and random sample sizes when all parameters are known. In this work, we consider the same prediction problem when the model parameters, namely the scale parameters as well as the mixing proportion, are all unknown. Further, we propose different methods for obtaining prediction intervals of future lifetimes, including likelihood, highest conditional median, and parametric bootstrap methods. In this setup, two cases are considered: in the first case, the sample size is non-random, while in the second case, the sample size is assumed to be random. Our numerical results show that the parametric bootstrap-based prediction intervals are comparable in terms of coverage probability and very competitive in terms of average length when compared to all other prediction intervals considered in this paper.
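A simplified sketch of the parametric bootstrap prediction interval, using a single exponential distribution with an unknown mean instead of the paper's mixture exponential model (all numbers hypothetical): estimate the mean from the Type-II censored sample, then bootstrap the next order statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 30, 20                        # n units on test, first r failures observed
theta = 2.0                          # hypothetical true exponential mean

sample = np.sort(rng.exponential(theta, n))
observed = sample[:r]                # Type-II censored data

# MLE of the exponential mean under Type-II censoring
theta_hat = (observed.sum() + (n - r) * observed[-1]) / r

# parametric bootstrap of the (r+1)-th order statistic in a sample of size n
B = 4000
boot = np.sort(rng.exponential(theta_hat, size=(B, n)), axis=1)[:, r]
lower, upper = np.quantile(boot, [0.025, 0.975])   # 95% percentile interval
```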
PubDate: 2022-10-01
DOI: 10.1007/s00184-021-00852-z

 Asymptotic Z and chi-squared tests with auxiliary information

Abstract: The main goal of this article is to study how auxiliary information can be used to improve the efficiency of two famous statistical tests: the Z-test and the chi-square test. Many definitions of auxiliary information can be found in the statistical literature. In this article, the notion of auxiliary information is discussed from a very general point of view and depends on the relevant test. The two statistical tests are modified so that this information is taken into account. It is shown in particular that the efficiency of these new tests is improved in the sense of Pitman’s ARE. Some statistical examples illustrate the use of this method.
PubDate: 2022-10-01
DOI: 10.1007/s00184-021-00853-y

 An extension of the Gumbel–Barnett family of copulas

Abstract: The Gumbel–Barnett family of bivariate distributions with given marginals is frequently used in theory and applications. This family has been generalized in several ways. We propose and study a broad generalization by using two differentiable functions. We obtain some properties and describe particular cases.
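For reference, the classical one-parameter Gumbel–Barnett copula that the paper generalizes is \(C(u,v)=uv\exp (-\theta \ln u\ln v)\) with \(0<\theta \le 1\); a quick numerical check of the copula boundary conditions \(C(u,1)=u\) and \(C(1,v)=v\):

```python
import numpy as np

def gumbel_barnett(u, v, theta):
    # C(u, v) = u * v * exp(-theta * ln(u) * ln(v)), 0 < theta <= 1
    return u * v * np.exp(-theta * np.log(u) * np.log(v))

theta = 0.5
u = np.linspace(0.01, 0.99, 50)
# any copula must satisfy C(u, 1) = u and C(1, v) = v
assert np.allclose(gumbel_barnett(u, 1.0, theta), u)
assert np.allclose(gumbel_barnett(1.0, u, theta), u)
```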
PubDate: 2022-10-01
DOI: 10.1007/s00184-022-00859-0

 A note on the coverage behaviour of bootstrap percentile confidence
intervals for constrained parameters
Abstract: The asymptotic behaviour of the commonly used bootstrap percentile confidence interval is investigated when the parameters are subject to linear inequality constraints. We concentrate on the important one- and two-sample problems with data generated from general distributions in the natural exponential family. The focus of this note is on quantifying the coverage probabilities of the parametric bootstrap percentile confidence intervals, in particular their limiting behaviour near boundaries. We propose a local asymptotic framework to study this subtle coverage behaviour. Under this framework, we discover that when the true parameters are on, or close to, the restriction boundary, the asymptotic coverage probabilities can always exceed the nominal level in the one-sample case; however, they can be, remarkably, both under and over the nominal level in the two-sample case. Using illustrative examples, we show that the results provide theoretical justification and guidance for applying the bootstrap percentile method to constrained inference problems.
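A minimal sketch of the parametric bootstrap percentile interval under a boundary constraint, assuming a hypothetical normal mean restricted to \(\mu \ge 0\) with the true mean on the boundary (the setting in which the note studies coverage deviating from the nominal level):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
mu_true = 0.0                        # true mean sits exactly on the boundary

x = rng.normal(mu_true, 1.0, n)
mu_hat = max(x.mean(), 0.0)          # estimate respecting the constraint mu >= 0

# parametric bootstrap percentile interval
B = 2000
boot = rng.normal(mu_hat, 1.0, size=(B, n)).mean(axis=1)
boot = np.maximum(boot, 0.0)         # re-impose the constraint on each replicate
lo, hi = np.quantile(boot, [0.025, 0.975])
```

Repeating this over many simulated samples and recording how often \([lo, hi]\) covers \(\mu_{true}\) reproduces the kind of boundary coverage behaviour the note quantifies theoretically.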
PubDate: 2022-10-01
DOI: 10.1007/s00184-021-00851-0

 Estimating a gradual parameter change in an AR(1)-process

Abstract: We discuss the estimation of a change-point \(t_0\) at which the parameter of a (non-stationary) AR(1)-process possibly changes in a gradual way. Making use of the observations \(X_1,\ldots ,X_n\) , we study the least squares estimator \(\widehat{t}_0\) for \(t_0\) , which is obtained by minimizing the sum of squares of the residuals with respect to the given parameters. As a first result it can be shown that, under certain regularity and moment assumptions, \(\widehat{t}_0/n\) is a consistent estimator for \(\tau _0\) , where \(t_0 =\lfloor n\tau _0\rfloor \) , with \(0<\tau _0<1\) , i.e., \(\widehat{t}_0/n \,{\mathop {\rightarrow }\limits ^{P}}\,\tau _0\) \((n\rightarrow \infty )\) . Based on the rates obtained in the proof of the consistency result, a first, but rough, convergence rate statement can immediately be given. Under somewhat stronger assumptions, a precise rate can be derived via the asymptotic normality of our estimator. Some results from a small simulation study are included to give an idea of the finite sample behaviour of the proposed estimator.
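A sketch of the least squares change-point estimator, simplified to an abrupt (rather than gradual) change in the AR(1) coefficient, with hypothetical parameter values: for each candidate change-point, fit the coefficient by OLS on both segments and minimize the total sum of squared residuals.

```python
import numpy as np

rng = np.random.default_rng(5)
n, t0 = 400, 200
phi1, phi2 = 0.2, 0.8                # AR(1) coefficient before / after t0

x = np.zeros(n)
for s in range(1, n):
    phi = phi1 if s < t0 else phi2
    x[s] = phi * x[s - 1] + rng.standard_normal()

def ssr(tc):
    # OLS fit of the AR(1) coefficient on each segment, total squared residuals
    total = 0.0
    for lo, hi in ((1, tc), (tc, n)):
        y, z = x[lo:hi], x[lo - 1:hi - 1]
        phi_hat = (z @ y) / (z @ z)
        total += ((y - phi_hat * z) ** 2).sum()
    return total

# trim the edges so each segment has enough observations for the OLS fit
t_hat = min(range(20, n - 20), key=ssr)
```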
PubDate: 2022-10-01
DOI: 10.1007/s00184-021-00844-z

 Statistical analysis of the non-ergodic fractional Ornstein–Uhlenbeck
process with periodic mean
Abstract: Consider a periodic, mean-reverting Ornstein–Uhlenbeck process \(X=\{X_t,t\ge 0\}\) of the form \(d X_{t}=\left( L(t)+\alpha X_{t}\right) d t+ dB^H_{t}, \quad t \ge 0\) , where \(L(t)=\sum _{i=1}^{p}\mu _i\phi _i (t)\) is a periodic parametric function, and \(\{B^H_t,t\ge 0\}\) is a fractional Brownian motion of Hurst parameter \(\frac{1}{2}\le H<1\) . In the “ergodic” case \(\alpha <0\) , the parametric estimation of \((\mu _1,\ldots ,\mu _p,\alpha )\) based on continuous-time observation of X has been considered in Dehling et al. (Stat Inference Stoch Process 13:175–192, 2010; Stat Inference Stoch Process 20:1–14, 2016) for \(H=\frac{1}{2}\) and \(\frac{1}{2}<H<1\) , respectively. In this paper we consider the “non-ergodic” case \(\alpha >0\) , for all \(\frac{1}{2}\le H<1\) . We analyze the strong consistency and the asymptotic distribution of the estimator of \((\mu _1,\ldots ,\mu _p,\alpha )\) when the whole trajectory of X is observed.
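For the case \(H=\frac{1}{2}\) (standard Brownian motion) with \(p=1\) and a hypothetical basis function \(\phi _1(t)=\sin (2\pi t)\), the model can be simulated by an Euler scheme and \((\mu _1,\alpha )\) recovered by least squares on the discretized drift; this is a discrete-observation sketch, not the paper's continuous-observation estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, alpha = 1.0, 0.5                 # alpha > 0: the "non-ergodic" case
T, n = 10.0, 50_000
dt = T / n
t = np.arange(n) * dt

# Euler scheme for dX = (mu*sin(2*pi*t) + alpha*X) dt + dB  (H = 1/2)
x = np.zeros(n + 1)
for i in range(n):
    drift = mu * np.sin(2 * np.pi * t[i]) + alpha * x[i]
    x[i + 1] = x[i] + drift * dt + np.sqrt(dt) * rng.standard_normal()

# least squares for (mu, alpha): regress increments on the drift regressors
dx = np.diff(x)
Z = np.column_stack([np.sin(2 * np.pi * t) * dt, x[:-1] * dt])
mu_hat, alpha_hat = np.linalg.lstsq(Z, dx, rcond=None)[0]
```

Because \(\alpha >0\) the path grows exponentially, which is exactly why the estimator of \(\alpha \) becomes very accurate here, in line with the non-ergodic asymptotics the paper studies.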
PubDate: 2022-10-01
DOI: 10.1007/s00184-021-00854-x

 Predicting future failure times by using quantile regression

Abstract: The purpose of this paper is to study how to predict the future failure times in a sample from the early failures (Type-II censored data). We consider both the case of independent and of dependent lifetimes; in both cases we assume identically distributed random variables. To predict the future failures we use quantile regression techniques, which also provide prediction regions for them. Some illustrative examples show how to apply the theoretical results to simulated and real data sets.
PubDate: 2022-09-27

 Design efficiency for minimum projection uniform designs with q levels

Abstract: Minimum projection uniform designs and highly efficient designs are two kinds of excellent designs in the design of experiments. In this paper, design efficiency for minimum projection uniform designs with q levels is discussed. Firstly, the uniformity pattern of q-level designs is proposed based on the centered \(L_2\) discrepancy. Secondly, an analytical connection between the uniformity pattern and design efficiency is established for q-level orthogonal arrays with strength 2; for orthogonal arrays with strength 3, the minimum projection uniformity criterion is equivalent to the design efficiency criterion. Finally, a tight lower bound on the uniformity pattern is presented, which can be used as a benchmark for measuring the uniformity of projection designs.
PubDate: 2022-09-19
DOI: 10.1007/s00184-022-00885-y

 On the consistency of mode estimate for spatially dependent data

Abstract: This paper is concerned with estimating the density mode for a random field by the kernel method under an \(\alpha \) -mixing condition. The almost sure uniform convergence of the density estimator is proved. The rate of almost sure uniform convergence of the density gradient estimator is given under mild conditions. The unknown density is assumed to be unimodal and its mode is estimated by a kernel estimate. The strong consistency of the mode estimate is investigated and the rate of convergence is given. An optimal bandwidth selection procedure is proposed and a simulation study is used to obtain empirical results.
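The mode estimator itself is simple: maximize a kernel density estimate over a grid. A sketch in the easiest setting of i.i.d. data (the paper treats spatially dependent random fields under \(\alpha \) -mixing), with a hypothetical normal sample:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
data = rng.normal(3.0, 1.0, 2000)    # i.i.d. draws; the true mode is 3

kde = gaussian_kde(data)             # bandwidth chosen by Scott's rule
grid = np.linspace(data.min(), data.max(), 1000)
mode_hat = grid[np.argmax(kde(grid))]
```

The bandwidth drives the accuracy of the argmax, which is why the paper devotes attention to an optimal bandwidth selection procedure.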
PubDate: 2022-09-14
DOI: 10.1007/s00184-022-00879-w

 A semiparametric multiply robust multiple imputation method for causal
inference
Abstract: Evaluating the impact of a non-randomized treatment on various health outcomes is difficult in observational studies because of the presence of covariates that may affect both the treatment or exposure received and the outcome of interest. In the present study, we develop a semiparametric multiply robust multiple imputation method for estimating average treatment effects in such studies. Our method combines information from multiple propensity score models and outcome regression models, and is multiply robust in that it produces consistent estimators of the average causal effects if at least one of the models is correctly specified. Our proposed estimators show promising performance even with incorrect models. Compared with existing fully parametric approaches, our proposed method is more robust against model misspecification. Compared with fully nonparametric approaches, it does not suffer from the curse of dimensionality and achieves dimension reduction by combining information from multiple models. In addition, it is less sensitive to extreme propensity score estimates than inverse propensity score weighted estimators and augmented estimators. The asymptotic properties of our method are developed, and the simulation study shows the advantages of our proposed method over some existing methods in terms of balancing efficiency, bias, and coverage probability. Rubin’s variance estimation formula can be used to estimate the variance of our proposed estimators. Finally, we apply our method to the 2009–2010 National Health and Nutrition Examination Survey to examine the effect of exposure to perfluoroalkyl acids on kidney function.
PubDate: 2022-09-12
DOI: 10.1007/s00184-022-00883-0

 A note on the partial likelihood estimator of the proportional hazards
model for combined incident and prevalent cohort data
Abstract: The proportional hazards model has been well studied in the literature for estimating the effect of covariates on the failure time hazard rate. This model is routinely applied to right-censored incident cohort failure time data as well as left-truncated right-censored failure time data obtained from a prevalent cohort study with follow-up. In a meta-analysis or a complex study design, data from both incident cohort and prevalent cohort studies with follow-up may be available. We compare two partial likelihood estimation approaches for the covariate effects using combined incident and prevalent cohort data under the proportional hazards model. We validate the partial likelihood methods through the concept of ancillarity and use simulated cohort data to compare the two procedures.
PubDate: 2022-09-09
DOI: 10.1007/s00184-022-00882-1
