Abstract: In this paper we develop both frequentist and Bayesian estimation methodologies for the parameters of an Exponential–Logarithmic distribution under Type-I hybrid censoring. In the frequentist approach, it is observed that the Maximum Likelihood Estimators (MLEs) do not have closed-form expressions. We use both the EM and SEM algorithms to compute the MLEs and, using the missing information principle, obtain the observed Fisher information matrix, which is then used to construct asymptotic confidence intervals. Further, two bootstrap interval estimates are proposed for the unknown parameters. Under squared error and LINEX loss functions, we obtain Bayes estimates of the unknown parameters, assuming independent gamma and beta priors, using the Lindley method, the Tierney–Kadane method and an importance sampling procedure. The problem of prediction is also explored. A real-life data set as well as simulated data have been analyzed for illustrative purposes. PubDate: 2019-03-18 DOI: 10.1007/s00362-019-01100-3
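The E- and M-steps for Type-I censored lifetimes can be sketched on a simpler model. The following is a minimal illustration, assuming a plain one-parameter exponential lifetime (not the Exponential–Logarithmic model of the paper) with a fixed censoring time c; all parameter values are illustrative. The E-step imputes the conditional mean lifetime of a censored unit, \(c + 1/\lambda\) by memorylessness, and the M-step applies the complete-data MLE.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam_true, c = 200, 0.5, 3.0           # sample size, true rate, censoring time (illustrative)
t = rng.exponential(1.0 / lam_true, n)   # latent lifetimes
obs = np.minimum(t, c)                   # observed, possibly censored, times
d = t <= c                               # event indicator

lam = 1.0                                # starting value
for _ in range(200):
    # E-step: for a censored unit, E[T | T > c] = c + 1/lam by memorylessness
    total = obs[d].sum() + (~d).sum() * (c + 1.0 / lam)
    # M-step: complete-data exponential MLE (rate = n / total imputed time)
    lam = n / total

# closed-form Type-I censored MLE: number of events / total time at risk
lam_mle = d.sum() / obs.sum()
print(lam, lam_mle)
```

For this toy model the EM fixed point coincides with the closed-form censored-data MLE, so the iteration mainly illustrates the mechanics that matter when, as in the paper, no closed form exists.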

Abstract: Ranked set sampling (RSS) is an efficient method for estimating parameters when exact measurement of observations is difficult and/or expensive. In the current paper, several traditional and ad hoc estimators of the scale and shape parameters \(\theta \) and \(\alpha \) of the Pareto distribution \(p(\theta ,\alpha )\) are studied in cases when one parameter is known and when both are unknown, under simple random sampling, RSS and some of its modifications such as extreme RSS (ERSS) and median RSS (MRSS). It is found that, for estimating \(\theta \) from \(p(\theta ,\alpha )\) when \(\alpha \) is known, the best linear unbiased estimator (BLUE) under ERSS is more efficient than the other estimators under the other sampling techniques. For estimating \(\alpha \) from \(p(\theta ,\alpha )\) when \(\theta \) is known, the modified BLUE under MRSS is more efficient than the other estimators under the other sampling techniques. For estimating \(\theta \) and \(\alpha \) from \(p(\theta ,\alpha )\) when both are unknown, the ad hoc estimators under ERSS are more efficient than the other estimators under the other sampling techniques. The efficiencies of all these estimators are simulated under imperfect ranking. A real data set is used for illustration. PubDate: 2019-03-14 DOI: 10.1007/s00362-019-01102-1
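The efficiency gain of RSS over simple random sampling can be checked with a quick Monte Carlo sketch. This assumes perfect ranking and illustrative values (set size m = 5, Pareto shape \(\alpha = 3\), scale \(\theta = 1\)); it compares only the sample mean under the two designs, not the paper's BLUE or ad hoc estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
m, cycles, reps = 5, 4, 2000             # set size, cycles, Monte Carlo replicates
alpha, theta = 3.0, 1.0                  # Pareto shape and scale (illustrative)

def pareto(size):
    # Pareto(theta, alpha) via inverse CDF: theta * U^(-1/alpha), U in (0, 1]
    return theta * (1.0 - rng.uniform(size=size)) ** (-1.0 / alpha)

def rss_sample():
    # one RSS design: in each cycle, rank m sets of size m and keep
    # the i-th order statistic of the i-th set (perfect ranking assumed)
    out = [np.sort(pareto(m))[i] for _ in range(cycles) for i in range(m)]
    return np.array(out)

n = m * cycles
srs_means = np.array([pareto(n).mean() for _ in range(reps)])
rss_means = np.array([rss_sample().mean() for _ in range(reps)])
print(srs_means.var(), rss_means.var())  # RSS mean is less variable
```

Both designs use the same total of n = 20 measured units per replicate; the extra ranking information is what drives the variance reduction.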

Abstract: Consider an experiment in which the primary objective is to determine the significance of a treatment effect at a predetermined type I error and statistical power. Assume that the sample size required to maintain this type I error and power will be re-estimated at an interim analysis. A secondary objective is to estimate the treatment effect. Our main finding is that the asymptotic distributions of standardized statistics are random mixtures of distributions, which are non-normal except under certain model choices for sample size re-estimation (SSR). Monte Carlo simulation studies and an illustrative example highlight the fact that the asymptotic distributions of estimators with SSR may differ from the asymptotic distributions of the same estimators without SSR. PubDate: 2019-02-28 DOI: 10.1007/s00362-019-01095-x
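The mechanics of sample size re-estimation can be illustrated with the standard two-sample normal formula \(n = 2(z_{\alpha/2}+z_\beta)^2\sigma^2/\delta^2\) per arm; the planning values below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.8                 # planned type I error and power
sigma, delta0 = 1.0, 0.5                 # planning values (hypothetical)
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

def n_per_arm(delta, sd):
    # two-sample normal formula: n = 2 (z_{a/2} + z_b)^2 sd^2 / delta^2
    return int(np.ceil(2 * (z_a + z_b) ** 2 * sd**2 / delta**2))

n0 = n_per_arm(delta0, sigma)            # planned sample size per arm
# interim look: the observed effect is smaller than planned, so SSR raises n
n1 = n_per_arm(0.4, sigma)
print(n0, n1)
```

Because n1 depends on interim data, the final test statistic is computed from a random sample size, which is precisely why its asymptotic distribution becomes a random mixture rather than a single normal.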

Abstract: Compositional data modeling is of great practical importance, as exemplified by applications in economic and geochemical data analysis. In this study, we investigate the sliced inverse regression (SIR) procedure for multivariate compositional data with a scalar response. We can quickly achieve dimension reduction for the original multivariate compositional data and then conduct a regression on the dimension-reduced compositions. It is documented that the proposed method is successful in detecting effective dimension reduction directions, which generalizes the theoretical framework of SIR to multivariate compositional data. Comprehensive simulation studies are conducted to evaluate the performance of the proposed SIR procedure, and the simulation results show its feasibility and effectiveness. A real data application is finally used to illustrate the success of the proposed SIR-based method. PubDate: 2019-02-22 DOI: 10.1007/s00362-019-01093-z
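A minimal version of the classical SIR step (on ordinary Euclidean predictors, not compositions) shows how slicing the response and eigen-decomposing the covariance of the slice means recovers an effective dimension reduction direction; the single-index model below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, H = 2000, 5, 10                    # sample size, predictors, slices
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
y = (X @ beta) ** 3 + 0.1 * rng.normal(size=n)   # single-index model

# SIR: standardize X, slice on y, eigen-decompose the covariance of slice means
Xs = (X - X.mean(0)) / X.std(0)
order = np.argsort(y)
slice_means = np.array([Xs[idx].mean(0) for idx in np.array_split(order, H)])
M = slice_means.T @ slice_means / H
w, V = np.linalg.eigh(M)
d1 = V[:, -1]                            # leading SIR direction
cos = abs(d1 @ beta) / (np.linalg.norm(d1) * np.linalg.norm(beta))
print(cos)                               # near 1: the index direction is recovered
```

The compositional extension studied in the paper must additionally handle the unit-sum constraint of the predictors, which this plain sketch ignores.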

Abstract: The statistical inference of multicomponent stress-strength reliability under adaptive Type-II hybrid progressive censored samples for the Weibull distribution is considered. It is assumed that stress and strength are independent Weibull random variables. We study the problem in three cases. First, assuming that the stress and strength have the same shape parameter and different scale parameters, the maximum likelihood estimate (MLE), the approximate maximum likelihood estimate (AMLE) and, due to the lack of explicit forms, two Bayes approximations are derived. Also, asymptotic confidence intervals, two bootstrap confidence intervals and highest posterior density (HPD) credible intervals are obtained. In the second case, when the shape parameter is known, the MLE, the exact Bayes estimate, the uniformly minimum variance unbiased estimator (UMVUE) and different confidence intervals (asymptotic and HPD) are studied. Finally, assuming that the stress and strength have different shape and scale parameters, ML, AML and Bayesian estimation of the multicomponent reliability are considered. The performances of the different methods are compared using Monte Carlo simulations, and one data set is analyzed for illustrative purposes. PubDate: 2019-02-14 DOI: 10.1007/s00362-019-01094-y
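For the common-shape case, single-component reliability has the closed form \(P(X<Y)=\lambda_y^k/(\lambda_x^k+\lambda_y^k)\) in the scale parametrization, and the multicomponent (s-out-of-k) version is easy to approximate by Monte Carlo. A sketch with illustrative parameter values (complete data, not the paper's censored-sample estimators):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = 2.0                              # common Weibull shape (illustrative)
lam_x, lam_y = 1.0, 1.5                  # stress / strength scales (illustrative)
s, k, reps = 2, 3, 100_000               # s-out-of-k system, MC replicates

Y = lam_y * rng.weibull(shape, size=(reps, k))   # k component strengths
X = lam_x * rng.weibull(shape, size=(reps, 1))   # one common stress per system
R_mc = ((Y > X).sum(axis=1) >= s).mean()         # P(at least s strengths exceed stress)

# single-component sanity check against the closed form P(X < Y)
p_closed = lam_y**shape / (lam_x**shape + lam_y**shape)
p_mc = (Y[:, 0] > X[:, 0]).mean()
print(R_mc, p_mc, p_closed)
```

Note that the k strength components are independent only conditionally on the shared stress, which is why the multicomponent reliability is not a simple binomial probability in the marginal p.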

Abstract: By affine resolvable design theory, there are 68 non-isomorphic classes of symmetric orthogonal designs involving 13 factors with 3 levels and 27 runs. This paper gives a comprehensive study of all these 68 non-isomorphic classes from the viewpoint of three criteria: uniformity, the generalized word-length pattern and the Hamming distance pattern, which reveals some interesting projection and level-permutation behaviors of these classes. Selecting the best projected level-permuted subdesigns with \(3\le k\le 13\) factors from all these 68 non-isomorphic classes is discussed via these three criteria, with catalogues of best values. New recommended uniform minimum aberration and minimum Hamming distance designs are given for investigating either qualitative or quantitative \(4\le k\le 13\) factors; these perform better than the existing recommended designs in the literature and the existing uniform designs. A new efficient technique for detecting non-isomorphic designs is given via these three criteria. Using this new approach, in all projections onto \(1\le k\le 13\) factors we classify each of these 68 classes into non-isomorphic subclasses and give the number of isomorphic designs in each subclass. Close relationships among the three criteria and lower bounds of the average uniformity criteria are given as benchmarks for selecting best designs. PubDate: 2019-02-09 DOI: 10.1007/s00362-019-01089-9

Abstract: Compound Cox processes (CCPs) are flexible marked point processes due to the stochastic nature of their intensity. This paper states closed-form expressions for their counting and time statistics in terms of the intensity and mean processes. These are forecast by means of principal components prediction models applied to the mean process in order to obtain attainable results. A proposition proves that only weak restrictions are needed to estimate the probability of a new occurrence. Additionally, the phase-type process is introduced, whose important feature is that its marginal distributions are phase-type with random parameters. Since any non-negative variable can be approximated by a phase-type distribution, the new stochastic process is proposed to model the intensity process of any point process. The CCP with this type of intensity provides an especially general model. Several simulations and the corresponding study of the estimation errors illustrate the results and their accuracy. Finally, an application to real data is performed; extreme temperatures in the South of Spain are modeled by a CCP and forecast. PubDate: 2019-02-07 DOI: 10.1007/s00362-019-01092-0
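The doubly stochastic idea behind a Cox process can be sketched in its simplest mixed-Poisson form: conditioning on a random intensity level produces counts that are overdispersed relative to a plain Poisson process. The gamma intensity below is an illustrative choice, not the phase-type intensity proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
reps, t = 50_000, 1.0                    # replicates, time horizon
a, b = 2.0, 1.5                          # gamma intensity: shape, scale (illustrative)
Lam = rng.gamma(a, b, size=reps)         # random intensity level per path
N = rng.poisson(Lam * t)                 # counts given the intensity

mean_N, var_N = N.mean(), N.var()
# mixed-Poisson moments: E[N] = t E[Lam], Var[N] = t E[Lam] + t^2 Var[Lam]
print(mean_N, var_N)                     # variance exceeds mean: overdispersion
```

The extra term \(t^2\,\mathrm{Var}[\Lambda]\) in the count variance is exactly the flexibility that a deterministic-intensity Poisson process lacks.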

Abstract: Consider two independent normal populations with a common variance and ordered means. For this model, we study the problem of estimating the common variance and the common precision with respect to a general class of scale invariant loss functions. A general minimaxity result is established for estimating the common variance. It is shown that the best affine equivariant estimator and the restricted maximum likelihood estimator are inadmissible. In this direction, we derive a Stein-type improved estimator. We further derive a smooth estimator which improves upon the best affine equivariant estimator. In particular, various scale invariant loss functions are considered and several improved estimators are presented. Furthermore, a simulation study is performed to assess the performance of the improved estimators developed in this paper. Similar results are obtained for the problem of estimating the common precision for the stated model under a general class of scale invariant loss functions. PubDate: 2019-02-06 DOI: 10.1007/s00362-019-01090-2
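The flavor of these dominance results is easiest to see in the one-sample case: under squared error loss, dividing the centered sum of squares by \(n+1\) (the best affine equivariant choice) beats the unbiased divisor \(n-1\). A quick Monte Carlo check (one normal sample, not the paper's two-sample ordered-means model):

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps, sigma2 = 10, 50_000, 1.0        # illustrative sample size and variance
x = rng.normal(0.0, 1.0, size=(reps, n))
ss = ((x - x.mean(1, keepdims=True)) ** 2).sum(1)   # centered sum of squares
mse_unbiased = ((ss / (n - 1) - sigma2) ** 2).mean()
mse_baee = ((ss / (n + 1) - sigma2) ** 2).mean()    # best affine equivariant divisor
print(mse_unbiased, mse_baee)            # the n+1 divisor has smaller risk
```

The paper's Stein-type estimators go further by exploiting the order restriction on the means, which this one-sample sketch cannot show.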

Abstract: We propose here a novel functional inverse regression method (i.e., functional surrogate assisted slicing) for functional data with binary responses. Previously developed methods (e.g., functional sliced inverse regression) can detect no more than one direction in the functional sufficient dimension reduction subspace. In contrast, the proposed new method can detect multiple directions. The population properties of the proposed method are established. Furthermore, we propose a new method to estimate the functional central space which does not require inverting the covariance operator. To determine the structure dimension of the functional sufficient dimension reduction subspace in practice, a modified Bayesian information criterion method is proposed. Numerical studies based on both simulated and real data sets are presented. PubDate: 2019-02-04 DOI: 10.1007/s00362-019-01083-1

Abstract: Rosadi and Peiris (Comput Stat 29:931–943, 2014) applied the second-order least squares estimator (SLS), proposed in Wang and Leblanc (Ann Inst Stat Math 60:883–900, 2008), to regression models with autoregressive errors. In the case of autocorrelated errors, it is shown that the SLS performs well for estimating the parameters of the model and has small bias. For less correlated data, the standard error (SE) of the SLS lies between the SE of the ordinary least squares estimator (OLS) and that of the generalized least squares estimator; however, for more correlated data, the SLS has a higher SE than the OLS estimator. In the case of a regression model with iid errors, Chen, Tsao and Zhou (Stat Pap 53:371–386, 2012) proposed a method to improve the robustness of the SLS against X-outliers. In this paper, we consider a new robust second-order least squares estimator (RSLS), which extends the study in Chen et al. (2012) to regression with autoregressive errors, where the data may be contaminated with all types of outliers (X-, y- and innovation outliers). Besides the regression coefficients, we also propose a robust method to estimate the parameters of the autoregressive errors and the variance of the errors. We evaluate the performance of the RSLS by means of simulation studies, considering both a linear and a nonlinear regression model. The results show that the RSLS performs very well. We also provide guidelines for using the RSLS in practice and present a real example. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0829-9
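The plain (non-robust) SLS criterion of Wang and Leblanc fits the first and second conditional moments simultaneously. A minimal sketch with identity weighting matrices and iid errors (illustrative values; not the robust autoregressive-error version developed in the paper):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0.0, 2.0, n)
beta_true, sig_true = 1.5, 0.5           # illustrative true values
y = beta_true * x + rng.normal(0.0, sig_true, n)

def sls_loss(par):
    # second-order LS: match the first AND second conditional moments jointly
    beta, sig2 = par
    r1 = y - beta * x                    # first-moment residual
    r2 = y**2 - (beta * x) ** 2 - sig2   # second-moment residual
    return np.sum(r1**2 + r2**2)

fit = minimize(sls_loss, x0=[1.0, 1.0], method="Nelder-Mead")
beta_hat, sig2_hat = fit.x
print(beta_hat, sig2_hat)                # near 1.5 and 0.25
```

Because the second-moment residual involves \(\sigma^2\) directly, SLS estimates the error variance jointly with the regression coefficient rather than from residuals afterwards.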

Abstract: Adopting likelihood-based methods of inference in the case of informative sampling often presents a number of difficulties, particularly if the parametric form of the model that describes the sample selection mechanism is unknown and thus requires application of some model selection approach. These difficulties generally arise either due to the complexity of the model holding in the sample or due to identifiability problems. As a remedy, we propose an alternative approach to model selection and estimation in the case of informative sampling. Our approach is based on weighted estimating equations, where the contribution of each observation to the estimating equation is weighted by the inverse probability of being selected. We show how weighted estimating equations can be incorporated into a Bayesian analysis, and how the full Bayesian significance test can be implemented as a model selection tool. We illustrate the efficiency of the proposed methodology in a simulation study. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0828-x
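The core weighting idea can be sketched in its simplest form: solving \(\sum_i w_i (y_i - \mu) = 0\) with \(w_i = 1/p_i\) (inverse inclusion probability) removes the bias of the unweighted sample mean under informative selection. The logistic selection mechanism below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000
y = rng.normal(10.0, 2.0, N)             # finite-population values
# informative selection: larger y-values are more likely to be sampled
p = 1.0 / (1.0 + np.exp(-(y - 10.0)))    # hypothetical inclusion probabilities
sel = rng.uniform(size=N) < p

y_s, p_s = y[sel], p[sel]
naive = y_s.mean()                       # biased upward under this selection
# weighted estimating equation: sum_i w_i (y_i - mu) = 0 with w_i = 1/p_i
ipw = np.sum(y_s / p_s) / np.sum(1.0 / p_s)
print(naive, ipw)                        # ipw recovers the population mean 10
```

This is the frequentist core; the paper's contribution is embedding such weighted equations in a Bayesian analysis and using the full Bayesian significance test for model selection.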

Abstract: A generalized least squares estimation method with inequality constraints for the autoregressive conditional duration model is proposed in this paper. The estimation procedure consists of three stages. The final generalized least squares estimator is consistent and \(\sqrt{T}\)-asymptotically normally distributed. Our estimator has the advantage over the often-used quasi-maximum likelihood estimator that it is easily implemented and does not require the choice of initial values for an iterative optimization procedure. A large number of simulation studies confirm our theoretical results and suggest that the proposed estimator is more robust than the quasi-maximum likelihood estimator. An application to IBM volume durations shows that the proposed estimator performs better than quasi-maximum likelihood estimation in forecasting. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0830-3
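For reference, the ACD(1,1) model generates each duration as its conditional mean times an iid positive error, \(x_t = \psi_t \varepsilon_t\) with \(\psi_t = \omega + \alpha x_{t-1} + \beta \psi_{t-1}\). A simulation sketch with illustrative parameters and exponential errors (this only generates data; it is not the paper's GLS estimation procedure):

```python
import numpy as np

rng = np.random.default_rng(10)
T = 2000
omega, a, b = 0.1, 0.2, 0.7              # ACD(1,1) parameters (illustrative)
psi = np.empty(T)                        # conditional mean durations
x = np.empty(T)                          # observed durations
psi[0] = omega / (1.0 - a - b)           # start at the unconditional mean
x[0] = psi[0] * rng.exponential()
for t in range(1, T):
    psi[t] = omega + a * x[t - 1] + b * psi[t - 1]
    x[t] = psi[t] * rng.exponential()    # duration = conditional mean * iid error
print(x.mean())                          # fluctuates around the unconditional mean 1.0
```

The inequality constraints in the paper's GLS procedure correspond to keeping \(\omega, \alpha, \beta\) in the region where \(\psi_t\) stays positive and the process is stationary.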

Abstract: Detecting a quantitative trait locus, or QTL (a gene influencing a quantitative trait that can be measured), on a given chromosome is a major problem in genetics. We study a population structured in families and assume that the QTL location is the same for all families. We consider the likelihood ratio test (LRT) process related to the test of the absence of a QTL on the interval [0, T] representing a chromosome. We give the asymptotic distribution of the LRT process under the null hypothesis that there is no QTL in any family, and under a local alternative with a QTL at \(t^{\star }\in [0, T]\) in at least one family. We show that the LRT is asymptotically the supremum of the sum of squares of independent interpolated Gaussian processes, where the number of processes corresponds to the number of families. We propose several new methods to compute critical values for QTL detection. Since all these methods rely on asymptotic results, the validity of the asymptotic assumption is checked using simulated data. Finally, we show how to optimize the QTL detection process. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0835-y

Abstract: Most existing Bayesian nonparametric models for spatial areal data assume that the neighborhood structures are known; in practice, however, this assumption may not hold. In this paper, we develop an area-specific stick-breaking process for distributions of random effects, with spatially dependent weights arising from block averaging of underlying continuous surfaces. We show that this prior, which does not depend on specifying neighboring schemes, is noticeably flexible in effectively capturing heterogeneity in spatial dependency across areas. We illustrate the methodology with a dataset involving the expenditure credit of 31 provinces of Iran. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0833-0
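The standard (non-spatial) stick-breaking construction that such models generalize takes only a few lines: Beta fractions break a unit stick into weights attached to atoms drawn from a base measure. A minimal sketch with an illustrative concentration parameter (the paper's contribution is making the weights spatially dependent, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(11)
K, conc = 50, 2.0                        # truncation level, concentration (illustrative)
v = rng.beta(1.0, conc, size=K)          # stick-breaking fractions v_k ~ Beta(1, conc)
# w_k = v_k * prod_{j<k} (1 - v_j): what remains of the stick, times the new break
w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
atoms = rng.normal(size=K)               # atom locations from a N(0, 1) base measure
print(w.sum())                           # weights nearly exhaust the unit stick
```

A draw from the resulting random measure is simply an atom sampled with probabilities proportional to w.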

Abstract: When analyzing time series which are supposed to exhibit long-range dependence (LRD), a basic issue is the estimation of the LRD parameter, for example the Hurst parameter \(H \in (1/2, 1)\). Conventional estimators of H easily lead to spurious detection of long memory if the time series includes a shift in the mean. This defect has fatal consequences in change-point problems: tests for a level shift rely on H, which needs to be estimated beforehand, but this estimation is distorted by the level shift. We investigate two block approaches to adapt estimators of H to the case that the time series includes a jump, and compare them via simulations with other natural techniques as well as with estimators based on the trimming idea. These techniques improve the estimation of H if there is indeed a change in the mean. In the absence of such a change, the methods have little effect on the usual estimation. As an adaption, we recommend an overlapping blocks approach: if one uses a consistent estimator, the adaption preserves this property and performs well in simulations. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0839-7
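The spurious-long-memory effect and the block remedy are easy to demonstrate: a mean shift in white noise inflates sample autocorrelations at long lags, while demeaning the blocks on either side of the jump removes the artifact. A sketch with a single known change point (illustrative; not the paper's estimators of H):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4000
x = rng.normal(size=n)
x[n // 2:] += 2.0                        # level shift at a single change point

def acf(z, lag):
    z = z - z.mean()
    return np.dot(z[:-lag], z[lag:]) / np.dot(z, z)

rho_shift = acf(x, 50)                   # global demeaning: spurious persistence
# block adaption: demean each side of the jump separately
blocks = np.concatenate([x[: n // 2] - x[: n // 2].mean(),
                         x[n // 2:] - x[n // 2:].mean()])
rho_block = acf(blocks, 50)
print(rho_shift, rho_block)              # large vs. near zero
```

A slowly decaying sample ACF like rho_shift is exactly what conventional estimators of H misread as long memory.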

Abstract: In this paper, two new classes of estimators, called the restricted almost unbiased ridge-type principal components estimator and the restricted almost unbiased Liu-type principal components estimator, are introduced. For the two cases when the restrictions are true and when they are not, necessary and sufficient conditions for the superiority of the proposed estimators are derived and compared, respectively. Finally, a Monte Carlo simulation study is given to illustrate the performance of the proposed estimators. PubDate: 2019-02-01 DOI: 10.1007/s00362-016-0821-4
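The motivation for ridge-type shrinkage under a nearly collinear design can be checked directly: the ridge estimator \((X'X + kI)^{-1}X'y\) trades a little bias for a large variance reduction relative to OLS. A Monte Carlo sketch with an illustrative ridge parameter k (plain ridge only, not the restricted almost unbiased estimators proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, k = 100, 200, 1.0               # k: ridge parameter (illustrative)
beta = np.array([1.0, 1.0])
mse_ols = mse_ridge = 0.0
for _ in range(reps):
    z = rng.normal(size=n)
    X = np.column_stack([z + 0.01 * rng.normal(size=n),
                         z + 0.01 * rng.normal(size=n)])   # near-collinear design
    y = X @ beta + rng.normal(0.0, 0.5, n)
    XtX = X.T @ X
    ols = np.linalg.solve(XtX, X.T @ y)
    ridge = np.linalg.solve(XtX + k * np.eye(2), X.T @ y)  # ridge shrinkage
    mse_ols += np.sum((ols - beta) ** 2) / reps
    mse_ridge += np.sum((ridge - beta) ** 2) / reps
print(mse_ols, mse_ridge)                # ridge has much smaller risk here
```

The instability of OLS is concentrated in the poorly identified difference of the two coefficients; adding kI to X'X regularizes exactly that direction.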