Abstract: We consider the use of the solution of the first multiplicity-of-types equation to compute exact probability distributions of statistics and their exact approximations. As exact approximations we consider \({\Delta}\)-exact distributions, which differ from the exact distributions by no more than a predetermined, arbitrarily small value \({\Delta}\). It is shown that the method for computing exact distributions rests on enumerating the elements of the search area for solutions of a linear first multiplicity-of-types equation composed of multiplicity type vectors; each element gives the number of occurrences of elements of a certain type (a sign of the alphabet) in the sample under consideration. It is also shown that exact approximations are calculated by restricting the search area for solutions of the first multiplicity equation. We give an expression for the algorithmic complexity of computing exact distributions by the first multiplicity solution method; this expression is finite and, for each value of the alphabet power, determines the maximum sample size for which exact distributions can be calculated with limited computing power. To estimate the algorithmic complexity of computing exact approximations, we use an expression, obtained here for the first time, for the number of solutions of the first multiplicity equation under a constraint on the values of the coordinates of the solution vectors, and derive from it the complexity of computing exact approximations by the constrained first multiplicity solution method. The maximal frequency statistic serves as the parameter restricting the solution vector coordinates; the probability that it is exceeded is less than a predetermined, arbitrarily small value \({\Delta}\).
This permits calculating exact approximations of the distributions that differ from the exact distribution values by no more than a chosen value \({\Delta}\). Results on the maximum sample sizes for which exact approximations can be computed are given. It is shown that the algorithmic complexity of computing exact distributions exceeds that of computing their exact approximations by many orders of magnitude, and that applying the first multiplicity method to exact approximations allows the sample size to be increased by a factor of two or more, for equal values of the alphabet power, as compared to computing exact distributions. PubDate: 2020-10-01
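The enumeration scheme described above can be pictured with a small sketch (an illustrative reconstruction, not the authors' implementation): for an alphabet of power k and sample size n, one enumerates all multiplicity type vectors (n_1, ..., n_k) summing to n, weights each by its multinomial probability, and, for the \({\Delta}\)-exact approximation, skips vectors whose maximal frequency exceeds a chosen bound.

```python
from itertools import combinations
from math import factorial

def compositions(n, k):
    """All vectors (n_1, ..., n_k) of non-negative integers summing to n
    (stars-and-bars enumeration of the search area)."""
    for bars in combinations(range(n + k - 1), k - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(n + k - 2 - prev)
        yield tuple(parts)

def multinomial_prob(vec, n, k):
    """Probability of a multiplicity type vector under the uniform
    distribution on an alphabet of power k."""
    coef = factorial(n)
    for m in vec:
        coef //= factorial(m)
    return coef / k ** n

def max_freq_distribution(n, k, bound=None):
    """Exact distribution of the maximal frequency statistic; with a
    bound, vectors exceeding it are skipped (restricted search area)."""
    dist = {}
    for vec in compositions(n, k):
        m = max(vec)
        if bound is not None and m > bound:
            continue  # skipped in the Delta-exact approximation
        dist[m] = dist.get(m, 0.0) + multinomial_prob(vec, n, k)
    return dist
```

The restricted version trades a probability mass of at most \({\Delta}\) (the chance that the maximal frequency exceeds the bound) for a much smaller search area.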
Abstract: There are many collaborative studies where the data are discrepant while the uncertainty estimates reported in each study cannot be relied upon. The classical, commonly used random effects model explains this phenomenon by additional noise with a constant heterogeneity variance. This assumption may be inadequate, especially when the smallest uncertainty values correspond to the cases that are most deviant from the bulk of the data. An augmented random effects model for meta-analysis of such studies is proposed. It treats the data as consisting of different classes, with a common heterogeneity variance only within each class. The choice of the classes is to be made on the basis of the classical or restricted likelihood. We discuss the properties of the corresponding procedures, which indicate the studies whose heterogeneity effect is to be enlarged. Conditions for the convergence of several iterative algorithms are given. PubDate: 2020-10-01
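For context, the constant-heterogeneity baseline that this paper augments is commonly fitted with the DerSimonian-Laird moment estimator; the sketch below shows that one-class baseline (a standard textbook construction, not the paper's clustered procedure).

```python
def dersimonian_laird(y, v):
    """Moment estimator of the constant heterogeneity variance tau^2 in
    the classical random effects model y_i ~ N(theta, v_i + tau^2),
    where v_i are the reported within-study variances."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)  # truncate at zero
```

When the data split into classes as proposed in the paper, a separate heterogeneity variance would be fitted within each class instead of one global value.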
Abstract: In this paper, we consider the problem of censored Gamma regression when the censoring status is missing at random. Three estimation methods are investigated. They consist of solving a censored maximum likelihood estimating equation in which missing data are replaced by values adjusted using regression calibration, multiple imputation, or inverse probability weighting. We show that the resulting estimates are consistent and asymptotically normal. Moreover, while asymptotic variances in missing data problems are generally estimated empirically (using Rubin’s rules, for example), we propose closed-form consistent variance estimates based on explicit formulas for the asymptotic variances of the proposed estimates. A simulation study is conducted to assess finite-sample properties of the proposed parameter and asymptotic variance estimates. PubDate: 2020-10-01
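The inverse-probability-weighting idea behind the third method can be illustrated with a generic Horvitz-Thompson sketch (an illustration of the weighting principle only, not the paper's censored-Gamma estimating equation): observed terms are reweighted by the inverse of their observation probability, which keeps the weighted equation unbiased under missingness at random.

```python
def ipw_mean(y, observed, pi):
    """Horvitz-Thompson estimate of E[Y] when y[i] contributes only if
    observed[i] is 1, with known observation probabilities pi[i]."""
    n = len(y)
    return sum(r * yi / p for yi, r, p in zip(y, observed, pi)) / n
```

In the paper's setting the same reweighting is applied to the terms of the censored likelihood estimating equation rather than to a simple mean.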
Abstract: Recently, a time-dependent measure of divergence was introduced by Mansourvar and Asadi (2020) to assess the discrepancy between the survival functions of two residual lifetime random variables. In this paper, we derive various time-dependent results on the proposed divergence measure in connection with other well-known measures in reliability engineering. The proposed criterion is also examined in mixture models and in a general class of survival transformation models that yields some well-known models in lifetime studies and survival analysis. In addition, the time-dependent measure is employed to evaluate the divergence between the lifetime distributions of \(k\)-out-of-\(n\) systems and to assess the discrepancy between the distribution functions of the epoch times of a non-homogeneous Poisson process. PubDate: 2020-07-01
Abstract: The concept of extended neighboring order statistics introduced in Asadi et al. (2001) is a general model containing the models of ordered random variables that are included in the generalized order statistics. It also includes several models of ordered random variables that are not covered by the generalized order statistics and is a helpful tool for unifying characterization results across models of ordered random variables. In this paper, some general classes of distributions with many applications in reliability analysis and engineering, such as the negative exponential, inverse exponential, Pareto, negative Pareto, inverse Pareto, power function, negative power, beta of the first kind, rectangular, Cauchy, Rayleigh, and Lomax distributions, are characterized using the regression of extended neighboring order statistics and decreasingly ordered random variables. PubDate: 2020-07-01
Abstract: This work studies the problem of binary classification with the F-score as the performance measure. We propose a post-processing algorithm for this problem which fits a threshold for any score-based classifier to yield a high F-score. The post-processing step involves only unlabeled data and can be performed in logarithmic time. We derive a general finite-sample post-processing bound for the proposed procedure and show that the procedure is minimax rate optimal when the underlying distribution satisfies classical nonparametric assumptions. This result improves upon previously known rates for F-score classification and bridges the gap between the standard classification risk and the F-score. Finally, we discuss the generalization of this approach to set-valued classification. PubDate: 2020-04-01 DOI: 10.3103/S1066530720020027
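The threshold-fitting step can be pictured with a simple sweep (a simplified labeled-data illustration; the paper's procedure uses only unlabeled data and runs in logarithmic time): for each candidate threshold on the classifier's scores, compute the F-score and keep the maximizer.

```python
def best_f1_threshold(scores, labels):
    """Sweep thresholds over the observed scores of a score-based
    classifier and return (threshold, F1) maximizing the F-score."""
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

On perfectly separable scores the sweep recovers a threshold with F1 equal to 1; the point of the paper is that a comparable threshold can be fitted without labels.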
Abstract: Given observations from a circular random variable contaminated by an additive measurement error, we consider the problem of minimax optimal goodness-of-fit testing in a non-asymptotic framework. We propose direct and indirect testing procedures using a projection approach. The structure of the optimal tests depends on regularity and ill-posedness parameters of the model, which are unknown in practice. Therefore, adaptive testing strategies that perform optimally over a wide range of regularity and ill-posedness classes simultaneously are investigated. Considering a multiple testing procedure, we obtain adaptive, i.e., assumption-free, procedures and analyse their performance. Compared with the non-adaptive tests, their radii of testing deteriorate by a log-factor. We show that for testing uniformity this loss is unavoidable by providing a lower bound. The results are illustrated for Sobolev spaces and ordinary or super smooth error densities. PubDate: 2020-04-01 DOI: 10.3103/S1066530720020039
Abstract: In this paper, we consider the problem of estimating the \(d\)-th order derivative \(f^{(d)}\) of a density \(f\), relying on a sample of \(n\) i.i.d. observations \(X_{1},\dots,X_{n}\) with density \(f\) supported on \({\mathbb{R}}\) or \({\mathbb{R}}^{+}\). We propose projection estimators defined in the orthonormal Hermite or Laguerre bases and study their integrated \({\mathbb{L}}^{2}\)-risk. For a density \(f\) belonging to regularity spaces and a projection space of adequately chosen dimension, we obtain rates of convergence for our estimators which are optimal in the minimax sense. The optimal choice of the projection space depends on unknown parameters, so a general data-driven procedure is proposed to reach the bias-variance compromise automatically. We discuss the assumptions, and the estimator is compared to the one obtained by simply differentiating the density estimator. Simulations are finally performed; they illustrate the good performance of the procedure and provide a numerical comparison of projection and kernel estimators. PubDate: 2020-01-01 DOI: 10.3103/S1066530720010020
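The comparator mentioned at the end, obtained by simply differentiating a density estimator, is easy to sketch: with a Gaussian kernel, the derivative of the kernel density estimate is available in closed form (an illustrative baseline for the first derivative, not the paper's projection estimator).

```python
import math

def kde_derivative(x, sample, h):
    """Derivative of a Gaussian kernel density estimate:
    f'(x) = -(1/(n h^2)) * sum_i u_i * phi(u_i), u_i = (x - X_i)/h."""
    n = len(sample)
    total = 0.0
    for xi in sample:
        u = (x - xi) / h
        total += -u * math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return total / (n * h * h)
```

For a sample symmetric about a point, the estimated derivative vanishes there, matching the true derivative of a symmetric density at its center.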
Abstract: In this paper, we develop Bayes and maximum a posteriori probability (MAP) approaches to monotonicity testing. To simplify this problem, we consider a simple white Gaussian noise model and, with the help of the Haar transform, reduce it to the equivalent problem of testing positivity of the Haar coefficients. This approach permits, in particular, understanding the links between monotonicity testing and sparse vector detection, constructing new tests, and proving their optimality without supplementary assumptions. The main idea in our construction of multi-level tests is based on invariance properties of specific probability distributions. Along with the Bayes and MAP tests, we also construct adaptive multi-level tests that are free of prior information about the sizes of the non-monotonicity segments of the function. PubDate: 2020-01-01 DOI: 10.3103/S1066530720010032
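The reduction to positivity of Haar coefficients can be made concrete: for a nondecreasing sequence sampled on a dyadic grid, every Haar detail contrast, proportional to the mean over the right half of a dyadic block minus the mean over the left half, is non-negative, so non-monotonicity shows up as a negative contrast (a sketch of the idea for a noise-free dyadic signal).

```python
def haar_details(values):
    """All dyadic detail contrasts mean(right half) - mean(left half);
    each is >= 0 when `values` is nondecreasing. len(values) must be
    a power of two."""
    out = []
    block = len(values)
    while block >= 2:
        for start in range(0, len(values), block):
            half = block // 2
            left = values[start:start + half]
            right = values[start + half:start + block]
            out.append(sum(right) / half - sum(left) / half)
        block //= 2
    return out
```

In the white-noise model the same contrasts are observed with Gaussian noise, and monotonicity testing becomes testing that their means are all non-negative.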
Abstract: In the regression model \(Y=b(X)+\sigma(X)\varepsilon\) , where \(X\) has a density \(f\) , this paper deals with an oracle inequality for an estimator of \(bf\) , involving a kernel in the sense of Lerasle et al. [13], selected via the PCO method. In addition to the bandwidth selection for kernel-based estimators already studied in Lacour et al. [12] and Comte and Marie [3], the dimension selection for anisotropic projection estimators of \(f\) and \(bf\) is covered. PubDate: 2020-01-01 DOI: 10.3103/S1066530720010044
Abstract: In this note, we provide upper bounds on the expectation of the supremum of empirical processes indexed by Hölder classes of any smoothness and for any distribution supported on a bounded set in \(\mathbb{R}^{d}\). These results can alternatively be seen as non-asymptotic risk bounds when the unknown distribution is estimated by its empirical counterpart, based on \(n\) independent observations, and the error of estimation is quantified by integral probability metrics (IPMs). In particular, IPMs indexed by Hölder classes are considered and the corresponding rates are derived. These results interpolate between two well-known extreme cases: the rate \(n^{-1/d}\) corresponding to the Wasserstein-1 distance (the least smooth case) and the fast rate \(n^{-1/2}\) corresponding to very smooth functions (for instance, functions from an RKHS defined by a bounded kernel). PubDate: 2020-01-01 DOI: 10.3103/S1066530720010056
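The least smooth extreme can be probed numerically in one dimension with `scipy.stats.wasserstein_distance` (a toy demonstration with assumed uniform data; note that in dimension one the Wasserstein-1 error of the empirical measure already shrinks at the fast parametric-type rate, while \(n^{-1/d}\) is the picture in higher dimension).

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 20001)  # fine quantile grid standing in for U(0,1)

def w1_to_uniform(n):
    """Wasserstein-1 distance between the empirical measure of an
    i.i.d. U(0,1) sample of size n and the uniform distribution."""
    return wasserstein_distance(rng.uniform(0.0, 1.0, n), grid)

w_small, w_large = w1_to_uniform(50), w1_to_uniform(20000)
```

The distance for the large sample is close to zero, illustrating the empirical measure converging to the true distribution in the IPM sense.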
Abstract: The consistency of the Aalen–Johansen-derived estimator of state occupation probabilities in non-Markov multi-state settings is studied and established via a new route. This route is based on interval functions and relies on a close connection, established here, between additive and multiplicative transforms of interval functions. Under certain censoring and positivity assumptions, consistency follows from explicit expressions, which are obtained, for the additive and multiplicative transforms related to the transition probabilities viewed as interval functions. PubDate: 2019-10-01 DOI: 10.3103/S1066530719040033
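The multiplicative transform at the heart of the Aalen–Johansen construction is the product-integral: over event times, the estimated transition matrix is the ordered product of I + dA(t), where the increments dA collect the observed transition fractions. A minimal discrete-time sketch with made-up illness-death increments (illustrative numbers, not from the paper):

```python
import numpy as np

def product_integral(increments):
    """Multiplicative transform prod(I + dA) over increments dA(t_k) of
    a cumulative transition hazard; each row of dA sums to zero."""
    p = np.eye(increments[0].shape[0])
    for da in increments:
        p = p @ (np.eye(da.shape[0]) + da)
    return p

# Hypothetical example: states 0 = healthy, 1 = ill, 2 = dead.
dA1 = np.array([[-0.20, 0.15, 0.05],
                [0.00, -0.10, 0.10],
                [0.00, 0.00, 0.00]])
dA2 = np.array([[-0.10, 0.10, 0.00],
                [0.00, -0.30, 0.30],
                [0.00, 0.00, 0.00]])
P = product_integral([dA1, dA2])
occupation = np.array([1.0, 0.0, 0.0]) @ P  # start everyone healthy
```

The additive transform (the cumulative sum of the increments) and this multiplicative transform determine each other, which is the connection the paper exploits.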
Abstract: For a multinormal distribution with a p-dimensional mean vector θ and an arbitrary unknown dispersion matrix Σ, Rao ([8], [9]) proposed two tests for the problem of testing H0: θ1 = 0, θ2 = 0, Σ unspecified, versus H1: θ1 ≠ 0, θ2 = 0, Σ unspecified. These tests are known as Rao’s W-test and Rao’s U-test, respectively. In this paper, it is shown that Rao’s U-test is admissible while Hotelling’s T2-test is inadmissible. PubDate: 2019-10-01 DOI: 10.3103/S106653071904001X
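Hotelling's T2-test referenced in the comparison is standard and easy to state (a generic implementation of T2 with its exact F transformation, not of Rao's W- or U-tests):

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2(x, mu0):
    """Hotelling's T^2 test of H0: mean = mu0 for an n x p sample from
    a multinormal distribution with unknown dispersion matrix."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    diff = x.mean(axis=0) - np.asarray(mu0, dtype=float)
    s = np.cov(x, rowvar=False)            # unbiased sample covariance
    t2 = n * diff @ np.linalg.solve(s, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2  # exact F(p, n - p) under H0
    return t2, f_dist.sf(f_stat, p, n - p)
```

When the sample mean equals mu0 exactly, T^2 is zero and the p-value is one; the paper's point is that this classical test is nevertheless inadmissible for the stated problem.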
Abstract: In this paper we consider the problem of non-parametric relative regression for twice censored data. We introduce and study a new estimate of the regression function appropriate when performance is assessed in terms of the mean squared relative error of prediction. We establish uniform consistency with rate over a compact set and asymptotic normality of the suitably normalized estimator. The asymptotic variance is given explicitly. A Monte Carlo study is carried out to evaluate the performance of the estimate. PubDate: 2019-10-01 DOI: 10.3103/S1066530719040045
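The estimand behind mean squared relative error has a convenient form: the minimizer of E[((Y − m)/Y)^2 | X = x] over m is E[1/Y | X = x] / E[1/Y^2 | X = x], so a kernel estimate is a ratio of two Nadaraya-Watson smoothers (a sketch of the uncensored version; the paper's estimator additionally handles twice censored data).

```python
import math

def relative_regression(x, xs, ys, h):
    """Kernel estimate of the MSRE-optimal predictor
    E[1/Y | X=x] / E[1/Y^2 | X=x], with a Gaussian kernel of bandwidth h."""
    num = den = 0.0
    for xi, yi in zip(xs, ys):
        k = math.exp(-0.5 * ((x - xi) / h) ** 2)
        num += k / yi
        den += k / (yi * yi)
    return num / den
```

Dividing by Y inside the loss downweights large responses, which is the appeal of relative error when the response spans several scales.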
Abstract: It is shown that for any correlation-parametrized model of dependence and any given significance level α ∈ (0, 1), there is an asymptotically optimal transform of Pearson’s correlation statistic R, for which the generally leading error term for the normal approximation vanishes for all values ρ ∈ (−1, 1) of the correlation coefficient. This general result is then applied to the bivariate normal (BVN) model of dependence and to what is referred to in this paper as the SquareV model. In the BVN model, Pearson’s R turns out to be asymptotically optimal for a rather unusual significance level α ≈ 0.240, whereas Fisher’s transform RF of R is asymptotically optimal for the limit significance level α = 0. In the SquareV model, Pearson’s R is asymptotically optimal for a still rather high significance level α ≈ 0.159, whereas Fisher’s transform RF of R is not asymptotically optimal for any α ∈ [0, 1]. Moreover, it is shown that in both the BVN model and the SquareV model, the transform optimal for a given value of α is in fact asymptotically better than R and RF in wide ranges of values of the significance level, including α itself. Extensive computer simulations for the BVN and SquareV models of dependence suggest that, for sample sizes n ≥ 100 and significance levels α ∈ {0.01, 0.05}, the mentioned asymptotically optimal transform of R generally outperforms both Pearson’s R and Fisher’s transform RF of R, the latter appearing generally much inferior to both R and the asymptotically optimal transform of R in the SquareV model. PubDate: 2019-10-01 DOI: 10.3103/S1066530719040057
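Fisher's transform RF mentioned throughout is the variance-stabilizing map z = atanh(R), approximately normal with variance 1/(n − 3) under the BVN model; the sketch below shows the resulting test of ρ = ρ0 (a textbook construction, not the paper's asymptotically optimal transform).

```python
import math
from statistics import NormalDist

def fisher_z_test(r, n, rho0=0.0):
    """Two-sided p-value for H0: rho = rho0 based on Fisher's transform
    of Pearson's correlation statistic R, for sample size n."""
    z = math.atanh(r) - math.atanh(rho0)
    stat = z * math.sqrt(n - 3)  # approximately standard normal under H0
    return 2.0 * (1.0 - NormalDist().cdf(abs(stat)))
```

The paper's point is that this classical transform is only optimal at the limit level α = 0 in the BVN model, and a level-specific transform does better.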
Abstract: Van Zwet (1964) [16] introduced the convex transformation order between two distribution functions F and G, defined by F ≤c G if G−1 ∘ F is convex. A distribution which precedes G in this order should be seen as less right-skewed than G. Consequently, if F ≤c G, any reasonable measure of skewness should be smaller for F than for G. This is the key property when defining any skewness measure. In the existing literature, the treatment of the convex transformation order is restricted to the class of differentiable distribution functions with positive density on the support of F. The aim of this work is to analyze this order in more detail. We show that several of the best-known skewness measures satisfy the key property mentioned above under very weak or no assumptions on the underlying distributions. In doing so, we conversely explore what restrictions are imposed on the underlying distributions by the requirement that F precede G in the convex transformation order. PubDate: 2019-10-01 DOI: 10.3103/S1066530719040021
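The order can be checked numerically on a grid: F ≤c G holds when G−1 ∘ F has non-negative second differences. For F the unit exponential and G a Pareto(α) distribution, G−1(F(x)) = exp(x/α), which is convex, so the exponential is less right-skewed than the Pareto (a small illustrative check, not an example from the paper).

```python
import math

def is_convex_on_grid(fun, grid, tol=1e-12):
    """Check non-negativity of second differences of `fun` on an equally
    spaced grid, a necessary condition for convexity."""
    vals = [fun(x) for x in grid]
    return all(vals[i - 1] - 2.0 * vals[i] + vals[i + 1] >= -tol
               for i in range(1, len(vals) - 1))

# G^{-1}(F(x)) for F = Exp(1), G = Pareto(alpha) reduces to exp(x / alpha):
# F(x) = 1 - e^{-x} and G^{-1}(u) = (1 - u)^{-1/alpha}.
alpha = 2.0
grid = [0.1 * i for i in range(50)]
exp_le_pareto = is_convex_on_grid(lambda x: math.exp(x / alpha), grid)
```

A grid check of this kind requires no differentiability assumptions, in the spirit of the paper's weakened conditions.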
Abstract: This paper extends the successful maxiset paradigm from function estimation to signal detection in inverse problems. In this context, the maxisets do not have the same shape as in the classical estimation framework. Nevertheless, we introduce a robust version of these maxisets, allowing us to exhibit tail conditions on the signals of interest. Under this novel paradigm we are able to compare direct and indirect testing procedures. PubDate: 2019-07-01 DOI: 10.3103/S1066530719030037
Abstract: In this paper we are concerned with the weak convergence to Gaussian processes of conditional empirical processes and conditional U-processes from stationary β-mixing sequences indexed by classes of functions satisfying some entropy conditions. We obtain uniform central limit theorems for conditional empirical processes and conditional U-processes when the classes of functions are uniformly bounded or unbounded with envelope functions satisfying some moment conditions. We apply our results to introduce statistical tests for conditional independence that are multivariate conditional versions of the Kendall statistics. PubDate: 2019-07-01 DOI: 10.3103/S1066530719030013
Abstract: The present paper studies density deconvolution in the presence of small Berkson errors, in particular, when the variances of the errors tend to zero as the sample size grows. It is known that when Berkson errors are present, in some cases the unknown density estimator can be obtained by simple averaging without using kernels. However, this may not be the case when the Berkson errors are asymptotically small. By treating the former case as a kernel estimator with zero bandwidth, we obtain optimal expressions for the bandwidth. We show that the density of the Berkson errors acts as a regularizer, so that the kernel estimator is unnecessary when the variance of the Berkson errors lies above a threshold that depends on the shapes of the densities in the model and on the number of observations. PubDate: 2019-07-01 DOI: 10.3103/S1066530719030025
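The "simple averaging without using kernels" can be sketched directly: in the Berkson model X = W + U, with W observed and the error density f_U known, the density of X is f_X(x) = E[f_U(x − W)], estimated by averaging f_U(x − W_i) over the sample (an illustrative Gaussian-error sketch; the Berkson error density itself plays the regularizing role that a kernel would otherwise play).

```python
import math

def berkson_density(x, w_sample, sigma):
    """Estimate f_X(x) in the Berkson model X = W + U, U ~ N(0, sigma^2),
    by averaging the error density over the observed W_i (no kernel)."""
    n = len(w_sample)
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(c * math.exp(-0.5 * ((x - w) / sigma) ** 2)
               for w in w_sample) / n
```

When sigma shrinks with the sample size, this average stops being smooth enough on its own, which is exactly the regime where the paper shows a kernel becomes necessary again.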
Abstract: In this paper we investigate an indirect regression model characterized by the Radon transform. This model is useful for the recovery of medical images obtained from computed tomography scans. The indirect regression function is estimated using a series estimator motivated by a spectral cutoff technique. Further, we investigate the empirical process of residuals from this regression and show that it satisfies a functional central limit theorem. PubDate: 2019-04-01 DOI: 10.3103/S1066530719020029
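The spectral cutoff idea behind the series estimator can be sketched on a generic linear inverse problem y = Af + noise: expand in the singular value decomposition of A and invert only the components whose singular values exceed a threshold (a toy matrix illustration with a made-up ill-posed operator, not the Radon-specific estimator).

```python
import numpy as np

def spectral_cutoff(a, y, threshold):
    """Regularized solution of y = A f + noise: invert only singular
    directions whose singular value is at least `threshold`."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    inv = np.divide(1.0, s, out=np.zeros_like(s), where=s >= threshold)
    return vt.T @ (inv * (u.T @ y))

rng = np.random.default_rng(1)
# Hypothetical ill-posed operator: two stable and two nearly null directions.
qu, _ = np.linalg.qr(rng.normal(size=(4, 4)))
qv, _ = np.linalg.qr(rng.normal(size=(4, 4)))
a = qu @ np.diag([1.0, 0.5, 1e-6, 1e-8]) @ qv.T
f_true = np.array([1.0, -1.0, 0.5, 0.25])
y = a @ f_true + 0.01 * rng.normal(size=4)
f_cut = spectral_cutoff(a, y, threshold=1e-3)  # cutoff suppresses noise blow-up
f_naive = np.linalg.pinv(a) @ y                # naive inversion amplifies noise
```

Truncating the tiny singular values sacrifices the components of f they carry, but prevents the noise amplification that makes the naive inverse useless.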