Abstract: The consistency of the Aalen–Johansen-derived estimator of state occupation probabilities in non-Markov multi-state settings is studied and established via a new route. This route is based on interval functions and relies on a close connection, established here, between additive and multiplicative transforms of interval functions. Under certain censoring and positivity assumptions, the consistency then follows from explicit expressions, also obtained here, for the additive and multiplicative transforms related to the transition probabilities viewed as interval functions. PubDate: 2019-10-01
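As a point of reference (not the paper's interval-function route), the following minimal sketch computes state occupation probabilities via the Aalen–Johansen product-integral, using hypothetical transition-hazard increments for a three-state illness-death model:

```python
import numpy as np

def aalen_johansen_occupation(init_dist, dA):
    """State occupation probabilities via the product-integral
    prod over event times t of (I + dA(t)), applied to the initial distribution.
    `dA` maps each event time to the matrix of hazard increments
    (off-diagonal: transition increments, diagonal: minus the row sum)."""
    p = np.asarray(init_dist, dtype=float)
    k = p.size
    for t in sorted(dA):
        p = p @ (np.eye(k) + dA[t])
    return p

# Hypothetical increments, states 0 = healthy, 1 = ill, 2 = dead.
dA = {
    1.0: np.array([[-0.10, 0.08, 0.02],
                   [ 0.00, -0.05, 0.05],
                   [ 0.00,  0.00, 0.00]]),
    2.5: np.array([[-0.07, 0.05, 0.02],
                   [ 0.00, -0.10, 0.10],
                   [ 0.00,  0.00, 0.00]]),
}
print(aalen_johansen_occupation([1.0, 0.0, 0.0], dA))  # sums to 1
```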
Abstract: For a multinormal distribution with a p-dimensional mean vector θ = (θ1′, θ2′)′ and an arbitrary unknown dispersion matrix Σ, Rao ([8], [9]) proposed two tests for the problem of testing H0: θ1 = 0, θ2 = 0, Σ unspecified, versus H1: θ1 ≠ 0, θ2 = 0, Σ unspecified. These tests are known as Rao’s W-test and Rao’s U-test, respectively. In this paper, it is shown that Rao’s U-test is admissible while Hotelling’s T²-test is inadmissible. PubDate: 2019-10-01
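For context only, a minimal computation of Hotelling's T² statistic for the unpartitioned hypothesis θ = 0 with unknown Σ (the benchmark shown inadmissible above); Rao's W- and U-tests, which exploit the partition of θ, are not implemented here:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0=None):
    """Hotelling's T^2 statistic for H0: theta = mu0 (default 0), unknown
    dispersion, with its exact p-value via the F distribution."""
    n, p = X.shape
    mu0 = np.zeros(p) if mu0 is None else mu0
    d = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)
    t2 = n * d @ np.linalg.solve(S, d)
    f = (n - p) / (p * (n - 1)) * t2
    return t2, stats.f.sf(f, p, n - p)

rng = np.random.default_rng(10)
X = rng.multivariate_normal(np.zeros(4), np.eye(4), size=60)
print(hotelling_t2(X))
```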
Abstract: In this paper we consider the problem of non-parametric relative regression for twice censored data. We introduce and study a new estimator of the regression function, appropriate when performance is assessed in terms of the mean squared relative error of prediction. We establish uniform consistency, with a rate, over a compact set and asymptotic normality of the suitably normalized estimator; the asymptotic variance is given explicitly. A Monte Carlo study is carried out to evaluate the performance of this estimator. PubDate: 2019-10-01
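A minimal sketch of a relative-error regression estimator of the kind described above, without any censoring correction and with a Gaussian kernel (hypothetical simplifications; not the paper's estimator):

```python
import numpy as np

def relative_nw(x0, X, Y, h):
    """Kernel estimator of the relative-error regression function
    r(x) = E[Y^{-1} | X=x] / E[Y^{-2} | X=x], the minimizer of the
    mean squared relative error E[((Y - r(X)) / Y)^2 | X=x]."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
    return np.sum(w / Y) / np.sum(w / Y ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = 1.0 + 2.0 * X + rng.gamma(shape=2.0, scale=0.1, size=500)  # positive responses
print(relative_nw(0.5, X, Y, h=0.1))
```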
Abstract: It is shown that for any correlation-parametrized model of dependence and any given significance level α ∈ (0, 1), there is an asymptotically optimal transform of Pearson’s correlation statistic R, for which the generally leading error term of the normal approximation vanishes for all values ρ ∈ (−1, 1) of the correlation coefficient. This general result is then applied to the bivariate normal (BVN) model of dependence and to what is referred to in this paper as the SquareV model. In the BVN model, Pearson’s R turns out to be asymptotically optimal for a rather unusual significance level α ≈ 0.240, whereas Fisher’s transform RF of R is asymptotically optimal for the limit significance level α = 0. In the SquareV model, Pearson’s R is asymptotically optimal for a still rather high significance level α ≈ 0.159, whereas Fisher’s transform RF of R is not asymptotically optimal for any α ∈ [0, 1]. Moreover, it is shown that in both the BVN and SquareV models, the transform optimal for a given value of α is in fact asymptotically better than R and RF over wide ranges of significance levels, including α itself. Extensive computer simulations for the BVN and SquareV models suggest that, for sample sizes n ≥ 100 and significance levels α ∈ {0.01, 0.05}, the asymptotically optimal transform of R generally outperforms both Pearson’s R and Fisher’s transform RF, the latter being generally much inferior to both R and the asymptotically optimal transform of R in the SquareV model. PubDate: 2019-10-01
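For orientation, a minimal sketch of the classical Fisher-transform test of a correlation value under bivariate normality (the paper's asymptotically optimal transform, which depends on α and on the model of dependence, is not reproduced here):

```python
import numpy as np
from scipy import stats

def fisher_z_test(x, y, rho0=0.0):
    """Test H0: rho = rho0 using Fisher's transform R_F = atanh(R),
    approximately N(atanh(rho0), 1/(n-3)) under bivariate normality."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = (np.arctanh(r) - np.arctanh(rho0)) * np.sqrt(n - 3)
    return r, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(1)
cov = [[1.0, 0.3], [0.3, 1.0]]
x, y = rng.multivariate_normal([0, 0], cov, size=200).T
print(fisher_z_test(x, y))   # sample R and two-sided p-value for rho = 0
```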
Abstract: Van Zwet (1964) [16] introduced the convex transformation order between two distribution functions F and G, defined by F ≤c G if G⁻¹ ∘ F is convex. A distribution which precedes G in this order should be seen as less right-skewed than G. Consequently, if F ≤c G, any reasonable measure of skewness should be smaller for F than for G; this is the key property when defining any skewness measure. In the existing literature, the treatment of the convex transformation order is restricted to the class of differentiable distribution functions with positive density on the support of F. The aim of this work is to analyze this order in more detail. We show that several of the most well-known skewness measures satisfy the key property mentioned above under very weak or no assumptions on the underlying distributions. Conversely, we explore what restrictions are imposed on the underlying distributions by the requirement that F precede G in the convex transformation order. PubDate: 2019-10-01
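A small numerical sketch of the defining condition: check that G⁻¹(F(x)) has nonnegative second differences on a grid. The standard normal/log-normal pair is used because there G⁻¹(F(x)) = exp(x) exactly, so the check must return True:

```python
import numpy as np
from scipy import stats

def is_convex_transform(transform, grid):
    """Numerically check that t(x) = G^{-1}(F(x)) has nonnegative second
    differences on the grid, i.e. that F precedes G in van Zwet's convex
    transformation order (a grid-based necessary check only)."""
    d2 = np.diff(transform(grid), 2)
    return bool(np.all(d2 >= -1e-9))

F, G = stats.norm, stats.lognorm(s=1.0)           # G^{-1}(F(x)) = exp(x), convex
grid = np.linspace(-3, 3, 400)
print(is_convex_transform(lambda x: G.ppf(F.cdf(x)), grid))   # True
```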
Abstract: This paper extends the successful maxiset paradigm from function estimation to signal detection in inverse problems. In this context, the maxisets do not have the same shape as in the classical estimation framework. Nevertheless, we introduce a robust version of these maxisets that allows us to exhibit tail conditions on the signals of interest. Under this novel paradigm we are able to compare direct and indirect testing procedures. PubDate: 2019-07-01
Abstract: In this paper we are concerned with the weak convergence to Gaussian processes of conditional empirical processes and conditional U-processes from stationary β-mixing sequences indexed by classes of functions satisfying some entropy conditions. We obtain uniform central limit theorems for conditional empirical processes and conditional U-processes when the classes of functions are uniformly bounded, or unbounded with envelope functions satisfying some moment conditions. We apply our results to introduce statistical tests for conditional independence that are multivariate conditional versions of the Kendall statistic. PubDate: 2019-07-01
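A sketch of the kind of building block such tests rest on: a kernel-weighted (conditional) Kendall statistic between two responses given a covariate value. This is a generic conditional Kendall's tau, not necessarily the exact statistic studied in the paper:

```python
import numpy as np

def conditional_kendall_tau(x0, X, Y1, Y2, h):
    """Kernel-weighted Kendall's tau between Y1 and Y2 given X = x0;
    values near 0 are consistent with conditional independence at x0."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    num = den = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            wij = w[i] * w[j]
            num += wij * np.sign((Y1[i] - Y1[j]) * (Y2[i] - Y2[j]))
            den += wij
    return num / den

rng = np.random.default_rng(11)
n = 200
X = rng.uniform(-1, 1, n)
Y1 = X + rng.normal(scale=0.3, size=n)
Y2 = X + rng.normal(scale=0.3, size=n)     # dependent only through X
print(conditional_kendall_tau(0.0, X, Y1, Y2, h=0.2))   # close to 0
```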
Abstract: The present paper studies density deconvolution in the presence of small Berkson errors, in particular, when the variances of the errors tend to zero as the sample size grows. It is known that when Berkson errors are present, in some cases the unknown density can be estimated by simple averaging, without using kernels. However, this may not be the case when the Berkson errors are asymptotically small. By treating the former case as a kernel estimator with zero bandwidth, we obtain optimal expressions for the bandwidth. We show that the density of the Berkson errors acts as a regularizer, so that a kernel estimator is unnecessary when the variance of the Berkson errors lies above a threshold that depends on the shapes of the densities in the model and the number of observations. PubDate: 2019-07-01
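The "simple averaging" estimator mentioned above, sketched for the Berkson model X = W + U with W observed and a known N(0, σ_u²) error density; when σ_u is small (the paper's regime) additional kernel smoothing may be required, which is the point of the bandwidth analysis:

```python
import numpy as np
from scipy import stats

def berkson_average_density(x, W, sigma_u):
    """f_hat(x) = (1/n) * sum_i f_U(x - W_i) for the Berkson model
    X = W + U with known N(0, sigma_u^2) error density f_U.
    The error density itself smooths the empirical measure, so no
    kernel is needed when sigma_u is not too small."""
    x = np.atleast_1d(x)[:, None]
    return stats.norm.pdf(x - W[None, :], scale=sigma_u).mean(axis=1)

rng = np.random.default_rng(2)
W = rng.normal(0.0, 1.0, size=1000)
print(berkson_average_density([0.0, 1.0], W, sigma_u=0.3))
```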
Abstract: In this paper we investigate an indirect regression model characterized by the Radon transform. This model is useful for the recovery of medical images obtained from computed tomography scans. The indirect regression function is estimated using a series estimator motivated by a spectral cutoff technique. Further, we investigate the empirical process of residuals from this regression and show that it satisfies a functional central limit theorem. PubDate: 2019-04-01
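A generic sketch of spectral cutoff regularization for an indirect regression problem y = Af + noise, using the SVD of a discretized operator rather than the Radon-specific singular system used in the paper:

```python
import numpy as np

def spectral_cutoff_estimate(A, y, m):
    """Series estimator with spectral cutoff: invert only on the span of
    the m largest singular values of the (discretized) operator A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coef = (U.T @ y)[:m] / s[:m]
    return Vt[:m].T @ coef

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 50))                     # stand-in for a discretized operator
f_true = np.sin(np.linspace(0, np.pi, 50))
y = A @ f_true + rng.normal(scale=0.1, size=100)
f_hat = spectral_cutoff_estimate(A, y, m=20)
print(np.linalg.norm(f_hat - f_true))
```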
Abstract: Based on X ∼ Nd(θ, σ_X² Id), we study the efficiency of predictive densities under α-divergence loss Lα for estimating the density of Y ∼ Nd(θ, σ_Y² Id). We identify a large number of cases where improvements on a plug-in density are obtainable by expanding the variance, thus extending earlier findings applicable to Kullback–Leibler loss. The results and proofs are unified with respect to the dimension d, the variances σ_X² and σ_Y², and the choice of loss Lα, α ∈ (−1, 1). The findings also apply to a large number of plug-in densities, as well as to restricted parameter spaces with θ ∈ Θ ⊂ ℝ^d. The theoretical findings are accompanied by various observations, illustrations, and implications dealing, for instance, with robustness with respect to the model variances and simultaneous dominance with respect to the loss. PubDate: 2019-04-01
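A Monte Carlo illustration of the variance-expansion phenomenon in the Kullback–Leibler case only (the case the paper extends to α-divergences): the plug-in density N(X, σ_Y² I) is compared with the expanded-variance predictive N(X, (σ_X² + σ_Y²) I), using the closed-form KL divergence between Gaussians:

```python
import numpy as np

def kl_normal(theta, theta_hat, var_true, var_pred, d):
    """KL( N_d(theta, var_true*I) || N_d(theta_hat, var_pred*I) )."""
    return 0.5 * d * (np.log(var_pred / var_true) + var_true / var_pred - 1.0) \
        + 0.5 * np.sum((theta - theta_hat) ** 2, axis=-1) / var_pred

d, var_x, var_y, n_rep = 5, 1.0, 1.0, 20000
rng = np.random.default_rng(4)
theta = np.zeros(d)
X = rng.normal(theta, np.sqrt(var_x), size=(n_rep, d))
plug_in  = kl_normal(theta, X, var_y, var_y, d).mean()          # predictive N(X, var_y I)
expanded = kl_normal(theta, X, var_y, var_x + var_y, d).mean()  # expanded variance
print(plug_in, expanded)   # the expanded-variance predictive has smaller estimated risk
```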
Abstract: We consider a stationary AR(p) model. The autoregression parameters are unknown, as is the distribution of the innovations. Based on the residuals from the parameter estimates, an analog of the empirical distribution function is defined, and tests of Kolmogorov and ω² type are constructed for testing hypotheses on the distribution of the innovations. We obtain the asymptotic power of these tests under local alternatives. PubDate: 2019-04-01
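A minimal sketch of the pipeline: fit the AR(p) coefficients by least squares and apply a Kolmogorov–Smirnov-type statistic to the residuals. Note that the plain KS p-value below ignores the effect of parameter estimation, which is exactly what the paper's asymptotics account for:

```python
import numpy as np
from scipy import stats

def ar_residual_ks(x, p, cdf):
    """Least-squares AR(p) fit, then KS statistic of the residuals against
    the hypothesized innovation cdf (naive p-value, see remark above)."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return stats.kstest(resid, cdf)

rng = np.random.default_rng(5)
n, phi = 1000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(ar_residual_ks(x, p=1, cdf="norm"))
```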
Abstract: Let X1, X2,... be independent random variables observed sequentially and such that X1,..., Xθ−1 have a common probability density p0, while Xθ, Xθ+1,... are all distributed according to p1 ≠ p0. It is assumed that p0 and p1 are known, but the change-point θ ∈ ℤ+ is unknown, and the goal is to construct a stopping time τ that detects θ as soon as possible. The standard approaches to this problem rely essentially on some prior information about θ. For instance, in the Bayes approach, it is assumed that θ is a random variable with a known probability distribution. In the methods related to hypothesis testing, this a priori information is hidden in the so-called average run length. The main goal of this paper is to construct stopping times that are free from a priori information about θ. More formally, we propose an approach to solving approximately the following minimization problem: $$\Delta(\theta;\tau^\alpha)\rightarrow\min_{\tau^\alpha}\quad\text{subject to}\quad\alpha(\theta;\tau^\alpha)\leq\alpha\ \text{ for any }\theta\geq1,$$ where α(θ; τ) = Pθ{τ < θ} is the false alarm probability and Δ(θ; τ) = Eθ(τ − θ)+ is the average detection delay computed for a given stopping time τ. In contrast to the standard CUSUM algorithm based on the sequential maximum likelihood test, our approach is related to multiple hypothesis testing methods and permits, in particular, the construction of universal stopping times with nearly Bayes detection delays. PubDate: 2019-04-01
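For contrast with the prior-free approach proposed above, a minimal sketch of the classical CUSUM stopping time mentioned in the abstract, with a hypothetical Gaussian mean shift:

```python
import numpy as np

def cusum_stopping_time(x, log_lr, threshold):
    """Page's CUSUM: stop the first time the recursion
    S_t = max(0, S_{t-1} + log(p1(x_t)/p0(x_t))) crosses the threshold."""
    s = 0.0
    for t, xt in enumerate(x, start=1):
        s = max(0.0, s + log_lr(xt))
        if s >= threshold:
            return t
    return None  # no alarm raised

# Hypothetical example: N(0,1) observations switch to N(1,1) at theta = 200.
rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 199), rng.normal(1, 1, 300)])
log_lr = lambda z: z - 0.5            # log of the N(1,1)/N(0,1) density ratio
print(cusum_stopping_time(x, log_lr, threshold=5.0))
```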
Abstract: In this article, we propose a new method for analyzing longitudinal data which contain responses that are missing at random. This method consists in solving the generalized estimating equation (GEE) of [8] in which the incomplete responses are replaced by values adjusted using the inverse probability weights proposed in [17]. We show that the resulting estimator (the root of the weighted GEE) is consistent and asymptotically normal, essentially under the same conditions on the marginal distribution and the working correlation matrix as those presented in [15] for the case of complete data, and under minimal assumptions on the missingness probabilities. The method is applied to a real-life data set taken from [13], which examines the incidence of respiratory disease in a sample of 250 pre-school-age Indonesian children who were examined every 3 months for 18 months, using age, gender, and vitamin A deficiency as covariates. PubDate: 2019-04-01
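A simplified sketch of an inverse-probability-weighted estimating equation of this general flavor, reduced to an identity link and independence working correlation (so it collapses to weighted least squares); the observation probabilities `pi_hat` are assumed estimated and positive, and the longitudinal structure is ignored here:

```python
import numpy as np

def ipw_gee_identity(X, Y, observed, pi_hat):
    """Solve sum_i x_i * (delta_i / pi_i) * (y_i - x_i' beta) = 0 for a
    marginal linear model: IPW with zero weight for missing responses."""
    w = observed / pi_hat                      # inverse probability weights
    Y = np.where(observed > 0, Y, 0.0)         # missing values never enter
    XtWX = X.T @ (w[:, None] * X)
    XtWy = X.T @ (w * Y)
    return np.linalg.solve(XtWX, XtWy)

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
pi = 1.0 / (1.0 + np.exp(-(0.5 + X[:, 1])))    # MAR: depends on the covariate
observed = (rng.uniform(size=n) < pi).astype(float)
print(ipw_gee_identity(X, Y, observed, pi))    # close to (1.0, 2.0)
```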
Abstract: We consider a stationary linear AR(p) model with contamination (gross errors in the observations). The autoregression parameters are unknown, as is the distribution of the innovations. Based on the residuals from the parameter estimates, an analog of the empirical distribution function is defined and a test of Pearson’s chi-square type is constructed for testing hypotheses on the distribution of the innovations. We obtain the asymptotic power of this test under local alternatives and establish its qualitative robustness under both the hypothesis and the alternatives. PubDate: 2019-01-01
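A bare-bones sketch of a Pearson chi-square statistic with equiprobable cells, applied here to i.i.d. draws for brevity; in the setting above it would be applied to AR residuals, and the limiting law under estimated parameters and contamination (the paper's subject) is not the plain chi-square(k−1) law assumed by `chisquare`:

```python
import numpy as np
from scipy import stats

def residual_chisquare(resid, cdf, k=10):
    """Pearson chi-square statistic over k equiprobable cells of the
    hypothesized innovation distribution (naive p-value, see remark above)."""
    qs = cdf.ppf(np.linspace(0, 1, k + 1)[1:-1])    # k-1 interior cell boundaries
    counts = np.bincount(np.searchsorted(qs, resid), minlength=k)
    return stats.chisquare(counts)

rng = np.random.default_rng(8)
print(residual_chisquare(rng.normal(size=500), stats.norm, k=10))
```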
Abstract: We establish a large deviation approximation for the density of an arbitrary sequence of random vectors, under several assumptions on the normalized cumulant generating function and its derivatives. We give two statistical applications to illustrate the result, the first dealing with a vector of independent sample variances and the second with a Gaussian multiple linear regression model. Numerical comparisons are finally provided for these two examples. PubDate: 2019-01-01
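A one-dimensional sketch of the type of approximation involved: the saddlepoint (large deviation) approximation to the density of a sample mean, checked against the exact Gamma density in the Exponential(1) case (this is an illustration of the general idea, not the paper's multivariate result):

```python
import numpy as np
from scipy import optimize, stats

def saddlepoint_density(x, K, dK, d2K, n):
    """f_n(x) ~ sqrt(n / (2*pi*K''(s))) * exp(n*(K(s) - s*x)),  K'(s) = x,
    for the density of the mean of n i.i.d. variables with cgf K."""
    s = optimize.brentq(lambda t: dK(t) - x, -50.0, 1.0 - 1e-9)
    return np.sqrt(n / (2 * np.pi * d2K(s))) * np.exp(n * (K(s) - s * x))

# Exponential(1): K(s) = -log(1 - s); the exact density of the mean is Gamma(n, 1/n).
K   = lambda s: -np.log(1 - s)
dK  = lambda s: 1 / (1 - s)
d2K = lambda s: 1 / (1 - s) ** 2
n, x = 20, 1.3
print(saddlepoint_density(x, K, dK, d2K, n), stats.gamma.pdf(x, a=n, scale=1 / n))
```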
Abstract: Estimation of the predictive probability function of a negative binomial distribution is addressed under the Kullback–Leibler risk. An identity that relates Bayesian predictive probability estimation to Bayesian point estimation is derived. Such identities are known in the cases of the normal and Poisson distributions, and the paper extends the result to the negative binomial case. Using the derived identity, a dominance property of a Bayesian predictive probability is studied when the parameter space is restricted. PubDate: 2019-01-01
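For concreteness, a conjugate-prior sketch of a Bayesian predictive probability function for the negative binomial with known r and a Beta(a, b) prior on the success probability (the beta-negative-binomial); the restricted parameter spaces studied in the paper are not reflected here:

```python
import numpy as np
from scipy.special import betaln, gammaln

def nb_bayes_predictive(y, x, r, a, b):
    """Predictive pmf of a new NB(r, p) observation Y = y after observing
    X = x, under a Beta(a, b) prior on p (posterior Beta(a + r, b + x))."""
    a_post, b_post = a + r, b + x
    log_comb = gammaln(y + r) - gammaln(r) - gammaln(y + 1)
    return np.exp(log_comb + betaln(a_post + r, b_post + y) - betaln(a_post, b_post))

y = np.arange(6)
p = nb_bayes_predictive(y, x=3, r=2, a=1.0, b=1.0)
print(p, p.sum())   # first few predictive probabilities (partial sum < 1)
```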
Abstract: We consider the problem of nonparametric density estimation of a random environment from the observation of a single trajectory of a random walk in this environment. We build several density estimators using the beta-moments of this distribution. We then apply the Goldenschluger-Lepski method to select an estimator satisfying an oracle-type inequality. We obtain non-asymptotic bounds for the supremum norm of these estimators that hold when the RWRE is recurrent or transient to the right. A simulation study supports our theoretical findings. PubDate: 2019-01-01
Abstract: In this work we suppose that the random vector (X, Y) satisfies the regression model Y = m(X) + ϵ, where m(·) belongs to some parametric class { \({m_\beta}(\cdot):\beta \in \mathbb{K}\) } and the error ϵ is independent of the covariate X. The response Y is subject to random right censoring. Using nonlinear mode regression, a new estimation procedure for the true unknown parameter vector β0 is proposed that extends the classical least squares procedure for nonlinear regression. We also establish asymptotic properties of the proposed estimator under assumptions on the error density. We investigate its finite-sample performance through a simulation study. PubDate: 2019-01-01
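A minimal sketch of kernel mode regression for a parametric mean function, without the censoring adjustment used in the paper: the parameter is chosen to concentrate the residuals near zero rather than to minimize their squares, so it targets the mode of the error distribution (here a shifted gamma with mode 0):

```python
import numpy as np
from scipy import optimize

def mode_regression(X, Y, m, beta0, h):
    """Maximize sum_i K((Y_i - m(X_i, beta)) / h) with a Gaussian kernel K."""
    def neg_obj(beta):
        r = (Y - m(X, beta)) / h
        return -np.sum(np.exp(-0.5 * r ** 2))
    return optimize.minimize(neg_obj, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(9)
X = rng.uniform(0, 2, 400)
eps = rng.gamma(2.0, 0.3, size=400) - 0.3        # skewed error with mode 0
Y = np.exp(0.8 * X) + eps
m = lambda x, b: np.exp(b[0] * x)                # hypothetical parametric class
print(mode_regression(X, Y, m, beta0=np.array([0.5]), h=0.3))   # close to 0.8
```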
Abstract: The aim of the paper is to show that the presence of one possible type of outliers is not connected to heavy tails of the distribution. On the contrary, a typical situation for the appearance of such outliers is the case of compactly supported distributions. PubDate: 2019-01-01
Authors: B. Levit Pages: 245 - 267 Abstract: For the Hardy classes of functions analytic in the strip of size 2β around the real axis, an optimal method of cardinal interpolation was proposed within the framework of Optimal Recovery [12]. Below, this method, based on the Jacobi elliptic functions, is shown to be optimal according to the criteria of Nonparametric Regression and Optimal Design. In a stochastic non-asymptotic setting, the maximal mean squared error of the optimal interpolant is evaluated explicitly for all noise levels away from 0. A pivotal role is played by an interference effect, in which the oscillations exhibited by the interpolant’s bias and variance mutually cancel each other. In the limiting case β → ∞, the optimal interpolant converges to the well-known Nyquist–Shannon cardinal series. PubDate: 2018-10-01 DOI: 10.3103/s1066530718040014 Issue No: Vol. 27, No. 4 (2018)
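A noise-free sketch of the β → ∞ limit mentioned above, the Nyquist–Shannon cardinal (sinc) series, evaluated for a band-limited test function on a finite grid (so the agreement is up to truncation error):

```python
import numpy as np

def cardinal_series(t, samples, step=1.0):
    """Whittaker-Shannon cardinal series: interpolate from samples taken on
    the grid k*step using sinc kernels."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * step) / step))

grid = np.arange(64)
f = lambda x: np.sin(0.2 * np.pi * x)      # band-limited below the Nyquist rate
print(cardinal_series(31.5, f(grid)), f(31.5))   # close, up to truncation error
```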