Mathematical Methods of Statistics
Journal Prestige (SJR): 0.43 Number of Followers: 4 Hybrid journal (it can contain Open Access articles) ISSN (Print) 1934-8045 ISSN (Online) 1066-5307 Published by Springer-Verlag [2467 journals] 
 Information Generating Function of Record Values

Abstract: In the present work, we study the information generating (IG) function of record values and examine some of its main properties. We establish some comparison results associated with the IG measure of record values. We show that if two given IG measures of upper record values are equal, the corresponding parent distributions are determined uniquely. We also present some bounds for the IG measure of upper record values based on upper records of a standard exponential distribution. Further, we provide some results on the characterization of the exponential distribution by maximization (minimization) of the IG function of record values under some conditions. We also examine the relative information generating (RIG) measure between the distribution of record values and the corresponding underlying distribution and present some results in this regard. To illustrate the results, several examples are presented throughout the paper.
PubDate: 2022-09-01

 Statistical Inference in a Zero-Inflated Bell Regression Model

Abstract: In this paper, we study the asymptotic properties of the Maximum Likelihood Estimator (MLE) for a zero-inflated Bell regression model. Under some regularity conditions, we establish that the estimator is consistent and asymptotically normal. This lends substantial support to the empirical findings already obtained by several authors. Monte Carlo simulations are conducted to numerically illustrate the main results. The model is applied to a dataset of healthcare demand in the USA.
PubDate: 2022-09-01

 Robbins–Monro Algorithm with $$\boldsymbol{\psi}$$-Mixing Random Errors
Abstract: In this work, we first establish exponential inequalities for the Robbins–Monro algorithm under \(\psi\)-mixing random errors. We then present a numerical application that uses the main result of this work to approximate the theoretical solution of the objective function.
PubDate: 2022-09-01
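As background for the abstract above, the classical Robbins–Monro iteration can be sketched in a few lines. The toy objective, its root, the step sizes, and the iid Gaussian noise below are illustrative choices only; the paper's point is that exponential inequalities still hold when the errors are merely \(\psi\)-mixing rather than independent.

```python
import random

random.seed(0)

def robbins_monro(noisy_obs, x0=0.0, n_steps=5000):
    # Stochastic approximation x_{n+1} = x_n - a_n * Y_n with steps a_n = 1/n,
    # where Y_n is a noisy observation of the objective M at x_n.
    x = x0
    for n in range(1, n_steps + 1):
        x = x - (1.0 / n) * noisy_obs(x)
    return x

# Toy objective M(x) = 2*(x - 3) with root 3, observed with additive iid noise
# (the paper's setting allows psi-mixing, i.e., dependent, error sequences).
root = robbins_monro(lambda x: 2.0 * (x - 3.0) + random.gauss(0.0, 1.0))
```

With decreasing steps \(a_n=1/n\) the iterates converge to the root of the objective, here \(x=3\).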

 Jensen’s Inequality Connected with a Double Random Good

Abstract: In this paper, we define a multiple random good of order \(2\), denoted by \(X_{12}\), whose possible values are of a monetary nature. A two-risky-asset portfolio is a multiple random good of order \(2\). We first establish its expected return using a linear and quadratic metric. We then establish the expected return on \(X_{12}\), denoted by \(\mathbf{P}(X_{12})\), using a multilinear and quadratic metric. An extension of the notion of mathematical expectation of \(X_{12}\) is carried out using the \(\alpha\)-norm of an antisymmetric tensor of order \(2\). An extension of the notion of variance of \(X_{12}\), denoted by \(\textrm{Var}(X_{12})\), is shown using the \(\alpha\)-norm of an antisymmetric tensor of order \(2\) based on changes of origin. An extension of the notion of expected utility connected with \(X_{12}\) is considered, and an extension of Jensen's inequality is shown as well. We focus on how the decision-maker maximizes the expected utility connected with multiple random goods of order \(2\) chosen under conditions of uncertainty and riskiness.
PubDate: 2022-06-01
DOI: 10.3103/S1066530722020028
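For context, the scalar Jensen's inequality that the paper extends reads \(\mathbf{E}[u(X)]\leq u(\mathbf{E}[X])\) for concave utility \(u\); the tensor \(\alpha\)-norm machinery of the paper is not reproduced here. A minimal numerical check, with an assumed two-state monetary good and logarithmic utility:

```python
import math

# A two-state random good: monetary values with their probabilities (made up).
probs = [0.3, 0.7]
vals = [10.0, 20.0]

u = math.log  # a concave utility function

expected_utility = sum(p * u(v) for p, v in zip(probs, vals))
utility_of_expectation = u(sum(p * v for p, v in zip(probs, vals)))
# Jensen's inequality for concave u: E[u(X)] <= u(E[X])
```

Here \(\mathbf{E}[X]=0.3\cdot 10+0.7\cdot 20=17\), so the right-hand side is \(\log 17\), and the inequality is strict because the good is genuinely random.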

 Bounds on the Expectations of $$\boldsymbol{L}$$-Statistics Based on iid Life Distributions
Abstract: We consider order statistics based on independent identically distributed nonnegative random variables. We determine sharp upper bounds on the expectations of arbitrary linear combinations of order statistics, expressed in scale units equal to the \(p\)th roots of the \(p\)th raw moments of the original variables, for various \(p\geq 1\). The bounds are described more precisely for single order statistics and spacings. The lower bounds follow from the upper ones.
PubDate: 2022-06-01
DOI: 10.3103/S1066530722020041
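To see what a bound of this type looks like, here is a crude (deliberately non-sharp) one: for nonnegative variables, \(X_{n:n}^{p}\leq\sum_{i}X_{i}^{p}\), hence \(\mathbf{E}X_{n:n}\leq(n\,\mathbf{E}X^{p})^{1/p}\), which is expressed in the paper's scale unit \((\mathbf{E}X^{p})^{1/p}\). The exponential example and sample sizes below are illustrative; the paper's sharp bounds are tighter.

```python
import random

random.seed(1)

n, p, trials = 5, 2.0, 20000

# Monte Carlo estimate of E[X_{n:n}] for n iid Exp(1) variables.
e_max = sum(max(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)) / trials

# A crude (non-sharp) bound: X_{n:n}^p <= sum_i X_i^p for nonnegative X_i,
# hence E[X_{n:n}] <= (E[X_{n:n}^p])^{1/p} <= (n * E[X^p])^{1/p}.
# For Exp(1), E[X^2] = 2, so the scale unit (E[X^p])^{1/p} is sqrt(2).
bound = (n * 2.0) ** (1.0 / p)
```

For Exp(1) the exact value is the harmonic number \(H_{5}\approx 2.283\), comfortably below the crude bound \(\sqrt{10}\approx 3.162\).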

 Varentropy of Past Lifetimes

Abstract: In a variety of fields of application, the level of information in random quantities is commonly measured by means of the Shannon Entropy. In particular, in reliability theory and survival analysis, time-dependent generalizations of this measure of uncertainty have been considered to dynamically describe changes in the degree of information over time. The Residual Entropy and the Residual Varentropy, for example, have been considered in the specialized literature to measure the information and its variability in residual lifetimes. In a similar way, one can consider dynamic measures of information for past lifetimes, i.e., for random lifetimes of items whose failures are assumed to occur before a fixed inspection time. This paper provides a study of the Past Varentropy, defined as the dynamic measure of variability of information for past lifetimes. The study highlights a particular family of lifetime distributions, whose members are the only ones having constant Past Varentropy.
PubDate: 2022-06-01
DOI: 10.3103/S106653072202003X
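The Past Varentropy discussed above is the variance of the information content \(-\log(f(X)/F(t))\) of the past lifetime \([X\mid X\leq t]\). A small numerical sketch (the distributions and inspection times are illustrative choices): a uniform distribution has a flat past density, hence zero Past Varentropy, while the exponential does not.

```python
import math

def past_varentropy(pdf, cdf, t, steps=20000):
    # Var(-log(f(X)/F(t)) | X <= t): variance of the information content of
    # the past lifetime, computed by a simple Riemann sum over (0, t).
    Ft = cdf(t)
    h = t / steps
    m1 = m2 = 0.0
    for i in range(1, steps):
        x = i * h
        g = pdf(x) / Ft            # past-lifetime density on (0, t)
        info = -math.log(g)        # information content
        m1 += g * info * h
        m2 += g * info * info * h
    return m2 - m1 * m1

# Uniform(0, 2): the past density on (0, t) is flat, so the Past Varentropy is 0.
v_unif = past_varentropy(lambda x: 0.5, lambda x: 0.5 * x, t=1.0)
# Exp(1): here -log g(x) = x + const, so the Past Varentropy is Var(X | X <= t).
v_exp = past_varentropy(lambda x: math.exp(-x), lambda x: 1.0 - math.exp(-x), t=2.0)
```

For Exp(1) at \(t=2\) the value equals the variance of a truncated exponential, about \(0.276\).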

 Matrix Variate Distribution Theory under Elliptical Models—V: The Non-Central Wishart and Inverted Wishart Distributions
Abstract: The non-central Wishart and inverted Wishart distributions are studied in this work under elliptical models; some distributional results are based on generalizations of the well-known Kummer relations, which lead us to determine that some moments have a polynomial representation. Then the non-central \(F\) and “studentized Wishart” distributions are derived in a general setting. After some generalizations, including the so-called non-central generalized inverted Wishart distribution, the classical results based on Gaussian models are derived here as corollaries.
PubDate: 2022-03-01
DOI: 10.3103/S1066530722010021

 D-Optimal Designs for the Mitscherlich Non-Linear Regression Function

Abstract: Mitscherlich’s function is a well-known three-parameter non-linear regression function that quantifies the relation between a stimulus or a time variable and a response. It has many applications, in particular in the field of measurement reliability. Optimal designs for estimation of this function have been constructed only for normally distributed responses with homoscedastic variances. In this paper we generalize this literature to D-optimal designs for discrete and continuous responses having their distribution function in the exponential family. We also demonstrate that our D-optimal designs can coincide with, or differ from, optimal designs for variance-weighted linear regression.
PubDate: 2022-03-01
DOI: 10.3103/S1066530722010033
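To make the design problem concrete, here is a sketch of a locally D-optimal search for one common parametrization of Mitscherlich's function, \(\eta(x)=\theta_{1}+\theta_{2}e^{\theta_{3}x}\). The parametrization, parameter values, design region, and equal weights are all assumptions for illustration; the paper's exponential-family generalization is not reproduced.

```python
import math

def d_criterion(xs, theta2=1.0, theta3=-1.0):
    # Determinant of the (normalized) Fisher information for the model
    # eta(x) = theta1 + theta2 * exp(theta3 * x); the gradient of eta with
    # respect to (theta1, theta2, theta3) is [1, exp(theta3*x), theta2*x*exp(theta3*x)].
    g = [[1.0, math.exp(theta3 * x), theta2 * x * math.exp(theta3 * x)] for x in xs]
    M = [[sum(gi[r] * gi[c] for gi in g) / len(xs) for c in range(3)] for r in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Crude grid search for the middle point of an equally weighted three-point
# design on [0, 5]; the information depends on theta, so the result is only
# locally D-optimal.
best_det, best_mid = max((d_criterion([0.0, m, 5.0]), m)
                         for m in (i * 0.05 for i in range(1, 100)))
```

Because the model is non-linear, this criterion depends on \(\theta\), which is exactly why the literature speaks of locally D-optimal designs.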

 Bounds on the Expectations of $$\boldsymbol{L}$$-Statistics from iid Symmetric Populations in Various Scale Units
Abstract: We consider the order statistics \(X_{1:n},\ldots,X_{n:n}\) based on independent identically symmetrically distributed random variables. We determine sharp upper bounds on the properly centered linear combinations of order statistics \(\sum_{i=1}^{n}c_{i}(X_{i:n}-\mu)\), where \((c_{1},\ldots,c_{n})\) is an arbitrary vector of coefficients from \(n\)-dimensional real space and \(\mu\) is the symmetry center of the parent distribution, in various scale units. The scale units are constructed on the basis of absolute central moments of the parent distribution of various orders. The bounds are specified for single order statistics. The lower bounds follow immediately from the upper ones.
PubDate: 2021-07-01
DOI: 10.3103/S1066530721030030

 A Necessary Bayesian Nonparametric Test for Assessing Multivariate Normality
Abstract: A novel Bayesian nonparametric test for assessing multivariate normal models is presented. Although there are extensive frequentist and graphical methods for testing multivariate normality, it is challenging to find Bayesian counterparts. The approach considered in this paper is based on the Dirichlet process and the squared radii of observations. Specifically, the squared radii are employed to transform the \(m\)-variate problem into a univariate problem, relying on the fact that if a random sample comes from a multivariate normal distribution then the squared radii follow a particular beta distribution. While the Dirichlet process is used as a prior on the distribution of the squared radii, the concentration of the distribution of the Anderson–Darling distance between the posterior process and the beta distribution is compared to that between the prior process and the beta distribution via a relative belief ratio. Key results of the approach are derived. The procedure is illustrated through several examples, in which it shows excellent performance.
PubDate: 2021-07-01
DOI: 10.3103/S1066530721030029
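A note on the squared-radii device used above: when the mean and covariance are known, the squared radii of an \(m\)-variate normal sample are \(\chi^{2}_{m}\) (mean \(m\)); the beta distribution invoked by the paper concerns radii standardized with estimated parameters. A minimal sketch of the known-parameter case:

```python
import random

random.seed(3)

m, n = 3, 5000

# Squared radii ||X||^2 of standard m-variate normal vectors follow a
# chi-square law with m degrees of freedom when mean and covariance are
# known (here 0 and the identity).  The paper's beta distribution arises
# for radii standardized with *estimated* mean and covariance; this sketch
# only illustrates the known-parameter case.
sq_radii = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m)) for _ in range(n)]
mean_sq = sum(sq_radii) / n
```

The empirical mean of the squared radii should be close to \(m=3\), consistent with the \(\chi^{2}_{m}\) law.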

 Inferential Results for a New Inequality Curve

Abstract: We propose inferential results for a new integrated inequality curve, related to a new index of inequality and specifically designed to capture significant shifts in the lower and upper tails of income distributions. In recent decades, indeed, substantial changes have mainly occurred at the opposite ends of income distributions, raising serious concerns among policy makers. These phenomena have been observed in countries such as the US, Germany, the UK, and France. Properties of the index and curve have been investigated, and applications to real data have disclosed a new way to look at inequality. First inferential results for the index have been published as well. It seems natural, now, to be interested also in inferential results for the integrated curve. To fill this gap in the literature, we introduce two empirical estimators for the integrated curve and show their asymptotic equivalence. We then establish their consistency. Finally, we prove the weak convergence in the space \(C[0,1]\) of the corresponding empirical process to a Gaussian process, which is a linear transformation of a Brownian bridge. An analysis of real data from the Bank of Italy Survey of Income and Wealth is also presented, on the basis of the obtained inferential results.
PubDate: 2021-01-01
DOI: 10.3103/S1066530721010026

 Local Dvoretzky–Kiefer–Wolfowitz Confidence Bands

Abstract: In this paper, we revisit the concentration inequalities for the supremum of the cumulative distribution function (CDF) of a real-valued continuous distribution as established by Dvoretzky, Kiefer, and Wolfowitz and revisited later by Massart in two seminal papers. We focus on the concentration of the local supremum over a subinterval, rather than on the full domain. That is, denoting by \(U\) the CDF of the uniform distribution over \([0,1]\) and by \(U_{n}\) its empirical version built from \(n\) samples, we study \(\mathbb{P}\big(\sup_{u\in[\underline{u},\overline{u}]}U_{n}(u)-U(u)>\varepsilon\big)\) for different values of \(\underline{u},\overline{u}\in[0,1]\). Such local controls naturally appear, for instance, when studying the estimation error of spectral risk measures (such as the conditional value at risk), where \([\underline{u},\overline{u}]\) is typically \([0,\alpha]\) or \([1-\alpha,1]\) for a risk level \(\alpha\), after reshaping the CDF \(F\) of the considered distribution into \(U\) by the generalized inverse transform \(F^{-1}\). Extending a proof technique from Smirnov, we provide exact expressions of the local quantities \(\mathbb{P}\big(\sup_{u\in[\underline{u},\overline{u}]}U_{n}(u)-U(u)>\varepsilon\big)\) and \(\mathbb{P}\big(\sup_{u\in[\underline{u},\overline{u}]}U(u)-U_{n}(u)>\varepsilon\big)\) for each \(n,\varepsilon,\underline{u},\overline{u}\). Interestingly, these quantities, seen as functions of \(\varepsilon\), can easily be inverted numerically into functions of the probability level \(\delta\). Although not explicit, they can be computed and tabulated. We plot such expressions and compare them to the classical bound \(\sqrt{\frac{\ln(1/\delta)}{2n}}\) provided by Massart's inequality. We then provide an application of this result to the control of generic functionals of the CDF, motivated by the case of the conditional value at risk.
Last, we extend the local concentration results holding individually for each \(n\) to time-uniform concentration inequalities holding simultaneously for all \(n\), revisiting a reflection inequality by James, which is of independent interest for the study of sequential decision-making strategies.
PubDate: 2021-01-01
DOI: 10.3103/S1066530721010038
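To illustrate the gap the paper addresses, one can compare the full-domain Massart radius with the empirical behaviour of the local supremum on a subinterval. The interval \([0,0.1]\), sample size, and Monte Carlo design below are illustrative choices, not the paper's exact expressions.

```python
import math
import random

random.seed(4)

def massart_radius(n, delta):
    # Two-sided DKW-Massart inequality: P(sup_u |U_n(u) - U(u)| > eps) <= 2 exp(-2 n eps^2);
    # inverting at level delta gives the radius below (the one-sided version
    # replaces ln(2/delta) by ln(1/delta)).
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def local_sup_dev(sample, lo, hi):
    # sup over [lo, hi] of |U_n(u) - u|: attained at jump points or the endpoints.
    n = len(sample)
    xs = sorted(sample)
    devs = []
    for i, x in enumerate(xs):
        if lo <= x <= hi:
            devs.append(abs((i + 1) / n - x))
            devs.append(abs(i / n - x))
    for u in (lo, hi):
        count = sum(1 for x in xs if x <= u)
        devs.append(abs(count / n - u))
    return max(devs)

# The local supremum over [0, 0.1] exceeds the full-domain radius far less
# often than delta, which is what makes dedicated local bounds worthwhile.
n, delta, runs = 200, 0.05, 1000
eps = massart_radius(n, delta)
exceed = sum(local_sup_dev([random.random() for _ in range(n)], 0.0, 0.1) > eps
             for _ in range(runs))
```

The exceedance frequency comes out far below the nominal level \(\delta\), showing how conservative the full-domain radius is on a small subinterval.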

 Applying the Solution for the First Multiplicity of Types Equation to Calculate Exact Approximations of the Probability Distributions of Statistical Values
Abstract: We consider here the use of the solution of the first multiplicity of types equation to compute exact probability distributions of statistical values and their exact approximations. We consider \(\Delta\)-exact distributions as their exact approximations; \(\Delta\)-exact distributions differ from exact distributions by no more than a predetermined, arbitrarily small value \(\Delta\). It is shown that the basis of the exact distribution computing method is an enumeration of the search-area elements for the solution of a linear first multiplicity of types equation composed of multiplicity type vectors. Each element here represents the number of occurrences of elements of a certain type (any sign of an alphabet) in the considered sample. It is shown, at the same time, that the method of restricting the search area for the solution of the first multiplicity of types equation is applied to calculate the exact approximations. We give an expression defining the algorithmic complexity of exact distributions calculated using the first multiplicity solution method; this complexity is finite and allows one, for each value of the alphabet power, to determine the maximum sample size for which exact distributions can be calculated by the first multiplicity solution method using limited computing power. To estimate the algorithmic complexity of computing the exact approximations, we use an expression, obtained for the first time, for the number of solutions of the first multiplicity equation with a limitation on the values of the coordinates of the solution vectors. An expression determining the algorithmic complexity of computing the exact approximations using the solution method for the first multiplicity equation with the constraint on the values of the solution vector coordinates is obtained. The maximal-frequency statistic is used as a parameter restricting the solution vector coordinates; the probability of its excess is less than a predetermined, arbitrarily small value \(\Delta\).
This permits calculating exact approximations of the distributions that differ from their exact distribution values by no more than a chosen value \(\Delta\). Results of calculating the maximum sample sizes for which exact approximations can be computed are given. It is shown that the algorithmic complexity of computing exact distributions exceeds the complexity of computing their exact approximations by many orders of magnitude. It is shown that applying the first multiplicity method to compute exact approximations allows increasing the sample size by a factor of two or more, for equal values of the alphabet power, as compared to computing exact distributions.
PubDate: 2020-10-01
DOI: 10.3103/S1066530720040031
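The object being computed above, in its simplest form, is an exact distribution obtained by enumerating multiplicity (type) vectors. The sketch below does this by brute force for the maximal-frequency statistic over a tiny alphabet; the paper's contribution is precisely to restrict this search area so that \(\Delta\)-exact approximations remain computable for much larger samples. The uniform letter probabilities are an illustrative assumption.

```python
from fractions import Fraction
from math import factorial

def max_freq_dist(k, n):
    # Exact distribution of the maximal-frequency statistic for samples of
    # size n over a k-letter alphabet with uniform letter probabilities,
    # obtained by enumerating multiplicity (type) vectors (n_1, ..., n_k)
    # with sum n and weighting each by its multinomial probability.
    dist = {}
    def rec(prefix, remaining, slots):
        if slots == 1:
            vec = prefix + [remaining]
            weight = Fraction(factorial(n), k ** n)
            for c in vec:
                weight /= factorial(c)
            m = max(vec)
            dist[m] = dist.get(m, Fraction(0)) + weight
            return
        for c in range(remaining + 1):
            rec(prefix + [c], remaining - c, slots - 1)
    rec([], n, k)
    return dist

d = max_freq_dist(2, 4)
```

For \(k=2\), \(n=4\) the exact distribution of the maximal frequency is \(\{2:3/8,\,3:1/2,\,4:1/8\}\), which can be checked by hand against the \(2^{4}=16\) equally likely sequences.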

 Selecting an Augmented Random Effects Model

Abstract: There are many collaborative studies in which the data are discrepant while the uncertainty estimates reported in each study cannot be relied upon. The classical, commonly used random effects model explains this phenomenon by additional noise with a constant heterogeneity variance. This assumption may be inadequate, especially when the smallest uncertainty values correspond to the cases that are most deviant from the bulk of the data. An augmented random effects model for meta-analysis of such studies is offered. It proposes to think of the data as consisting of different classes, with the same heterogeneity variance only within each class. The choice of the classes is to be made on the basis of the classical or restricted likelihood. We discuss the properties of the corresponding procedures, which indicate the studies whose heterogeneity effect is to be enlarged. Conditions for the convergence of several iterative algorithms are given.
PubDate: 2020-10-01
DOI: 10.3103/S1066530720040043

 Censored Gamma Regression with Uncertain Censoring Status

Abstract: In this paper, we consider the problem of censored Gamma regression when the censoring status is missing at random. Three estimation methods are investigated. They consist of solving a censored maximum likelihood estimating equation in which missing data are replaced by values adjusted using either regression calibration, multiple imputation, or inverse probability weights. We show that the resulting estimates are consistent and asymptotically normal. Moreover, while asymptotic variances in missing data problems are generally estimated empirically (using Rubin's rules, for example), we propose closed-form consistent variance estimates based on explicit formulas for the asymptotic variances of the proposed estimates. A simulation study is conducted to assess finite-sample properties of the proposed parameter and asymptotic variance estimates.
PubDate: 2020-10-01
DOI: 10.3103/S106653072004002X

 On a Time-Dependent Divergence Measure between Two Residual Lifetime Distributions
Abstract: Recently, a time-dependent measure of divergence was introduced by Mansourvar and Asadi (2020) to assess the discrepancy between the survival functions of two residual lifetime random variables. In this paper, we derive various time-dependent results on the proposed divergence measure in connection with other well-known measures in reliability engineering. The proposed criterion is also examined in mixture models and in a general class of survival transformation models, which yields some well-known models in lifetime studies and survival analysis. In addition, the time-dependent measure is employed to evaluate the divergence between the lifetime distributions of \(k\)-out-of-\(n\) systems and to assess the discrepancy between the distribution functions of the epoch times of a nonhomogeneous Poisson process.
PubDate: 2020-07-01
DOI: 10.3103/S1066530720030023

 On Some Models of Ordered Random Variables and Characterizations of Distributions
Abstract: The concept of extended neighboring order statistics introduced in Asadi et al. (2001) is a general model containing the models of ordered random variables that are included in the generalized order statistics. This model also includes several models of ordered random variables that are not included in the generalized order statistics and is a helpful tool for unifying characterization results from several models of ordered random variables. In this paper, some general classes of distributions with many applications in reliability analysis and engineering, such as the negative exponential, inverse exponential, Pareto, negative Pareto, inverse Pareto, power function, negative power, beta of the first kind, rectangular, Cauchy, Rayleigh, and Lomax distributions, are characterized using the regression of extended neighboring order statistics and decreasingly ordered random variables.
PubDate: 2020-07-01
DOI: 10.3103/S1066530720030035

 Optimal Rates for Nonparametric F-Score Binary Classification via Post-Processing
Abstract: This work studies the problem of binary classification with the F-score as the performance measure. We propose a post-processing algorithm for this problem which fits a threshold for any score-based classifier to yield a high F-score. The post-processing step involves only unlabeled data and can be performed in logarithmic time. We derive a general finite-sample post-processing bound for the proposed procedure and show that the procedure is minimax rate optimal when the underlying distribution satisfies classical nonparametric assumptions. This result improves upon previously known rates for F-score classification and bridges the gap between the standard classification risk and the F-score. Finally, we discuss the generalization of this approach to set-valued classification.
PubDate: 2020-04-01
DOI: 10.3103/S1066530720020027
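A simplified sketch of threshold post-processing for the F-score: sweep candidate thresholds and keep the F1-maximizing one. Note the paper's procedure fits the threshold from unlabeled data in logarithmic time, whereas this toy version uses labeled validation data and a linear sweep; the scores and labels below are made up.

```python
def f1(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator vanishes.
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def fit_threshold(scores, labels):
    # Sweep candidate thresholds at the observed scores; keep the F1-maximizing one.
    best_t, best_f = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f = f1(tp, fp, fn)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
labels = [0, 0, 1, 1, 1, 1]
t, f = fit_threshold(scores, labels)
```

On this toy data the sweep selects the threshold 0.35, which misclassifies only the score 0.4 negative and reaches \(F_{1}=8/9\).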

 Adaptive Minimax Testing for Circular Convolution

Abstract: Given observations from a circular random variable contaminated by an additive measurement error, we consider the problem of minimax optimal goodness-of-fit testing in a nonasymptotic framework. We propose direct and indirect testing procedures using a projection approach. The structure of the optimal tests depends on regularity and ill-posedness parameters of the model, which are unknown in practice. Therefore, adaptive testing strategies that perform optimally over a wide range of regularity and ill-posedness classes simultaneously are investigated. Considering a multiple testing procedure, we obtain adaptive, i.e., assumption-free, procedures and analyse their performance. Compared with the non-adaptive tests, their radii of testing deteriorate by a logarithmic factor. We show that for testing uniformity this loss is unavoidable by providing a lower bound. The results are illustrated for Sobolev spaces and ordinary or supersmooth error densities.
PubDate: 2020-04-01
DOI: 10.3103/S1066530720020039

 Optimal Adaptive Estimation on $${\mathbb{R}}$$ or $${\mathbb{R}}^{{+}}$$ of the Derivatives of a Density
Abstract: In this paper, we consider the problem of estimating the \(d\)th-order derivative \(f^{(d)}\) of a density \(f\), relying on a sample of \(n\) i.i.d. observations \(X_{1},\dots,X_{n}\) with density \(f\) supported on \({\mathbb{R}}\) or \({\mathbb{R}}^{+}\). We propose projection estimators defined in the orthonormal Hermite or Laguerre bases and study their integrated \({\mathbb{L}}^{2}\) risk. For a density \(f\) belonging to regularity spaces and a projection space of adequate dimension, we obtain rates of convergence for our estimators which are optimal in the minimax sense. The optimal choice of the projection space depends on unknown parameters, so a general data-driven procedure is proposed to reach the bias-variance compromise automatically. We discuss the assumptions, and the estimator is compared to the one obtained by simply differentiating the density estimator. Simulations are finally performed. They illustrate the good performance of the procedure and provide a numerical comparison of projection and kernel estimators.
PubDate: 2020-01-01
DOI: 10.3103/S1066530720010020
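A sketch of the Hermite-basis projection estimator described above, in the simplest case \(d=0\) (the density itself); derivative estimators reuse the same empirical coefficients together with the identity \(\psi_{j}'=\sqrt{j/2}\,\psi_{j-1}-\sqrt{(j+1)/2}\,\psi_{j+1}\) for the orthonormal Hermite functions. The sample size and projection dimension \(D\) are illustrative choices, not the paper's data-driven selection.

```python
import math
import random

random.seed(5)

def hermite_functions(x, degree):
    # Orthonormal Hermite functions psi_0, ..., psi_degree at x, via the
    # three-term recurrence psi_{j+1} = x*sqrt(2/(j+1))*psi_j - sqrt(j/(j+1))*psi_{j-1},
    # starting from psi_0(x) = pi^{-1/4} exp(-x^2/2), psi_1(x) = sqrt(2)*x*psi_0(x).
    psi = [math.pi ** (-0.25) * math.exp(-x * x / 2.0)]
    if degree >= 1:
        psi.append(math.sqrt(2.0) * x * psi[0])
    for j in range(1, degree):
        psi.append(x * math.sqrt(2.0 / (j + 1)) * psi[j]
                   - math.sqrt(j / (j + 1)) * psi[j - 1])
    return psi

# Projection estimator f_hat = sum_j a_hat_j * psi_j with empirical
# coefficients a_hat_j = (1/n) * sum_i psi_j(X_i).
D = 8
sample = [random.gauss(0.0, 1.0) for _ in range(5000)]
coeffs = [0.0] * (D + 1)
for xi in sample:
    for j, v in enumerate(hermite_functions(xi, D)):
        coeffs[j] += v / len(sample)

f_hat_at_0 = sum(a * v for a, v in zip(coeffs, hermite_functions(0.0, D)))
```

For a standard normal sample, the estimate at 0 should be close to \(\varphi(0)\approx 0.3989\); the standard normal density is proportional to \(\psi_{0}\), so the truncation bias vanishes here and only the stochastic error remains.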
