Abstract: In this paper, we focus on isotropic and stationary sphere-cross-time random fields. We first introduce the class of spherical functional autoregressive-moving average processes (SPHARMA), which extend in a natural way the spherical functional autoregressions (SPHAR) recently studied in Caponera and Marinucci (Ann Stat 49(1):346–369, 2021) and Caponera et al. (Stoch Process Appl 137:167–199, 2021); more importantly, we then show that SPHAR and SPHARMA processes of sufficiently large order can be exploited to approximate every isotropic and stationary sphere-cross-time random field, thus generalizing to this infinite-dimensional framework some classical results on real-valued stationary processes. Further characterizations in terms of functional spectral representation theorems and Wold-like decompositions are also established. PubDate: 2021-05-12

Abstract: The literature on Bayesian methods for the analysis of discrete-time semi-Markov processes is sparse. In this paper, we introduce the semi-Markov beta-Stacy process, a stochastic process useful for the Bayesian non-parametric analysis of semi-Markov processes. The semi-Markov beta-Stacy process is conjugate with respect to data generated by a semi-Markov process, a property which makes it easy to obtain probabilistic forecasts. Its predictive distributions are characterized by a reinforced random walk on a system of urns. PubDate: 2021-04-01
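The reinforcement mechanism behind such predictive distributions can be illustrated with a plain Pólya urn, a far simpler scheme than the semi-Markov beta-Stacy process itself; the colours, counts, and reinforcement rule below are purely illustrative.

```python
import random

def polya_urn_draws(initial, n_draws, reinforcement=1, seed=0):
    """Sample from a Polya urn: after each draw, return the ball plus
    `reinforcement` extra balls of the same colour.  This illustrates how
    reinforcement makes predictive probabilities depend on past data."""
    rng = random.Random(seed)
    counts = dict(initial)        # colour -> number of balls currently in the urn
    draws = []
    for _ in range(n_draws):
        total = sum(counts.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for colour, c in counts.items():
            acc += c
            if r <= acc:
                draws.append(colour)
                counts[colour] += reinforcement   # the reinforcement step
                break
    return draws, counts

draws, counts = polya_urn_draws({"a": 1, "b": 1}, 100)
```

Each draw shifts the predictive distribution toward colours seen before, which is the qualitative behaviour of a reinforced random walk on a system of urns.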

Abstract: In this paper we consider an ergodic diffusion process with jumps whose drift coefficient depends on \(\mu \) and whose volatility coefficient depends on \(\sigma \) , two unknown parameters. We suppose that the process is discretely observed at the instants \((t^n_i)_{i=0,\ldots ,n}\) with \(\Delta _n=\sup _{i=0,\ldots ,n-1} (t^n_{i+1}-t^n_i) \rightarrow 0\) . We introduce an estimator of \(\theta :=(\mu , \sigma )\) , based on a contrast function, which is asymptotically Gaussian without requiring any condition on the rate at which \(\Delta _n \rightarrow 0\) , assuming a finite jump activity. This extends earlier results where a condition on the discretization step was needed (see Gloter et al. in Ann Stat 46(4):1445–1480, 2018; Shimizu and Yoshida in Stat Inference Stoch Process 9(3):227–277, 2006) or where only the estimation of the drift parameter was considered (see Amorino and Gloter in Scand J Stat 47:279–346, 2019. https://doi.org/10.1111/sjos.12406). In general situations, our contrast function is not explicit and in practice one has to resort to some approximation. We propose explicit approximations of the contrast function, such that the estimation of \(\theta \) is feasible under the condition that \(n\Delta _n^k \rightarrow 0\) where \(k>0\) can be arbitrarily large. This extends the results obtained by Kessler (Scand J Stat 24(2):211–229, 1997) in the case of continuous processes. PubDate: 2021-04-01
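As a toy illustration of contrast-based drift estimation with jump filtering (not the authors' contrast function), the following sketch simulates a mean-reverting jump diffusion, then estimates the drift parameter by least squares after discarding increments too large to come from the continuous part; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: dX_t = -mu * X_t dt + sigma dW_t + jumps (compound Poisson),
# observed on an equidistant grid of step dt.
mu, sigma, dt, n = 2.0, 0.5, 0.01, 50_000
lam, jump_sd = 1.0, 1.0            # jump intensity and jump-size std dev
X = np.empty(n + 1); X[0] = 0.0
for i in range(n):
    jump = rng.normal(0.0, jump_sd) if rng.random() < lam * dt else 0.0
    X[i + 1] = X[i] - mu * X[i] * dt + sigma * np.sqrt(dt) * rng.normal() + jump

# Euler least-squares contrast for the drift, with a threshold that
# discards increments too large to come from the continuous part.
dX = np.diff(X)
keep = np.abs(dX) < 3 * sigma * np.sqrt(dt) * np.sqrt(np.log(1 / dt))
Xk, dXk = X[:-1][keep], dX[keep]
mu_hat = -np.sum(Xk * dXk) / (dt * np.sum(Xk ** 2))
```

The thresholding step mimics the standard way of separating the continuous part from the jumps under finite jump activity; without it, the large jump increments would heavily bias the drift estimate.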

Abstract: This paper deals with the parametric inference for integrated continuous time signals embedded in an additive Gaussian noise and observed at deterministic discrete instants which are not necessarily equidistant. The unknown parameter is multidimensional, compounded of a signal-of-interest parameter and a variance parameter of the noise. We state the consistency and the minimax efficiency of the maximum likelihood estimator and of the Bayesian estimator when the time of observation tends to infinity and the delays between two consecutive observations tend to 0 or are only bounded. The class of signals under consideration contains, among others, almost periodic signals and also non-continuous periodic signals. However, the problem of frequency estimation is not considered here. Furthermore, in this paper the discretely observed signal-plus-noise model is considered as a particular case of a more general model of independent Gaussian observations forming a triangular array. PubDate: 2021-04-01

Abstract: We discuss estimation problems where a polynomial \(s\rightarrow \sum _{i=0}^\ell \vartheta _i s^i\) with strictly positive leading coefficient is observed under Ornstein–Uhlenbeck noise over a long time interval. We prove local asymptotic normality (LAN) and specify asymptotically efficient estimators. We apply this to the following problem: feeding noise \(dY_t\) into the classical (deterministic) Hodgkin–Huxley model in neuroscience, with \(Y_t=\vartheta t + X_t\) and X some Ornstein–Uhlenbeck process with backdriving force \(\tau \) , we have asymptotically efficient estimators for the pair \((\vartheta ,\tau )\) ; based on observation of the membrane potential up to time n, the estimate for \(\vartheta \) converges at rate \(\sqrt{n^3\,}\) . PubDate: 2021-04-01
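A minimal numerical sketch of the simplest instance \(Y_t=\vartheta t + X_t\) with X an Ornstein–Uhlenbeck process: the ordinary least-squares slope already recovers \(\vartheta \) from a discretized path, though it is not the asymptotically efficient estimator constructed in the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the model: Y_t = theta * t + X_t, where X is an
# Ornstein-Uhlenbeck process dX_t = -tau * X_t dt + dW_t (Euler scheme).
theta, tau, dt, n = 0.7, 1.5, 0.01, 50_000
X = np.empty(n + 1); X[0] = 0.0
for i in range(n):
    X[i + 1] = X[i] - tau * X[i] * dt + np.sqrt(dt) * rng.normal()
t = dt * np.arange(n + 1)
Y = theta * t + X

# Ordinary least-squares slope: a simple (not efficient) estimator of theta.
theta_hat = np.sum((t - t.mean()) * (Y - Y.mean())) / np.sum((t - t.mean()) ** 2)
```

Because the OU noise is stationary, the slope error shrinks very fast with the observation horizon, consistent with the fast rates discussed in the abstract.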

Abstract: We consider drift estimation problems for high-dimensional ergodic diffusion processes in a nonparametric setting, based on observations at discrete fixed time moments in the case when the diffusion coefficients are unknown. To this end, on the basis of sequential analysis methods we develop model selection procedures, for which we show non-asymptotic sharp oracle inequalities. Through the obtained inequalities we show that the constructed model selection procedures are asymptotically efficient in the adaptive setting, i.e. in the case when the model regularity is unknown. For the first time for such problems, we find in explicit form the celebrated Pinsker constant which provides the sharp lower bound for the minimax squared accuracy normalized by the optimal convergence rate. Then we show that the asymptotic quadratic risk of the model selection procedure coincides with the obtained lower bound, i.e. the constructed procedure is efficient. Finally, on the basis of the constructed model selection procedures, in the framework of big data models we provide efficient estimation without using the parameter dimension or any sparsity conditions. PubDate: 2021-03-27

Abstract: We are interested in estimating the location of what we call a “smooth change-point” from n independent observations of an inhomogeneous Poisson process. The smooth change-point is a transition of the intensity function of the process from one level to another which happens smoothly, but over such a small interval that its length \(\delta _n\) is considered to be decreasing to 0 as \(n\rightarrow +\infty \) . We show that if \(\delta _n\) goes to zero slower than 1/n, our model is locally asymptotically normal (with a rather unusual rate \(\sqrt{\delta _n/n}\) ), and the maximum likelihood and Bayesian estimators are consistent, asymptotically normal and asymptotically efficient. If, on the contrary, \(\delta _n\) goes to zero faster than 1/n, our model is non-regular and behaves like a change-point model. More precisely, in this case we show that the Bayesian estimators are consistent, converge at rate 1/n, have non-Gaussian limit distributions and are asymptotically efficient. All these results are obtained using the likelihood ratio analysis method of Ibragimov and Khasminskii, which equally yields the convergence of polynomial moments of the considered estimators. However, in order to study the maximum likelihood estimator in the case where \(\delta _n\) goes to zero faster than 1/n, this method cannot be applied using the usual topologies of convergence in functional spaces. Such a study would require an alternative topology and is deferred to future work. PubDate: 2021-03-24
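A toy version of the model (not the asymptotic analysis of the paper) can be sketched directly: simulate n independent inhomogeneous Poisson processes whose intensity ramps smoothly from one level to another, then estimate the change-point location by grid-search maximum likelihood with the levels and ramp width assumed known; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth change-point intensity on [0, 1]: level a before tau, a linear
# ramp of width delta, level b after.
a, b, tau, delta, n_copies = 2.0, 10.0, 0.4, 0.02, 200

def intensity(t, tau_c):
    return a + (b - a) * np.clip((t - tau_c) / delta, 0.0, 1.0)

# Simulate n_copies independent inhomogeneous Poisson processes by thinning
# against the constant dominating rate b.
events = []
for _ in range(n_copies):
    t_prop = np.sort(rng.uniform(0, 1, rng.poisson(b)))
    keep = rng.uniform(0, b, t_prop.size) < intensity(t_prop, tau)
    events.append(t_prop[keep])
events = np.concatenate(events)

# Grid-search maximum likelihood for tau (a, b, delta assumed known).
def loglik(tau_c):
    integral = a * tau_c + 0.5 * (a + b) * delta + b * (1 - tau_c - delta)
    return np.sum(np.log(intensity(events, tau_c))) - n_copies * integral

grid = np.linspace(0.1, 0.9, 801)
tau_hat = grid[np.argmax([loglik(g) for g in grid])]
```

The log-likelihood combines the log-intensities at the pooled event times with the integrated intensity, the standard form for Poisson process likelihoods.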

Abstract: In this paper we address the problem of estimating the posterior distribution of the static parameters of a continuous-time state space model with discrete-time observations by an algorithm that combines the Kalman filter and a particle filter. The proposed algorithm is semi-recursive and has a two-layer structure, in which the outer layer provides the estimation of the posterior distribution of the unknown parameters and the inner layer provides the estimation of the posterior distribution of the state variables. This algorithm has a similar structure to the so-called recursive nested particle filter, but unlike the latter, in which both layers use a particle filter, our algorithm introduces a dynamic kernel to sample the parameter particles in the outer layer to achieve a higher convergence speed. Moreover, this algorithm implements the Kalman filter in the inner layer to reduce the computational time. It can also be used to estimate parameters that suddenly change value. We prove that, for a state space model with a certain structure, the estimated posterior distributions of the unknown parameters and the state variables converge to the actual distributions in \(L^p\) with rate of order \({\mathcal {O}}(N^{-\frac{1}{2}}+\varDelta ^{\frac{1}{2}})\) , where N is the number of particles for the parameters in the outer layer and \(\varDelta \) is the maximum time step between two consecutive observations. We present numerical results of the implementation of this algorithm; in particular, we implement it for affine interest rate models, possibly with stochastic volatility, although the algorithm can be applied to a much broader class of models. PubDate: 2021-03-03 DOI: 10.1007/s11203-021-09239-3

Abstract: Let \(X=(X_t)_{t\ge 0}\) be a known process and T an unknown random time independent of X. Our goal is to derive the distribution of T based on an iid sample of \(X_T\) . Belomestny and Schoenmakers (Stoch Process Appl 126(7):2092–2122, 2015) propose a solution based on the Mellin transform in the case where X is a Brownian motion. Applying their technique we construct a non-parametric estimator for the density of T for a self-similar one-dimensional process X. We calculate the minimax convergence rate of our estimator in some examples, with a particular focus on Bessel processes, where we also show asymptotic normality. PubDate: 2021-03-01 DOI: 10.1007/s11203-020-09234-0

Abstract: In this article, the maximum spacing (MSP) method is extended to continuous time Markov chains and semi-Markov processes and consistency of the MSP estimator is proved. For independent and identically distributed univariate observations the idea behind the MSP method is to approximate the Kullback–Leibler information so that each contribution is bounded from above. Following the same idea, the MSP function in this article is defined as an approximation of the relative entropy rate for semi-Markov processes and continuous time Markov chains. The MSP estimator is defined as the parameter value that maximizes the MSP function. Consistency of the MSP estimator is also studied when the assigned model is incorrect. PubDate: 2021-02-15 DOI: 10.1007/s11203-021-09238-4
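In the i.i.d. case the MSP idea can be sketched directly: transform the ordered sample through the model CDF and maximize the mean log-spacing. The exponential model and parameter values below are only an illustration of the classical version, not the semi-Markov extension of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Maximum spacing (MSP) estimation for an i.i.d. exponential model with
# CDF F_theta(x) = 1 - exp(-theta * x); true theta = 3.
x = np.sort(rng.exponential(scale=1 / 3.0, size=2000))

def msp(theta):
    # Spacings of the transformed sample, padded with 0 and 1 at the ends.
    u = np.concatenate(([0.0], 1.0 - np.exp(-theta * x), [1.0]))
    spacings = np.diff(u)
    return np.mean(np.log(np.maximum(spacings, 1e-300)))

grid = np.linspace(0.5, 10.0, 2000)
theta_hat = grid[np.argmax([msp(g) for g in grid])]
```

Maximizing the mean log-spacing keeps every contribution bounded from above, which is the Kullback–Leibler approximation idea the abstract refers to.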

Abstract: We provide a rigorous mathematical foundation of the theory for the higher-order asymptotic behavior of the one-dimensional Hawkes process with an exponential kernel. As an important application, we give the second-order asymptotic distribution for the maximum likelihood estimator of the exponential Hawkes process. PubDate: 2021-01-18 DOI: 10.1007/s11203-021-09237-5
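The exponential Hawkes model itself can be simulated by Ogata's thinning algorithm, which gives a quick check of the stationary mean intensity \(\mu /(1-\alpha /\beta )\) ; this sketch illustrates the model only, not the higher-order asymptotics of the paper, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# One-dimensional Hawkes process with exponential kernel:
# lambda(t) = mu + sum_{t_i < t} a * exp(-b * (t - t_i)), a / b < 1.
# Stationary mean intensity: mu / (1 - a / b).
mu, a, b, T = 1.0, 0.5, 1.0, 2000.0
t, lam_excess, events = 0.0, 0.0, []
while t < T:
    lam_bar = mu + lam_excess            # valid upper bound: the intensity
    w = rng.exponential(1.0 / lam_bar)   # is non-increasing between events
    t += w
    lam_excess *= np.exp(-b * w)         # decay of the excitation term
    if t < T and rng.uniform() * lam_bar <= mu + lam_excess:
        events.append(t)
        lam_excess += a                  # each event adds a jump of size a
rate_hat = len(events) / T
```

The empirical event rate over a long horizon should be close to \(\mu /(1-\alpha /\beta )=2\) here, a standard sanity check before attempting likelihood estimation.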

Abstract: Let the Ornstein–Uhlenbeck process \((X_t)_{t\ge 0}\) driven by a fractional Brownian motion \(B^{H }\) , described by \(dX_t = -\theta X_t dt + \sigma dB_t^{H }\) , be observed at discrete time instants \(t_k=kh\) , \(k=0, 1, 2, \ldots , 2n+2 \) . We propose ergodic-type statistical estimators \({\hat{\theta }}_n \) , \({\hat{H}}_n \) and \({\hat{\sigma }}_n \) to estimate all the parameters \(\theta \) , H and \(\sigma \) in the above Ornstein–Uhlenbeck model simultaneously. We prove the strong consistency and the rate of convergence of the estimators. The step size h can be arbitrarily fixed and is not forced to go to zero, which is usually the case in practice. The tools used are the generalized moment approach (via the ergodic theorem) and Malliavin calculus. PubDate: 2021-01-13 DOI: 10.1007/s11203-020-09235-z
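For the special case \(H=1/2\) the sampled process is an exact AR(1), and a generalized-moment sketch of the estimation idea is easy to write down; the fractional case requires simulating \(B^H\) and is not attempted here. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Special case H = 1/2 (ordinary Brownian driver): the discretely sampled
# OU process is an exact AR(1), X_{k+1} = e^{-theta h} X_k + eps_k.
theta, sigma, h, n = 2.0, 0.5, 0.1, 50_000
phi = np.exp(-theta * h)
eps_sd = sigma * np.sqrt((1 - phi ** 2) / (2 * theta))
X = np.empty(n); X[0] = 0.0
for k in range(n - 1):
    X[k + 1] = phi * X[k] + eps_sd * rng.normal()

# Ergodic moment estimators: the stationary variance sigma^2 / (2 theta)
# and the lag-h autocorrelation e^{-theta h} identify (theta, sigma).
g0 = np.mean(X * X)
g1 = np.mean(X[1:] * X[:-1])
theta_hat = -np.log(g1 / g0) / h
sigma_hat = np.sqrt(2 * theta_hat * g0)
```

Note that h = 0.1 stays fixed here; the moment identities hold for any fixed step size, which mirrors the abstract's point that h need not shrink.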

Abstract: We show the asymptotic distributions of the residual process in the Ornstein–Uhlenbeck model when the model is true. These limits are a Brownian motion or a Brownian bridge, depending on whether we estimate one parameter or two. This leads to a seemingly simple asymptotic theory of goodness-of-fit tests based on this process. However, we then show that the residual process leads to deficient testing procedures unless a transformed form of it is introduced. The transformed process is introduced, and its role is explained through a connection with what is known for the so-called chimeric alternatives in testing problems for samples. PubDate: 2021-01-13 DOI: 10.1007/s11203-020-09233-1

Abstract: We develop a nonparametric technique for the estimation of curve trajectories using HARDI data. For various regions of the brain, we consider the imaging signal process and apply multivariate kernel smoothing techniques to a general function f describing the signal process obtained from the MRI image. At each location in the brain we search for the direction of maximum diffusion on the unit sphere, and then trace the integral curve driven by the vector field to obtain estimates of the curve trajectories. We establish the convergence of the properly normalized curve estimators to a Gaussian process. The method is computationally efficient since at each step of the curve tracing we construct a pointwise confidence ellipsoid region, as opposed to exhaustive iterative sampling methods. These curve trajectories are models of axonal fibers whose location and geometry are important in neuroscience. PubDate: 2021-01-11 DOI: 10.1007/s11203-020-09236-y
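The curve-tracing step can be sketched on a known planar vector field (a stand-in for the estimated direction of maximum diffusion): Euler steps along a unit rotational field should recover circular integral curves, which gives a built-in check of the tracer.

```python
import numpy as np

# Toy curve tracing: follow a unit vector field by small Euler steps.
# The field v(x, y) = (-y, x) / ||(x, y)|| has circles as integral curves,
# so the traced path should stay (approximately) on the unit circle.
def field(p):
    v = np.array([-p[1], p[0]])
    return v / np.linalg.norm(v)

def trace(p0, step=1e-3, n_steps=5000):
    path = [np.asarray(p0, dtype=float)]
    for _ in range(n_steps):
        path.append(path[-1] + step * field(path[-1]))
    return np.array(path)

path = trace([1.0, 0.0])
radii = np.linalg.norm(path, axis=1)
```

In the actual method the field comes from kernel-smoothed HARDI data and each step carries a confidence ellipsoid; the Euler tracer above is only the deterministic skeleton of that procedure.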

Abstract: A stochastic hybrid system, also known as a switching diffusion, is a continuous-time Markov process with state space consisting of discrete and continuous parts. We consider parametric estimation of the Q matrix for the discrete state transitions and of the drift coefficient for the diffusion part. First, we derive the likelihood function under the complete observation of a sample path in continuous-time. Then, extending a finite-dimensional filter for hidden Markov models developed by Elliott et al. (Hidden Markov Models, Springer, 1995) to stochastic hybrid systems, we derive the likelihood function and the EM algorithm under a partial observation where the continuous state is monitored continuously in time, while the discrete state is unobserved. PubDate: 2021-01-07 DOI: 10.1007/s11203-020-09231-3

Abstract: Max-stable processes have been extended to quantify extremal dependence in spatiotemporal data. Due to the interaction between space and time, spatiotemporal data are often complex to analyze, so characterizing these dependencies is one of the crucial challenges in this field of statistics. This paper suggests a semiparametric inference methodology based on the spatiotemporal F-madogram for estimating the parameters of a space-time max-stable process using gridded data. The performance of the method is investigated through various simulation studies. Finally, we apply our inferential procedure to quantify the extremal behavior of radar rainfall data in a region in the State of Florida. PubDate: 2021-01-07 DOI: 10.1007/s11203-020-09232-2
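For a single pair of locations, the (purely spatial) empirical F-madogram and the implied extremal coefficient can be sketched as follows; the independent unit-Fréchet pair below, for which the extremal coefficient equals 2, is a sanity check only, not the space-time procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Empirical F-madogram for a pair of locations:
#   nu_F = 0.5 * E|F(Z1) - F(Z2)|, with F estimated by ranks.
# The extremal coefficient theta = (1 + 2 nu_F) / (1 - 2 nu_F) equals
# 1 under full dependence and 2 under independence.
n = 20_000
z1 = 1.0 / -np.log(rng.uniform(size=n))   # unit Frechet margins,
z2 = 1.0 / -np.log(rng.uniform(size=n))   # independent pair

def rank_cdf(z):
    # Empirical CDF values via ranks, scaled to avoid the endpoints 0 and 1.
    return (np.argsort(np.argsort(z)) + 1) / (len(z) + 1)

nu_F = 0.5 * np.mean(np.abs(rank_cdf(z1) - rank_cdf(z2)))
theta_hat = (1 + 2 * nu_F) / (1 - 2 * nu_F)
```

Using ranks makes the statistic margin-free, which is what makes madogram-based inference semiparametric.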

Abstract: The problem of the minimax testing of the Poisson process intensity \({\mathbf{s}}\) is considered. For a given intensity \({\mathbf{p}}\) and a set \(\mathcal{Q}\) , the minimax testing of the simple hypothesis \(H_{0}: {\mathbf{s}} = {\mathbf{p}}\) against the composite alternative \(H_{1}: {\mathbf{s}} = {\mathbf{q}},\,{\mathbf{q}} \in \mathcal{Q}\) is investigated. We consider the case when the first-kind error probability \(\alpha \) is fixed and we are interested in the minimal possible second-kind error probability \(\beta ({\mathbf{p}},\mathcal{Q})\) . What is the maximal set \(\mathcal{Q}\) which can be replaced by an intensity \({\mathbf{q}} \in \mathcal{Q}\) without any loss of testing performance? In the asymptotic case ( \(T\rightarrow \infty \) ) that maximal set \(\mathcal{Q}\) is described. PubDate: 2021-01-06 DOI: 10.1007/s11203-020-09230-4
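For constant intensities \(p<q\) the likelihood ratio is monotone in the event count, so the simple-vs-simple version of the problem reduces to a Poisson threshold test; the sketch below computes the threshold for a fixed first-kind error \(\alpha \) and the resulting second-kind error. All values are illustrative.

```python
from math import lgamma, exp, log

# Simple-vs-simple testing of a homogeneous Poisson intensity on [0, T]:
# for constant intensities p < q the Neyman-Pearson test rejects H0 when
# the event count N exceeds a Poisson(p * T) quantile.
p, q, T, alpha = 1.0, 1.5, 100.0, 0.05

def poisson_cdf(k, mean):
    # P(N <= k) for N ~ Poisson(mean), summed in log space for stability.
    return sum(exp(-mean + j * log(mean) - lgamma(j + 1)) for j in range(k + 1))

# Smallest threshold c with first-kind error P(N > c | H0) <= alpha.
c = 0
while poisson_cdf(c, p * T) < 1 - alpha:
    c += 1

# Second-kind error probability under H1: P(N <= c | intensity q).
beta = poisson_cdf(c, q * T)
```

With these values the threshold lies a little above \(pT=100\) and the second-kind error under \(qT=150\) is far below \(\alpha \) , illustrating how the two error probabilities trade off as T grows.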

Abstract: The conditional density of Brownian motion is considered given the max, \(B(t \mid \max )\) , as well as those with additional information: \(B(t \mid \mathrm{close}, \max )\) and \(B(t \mid \mathrm{close}, \max , \min )\) , where the close is the final value \(B(t=1)=c\) and \(t \in [0,1]\) . The conditional expectation and conditional variance of Brownian motion are evaluated subject to one or more of the statistics: the close (final value), the high (maximum), the low (minimum). Computational results displaying both the expectation and variance in time are presented and compared with the theoretical values. We tabulate the time-averaged variance of Brownian motion conditional on knowing various extremal properties of the motion. Among other results, the final table shows that knowing the high is more useful than knowing the final value. Knowing the open, high, low and close reduces the time-averaged variance to \(42\%\) of the value of knowing only the open and close (Brownian bridge). PubDate: 2020-12-04 DOI: 10.1007/s11203-020-09229-x
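The effect of conditioning on the close alone can be checked by simulation: the conditional process is a Brownian bridge with variance \(t(1-t)\) , whose time average is 1/6, versus 1/2 for the unconditioned motion.

```python
import numpy as np

rng = np.random.default_rng(6)

# Conditioning Brownian motion on its close B(1) = c yields a Brownian
# bridge with Var[B(t) | close] = t * (1 - t); the time-averaged
# conditional variance is therefore 1/6.
n_paths, n_steps = 5000, 200
dt = 1.0 / n_steps
dW = np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
B = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)
bridge = B - t * B[:, -1:]            # pin the close at 0
avg_var = np.mean(np.var(bridge, axis=0))
```

This is the baseline case of the abstract's table; conditioning additionally on the high and low reduces the time-averaged variance further.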

Abstract: We consider a Gaussian continuous time moving average model \(X(t)=\int _0^t a(t-s)dW(s)\) where W is a standard Brownian motion and \(a(\cdot )\) a deterministic function locally square integrable on \({{\mathbb {R}}}^+\) . Given N i.i.d. continuous time observations \((X_i(t))_{t\in [0,T]}\) on [0, T], for \(i=1, \dots , N\) , distributed like \((X(t))_{t\in [0,T]}\) , we propose nonparametric projection estimators of \(a^2\) under different sets of assumptions, which may or may not allow for fractional models. We study the asymptotics in T, N (depending on the setup) ensuring their consistency, and provide their nonparametric rates of convergence on functional regularity spaces. Then, we propose a data-driven method, corresponding to each setup, for selecting the dimension of the projection space. The findings are illustrated through a simulation study. PubDate: 2020-09-25 DOI: 10.1007/s11203-020-09228-y
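The model itself is straightforward to discretise, which gives a quick sanity check of the variance identity \(\mathrm{Var}\,X(t)=\int _0^t a^2(u)du\) ; the exponential kernel below is illustrative only, and this is not the projection estimator of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discretised moving average X(t) = int_0^t a(t - s) dW(s) with the
# illustrative kernel a(u) = exp(-u), for which Var X(t) = (1 - e^{-2t}) / 2.
n_paths, n_steps, T = 4000, 400, 2.0
dt = T / n_steps
a = np.exp(-dt * np.arange(n_steps))          # kernel on the grid
dW = np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))

# X at the terminal time T: sum_j a(T - s_j) dW_j, i.e. the kernel reversed.
X_T = dW @ a[::-1]
var_hat = np.var(X_T)
var_true = (1 - np.exp(-2 * T)) / 2
```

The empirical variance of the simulated copies should match the closed-form value up to discretisation and Monte Carlo error; the paper's estimators invert this relation nonparametrically to recover \(a^2\) itself.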

Abstract: In this article we introduce and study oscillating Gaussian processes defined by \(X_t = \alpha _+ Y_t \mathbf{1}_{Y_t >0} + \alpha _- Y_t\mathbf{1}_{Y_t<0}\), where \(\alpha _+,\alpha _->0\) are free parameters and Y is either a stationary or a self-similar Gaussian process. We study the basic properties of X and consider estimation of the model parameters. In particular, we show that the moment estimators converge in \(L^p\) and are, when suitably normalised, asymptotically normal. PubDate: 2020-04-29 DOI: 10.1007/s11203-020-09212-6
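In the simplest stationary case, Y i.i.d. standard normal, the means of the positive and negative parts of X identify \(\alpha _+\) and \(\alpha _-\) , giving one-line moment estimators; this is a toy version of the estimators studied in the paper, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simplest stationary case: Y_t i.i.d. standard normal.  Then
#   E[max(X, 0)]  = alpha_+ / sqrt(2 pi)   and
#   E[max(-X, 0)] = alpha_- / sqrt(2 pi),
# so the two scale parameters follow from simple moment matching.
alpha_p, alpha_m, n = 2.0, 0.5, 100_000
Y = rng.normal(size=n)
X = np.where(Y > 0, alpha_p * Y, alpha_m * Y)

alpha_p_hat = np.sqrt(2 * np.pi) * np.mean(np.maximum(X, 0))
alpha_m_hat = np.sqrt(2 * np.pi) * np.mean(np.maximum(-X, 0))
```

Each estimator only averages one side of the process, which is why the two parameters can be recovered separately despite X having a single marginal distribution.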