Statistics and Computing
Journal Prestige (SJR): 2.545
Citation Impact (citeScore): 2
Number of Followers: 14  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0960-3174 - ISSN (Online) 1573-1375
Published by Springer-Verlag
  • Moment-based density estimation of confidential micro-data: a
           computational statistics approach

      Abstract: Providing access to synthetic micro-data in place of confidential data to protect the privacy of participants is common practice. For the synthetic data to be useful for analysis, its density function must closely approximate that of the confidential data. Hence, accurately estimating the density function from sample micro-data is important. Existing kernel-based, copula-based, and machine learning methods of joint density estimation may not be viable. Applying the multivariate moment problem to sample-based density estimation has long been considered impractical because of the computational complexity and the intractability of optimal parameter selection for the density estimate when the true joint density function is unknown. This paper introduces a generalised form of the sample moment-based density estimate, which can be used to estimate joint density functions when only empirical moments are available. We demonstrate optimal parametrisation of the moment-based density estimate based solely on sample data by employing a computational strategy for parameter selection. We compare the performance of the moment-based estimate with that of existing non-parametric and parametric density estimation methods. The results show that empirical moments can provide a reasonable, robust non-parametric approximation of a joint density function that is comparable to existing non-parametric methods. We provide an example of synthetic data generation from the moment-based density estimate and show that the resulting synthetic data provide a reasonable disclosure-protected alternative for public release.
      PubDate: 2023-01-20
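      For intuition, here is a minimal univariate sketch of estimating a density from empirical moments via a shifted-Legendre series. This is an illustration under simplifying assumptions (bounded support, low fixed order), not the authors' generalised multivariate estimate or their parameter-selection strategy.

```python
# Sketch: density estimation on [0, 1] from empirical moments via a
# shifted-Legendre series (illustrative only; a truncated series can
# dip below zero, which a practical estimator must handle).
import numpy as np
from numpy.polynomial import Legendre

def moment_density(sample, order=8):
    """Approximate the density of data rescaled to [0, 1]."""
    x = (sample - sample.min()) / (sample.max() - sample.min())
    # Series coefficient for degree k: (2k + 1) * E[P_k(2X - 1)],
    # using orthogonality of shifted Legendre polynomials on [0, 1].
    coeffs = [(2 * k + 1) * np.mean(Legendre.basis(k)(2 * x - 1))
              for k in range(order + 1)]
    def density(u):
        return sum(c * Legendre.basis(k)(2 * u - 1)
                   for k, c in enumerate(coeffs))
    return density

rng = np.random.default_rng(1)
f = moment_density(rng.beta(2, 5, size=5000))
print(f(np.linspace(0.05, 0.95, 5)))  # approximate density values
```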
       
  • Modularized Bayesian analyses and cutting feedback in likelihood-free
           inference

      Abstract: There has been much recent interest in modifying Bayesian inference for misspecified models so that it is useful for specific purposes. One popular modified Bayesian inference method is “cutting feedback”, which can be used when the model consists of a number of coupled modules of which only some are misspecified. Cutting feedback methods represent the full posterior distribution in terms of conditional and sequential components, and then modify some terms in this representation, based on the modular structure, to specify or compute a modified posterior distribution. The main goal is to avoid contamination of inferences for parameters of interest by misspecified modules. Computation for cut posterior distributions is challenging, and here we consider cutting feedback for likelihood-free inference based on Gaussian mixture approximations to the joint distribution of parameters and data summary statistics. We exploit the fact that marginal and conditional distributions of a Gaussian mixture are again Gaussian mixtures to give explicit approximations to marginal or conditional posterior distributions, so that cut posterior analyses can be approximated easily. The mixture approach allows repeated approximation of posterior distributions for different data based on a single mixture fit, which is important for the model checks that inform the decision of whether to “cut”. A semi-modular approach to likelihood-free inference, in which feedback is only partially cut, is also developed. The benefits of the method are illustrated on two challenging examples: a collective cell spreading model and a continuous-time model for asset returns with jumps.
      PubDate: 2023-01-19
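      The Gaussian-mixture conditioning identity that the approach exploits is easy to state concretely: if (theta, s) follows a mixture of bivariate Gaussians, then p(theta | s) is again a Gaussian mixture with reweighted components. A small sketch under toy parameter values (my own, for illustration):

```python
# Conditioning a two-component Gaussian mixture over (theta, s) on s.
import numpy as np
from scipy.stats import norm

pis  = np.array([0.4, 0.6])                      # mixture weights
mus  = np.array([[0.0, 1.0], [2.0, -1.0]])       # rows: (mu_theta, mu_s)
covs = np.array([[[1.0, 0.5], [0.5, 1.0]],
                 [[1.0, -0.3], [-0.3, 0.5]]])    # 2x2 covariances

def conditional_mixture(s_obs):
    """Weights, means, variances of p(theta | s = s_obs)."""
    w, m, v = [], [], []
    for pi_k, mu, C in zip(pis, mus, covs):
        # Component weight is reweighted by the marginal density of s
        w.append(pi_k * norm.pdf(s_obs, mu[1], np.sqrt(C[1, 1])))
        # Standard Gaussian conditioning formulas per component
        m.append(mu[0] + C[0, 1] / C[1, 1] * (s_obs - mu[1]))
        v.append(C[0, 0] - C[0, 1] ** 2 / C[1, 1])
    w = np.array(w)
    return w / w.sum(), np.array(m), np.array(v)

print(conditional_mixture(0.5))
```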
       
  • On randomized sketching algorithms and the Tracy–Widom law

      Abstract: There is an increasing body of work exploring the integration of random projection into algorithms for numerical linear algebra. The primary motivation is to reduce the overall computational cost of processing large datasets. A suitably chosen random projection can be used to embed the original dataset in a lower-dimensional space such that key properties of the original dataset are retained. These algorithms are often referred to as sketching algorithms, as the projected dataset can be used as a compressed representation of the full dataset. We show that random matrix theory, in particular the Tracy–Widom law, is useful for describing the operating characteristics of sketching algorithms in the tall-data regime, where the sample size n is much greater than the number of variables d. Asymptotic large-sample results are of particular interest, as this is the regime in which sketching is most useful for data compression. In particular, we develop asymptotic approximations for the success rate in generating random subspace embeddings and for the convergence probability of iterative sketching algorithms. We test a number of sketching algorithms on large, high-dimensional real datasets and find that the asymptotic expressions give accurate predictions of the empirical performance.
      PubDate: 2023-01-19
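      A sketch-and-solve example makes the compression concrete. The snippet below uses a dense Gaussian sketch on a toy least-squares problem; it illustrates sketching in the tall-data regime, not the paper's Tracy–Widom analysis.

```python
# Sketch-and-solve least squares with a Gaussian sketching matrix.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 20, 500            # tall data (n >> d), sketch size m
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)   # random subspace embedding
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_full   = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_full))       # small when m is adequate
```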
       
  • Moments and random number generation for the truncated elliptical family
           of distributions

      Abstract: This paper proposes an algorithm to generate random numbers from any member of the truncated multivariate elliptical family of distributions with a strictly decreasing density generating function. Based on the ideas of Neal (Ann Stat 31(3):705–767, 2003) and Ho et al. (J Stat Plan Inference 142(1):25–40, 2012), we construct an efficient sampling method by means of a slice sampling algorithm with Gibbs sampler steps. We also provide a faster approach to approximating the first and second moments of truncated multivariate elliptical distributions, in which Monte Carlo integration is used for the truncated part and explicit expressions for the non-truncated part (Galarza et al., J Multivar Anal 189:104944, 2022). Examples and an application to environmental spatial data illustrate its usefulness. The methods are freely available in the new R library relliptical.
      PubDate: 2023-01-04
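      The paper's sampler is implemented in the R package relliptical; as a language-neutral illustration of the Gibbs idea for the Gaussian member of the family, one can draw each coordinate of a box-truncated bivariate normal from its truncated univariate conditional (a standard scheme, shown here with assumed toy parameters):

```python
# Gibbs sampling from a box-truncated bivariate normal.
import numpy as np
from scipy.stats import truncnorm

mu  = np.array([0.0, 0.0])
Sig = np.array([[1.0, 0.8], [0.8, 1.0]])
lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def gibbs_tmvn(n_draws, burn=200, seed=0):
    rng = np.random.default_rng(seed)
    x = (lo + hi) / 2                      # start inside the box
    out = np.empty((n_draws, 2))
    for t in range(burn + n_draws):
        for i in range(2):
            j = 1 - i
            # Conditional N(m, s^2) of x_i given x_j, then truncate it
            m = mu[i] + Sig[i, j] / Sig[j, j] * (x[j] - mu[j])
            s = np.sqrt(Sig[i, i] - Sig[i, j] ** 2 / Sig[j, j])
            a, b = (lo[i] - m) / s, (hi[i] - m) / s
            x[i] = truncnorm.rvs(a, b, loc=m, scale=s, random_state=rng)
        if t >= burn:
            out[t - burn] = x
    return out

print(gibbs_tmvn(2000).mean(axis=0))       # mean of the truncated law
```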
       
  • Truncated Poisson–Dirichlet approximation for Dirichlet process
           hierarchical models

      Abstract: The Dirichlet process was introduced by Ferguson in 1973 for use in Bayesian nonparametric inference problems. A great deal of subsequent work has been built on the Dirichlet process, making it the most fundamental prior in Bayesian nonparametric statistics. Since the construction of the Dirichlet process involves an infinite number of random variables, simulation-based methods are hard to implement, and various finite approximations of the Dirichlet process have been proposed to solve this problem. In this paper, we construct a new random probability measure called the truncated Poisson–Dirichlet process. It sorts the components of a Dirichlet process in descending order according to their random weights and then truncates, yielding a finite approximation to the distribution of the Dirichlet process. Since the approximation is based on a decreasing sequence of random weights, it has a lower truncation error than existing methods based on the stick-breaking process. We then develop a blocked Gibbs sampler based on the Hamiltonian Monte Carlo method to explore the posterior of the truncated Poisson–Dirichlet process. The method is illustrated on a normal mean mixture model and the Caron–Fox network model. Numerical implementations are provided to demonstrate the effectiveness and performance of our algorithm.
      PubDate: 2023-01-04
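      The effect of sorting weights before truncation can be seen with a plain stick-breaking construction (shown below for illustration only; the paper's truncated Poisson–Dirichlet process is a different, more careful construction):

```python
# Mass lost by truncating Dirichlet process weights at k components,
# with and without sorting the weights in descending order first.
import numpy as np

rng = np.random.default_rng(0)
alpha, K, k = 5.0, 50, 20
v = rng.beta(1.0, alpha, size=K)                        # stick fractions
w = v * np.cumprod(np.concatenate(([1.0], 1 - v[:-1]))) # random weights

loss_plain  = 1 - w[:k].sum()                  # naive truncation
loss_sorted = 1 - np.sort(w)[::-1][:k].sum()   # truncate after sorting
print(loss_plain, loss_sorted)                 # sorting loses less mass
```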
       
  • Entropic herding

      Abstract: Herding is a deterministic algorithm used to generate data points that can be regarded as random samples satisfying input moment conditions. The algorithm is based on a high-dimensional dynamical system and is rooted in the maximum entropy principle of statistical inference. We propose an extension, entropic herding, which generates a sequence of distributions instead of points. We derive entropic herding from an optimization problem obtained using the maximum entropy principle. Using the proposed entropic herding algorithm as a framework, we discuss a closer connection between herding and the maximum entropy principle. Specifically, we interpret the original herding algorithm as a tractable version of entropic herding, whose ideal output distribution can be represented mathematically. We further discuss how the complex behavior of the herding algorithm contributes to optimization, and argue that entropic herding extends herding to probabilistic modeling. In contrast to the original herding, entropic herding can generate a smooth distribution, making both efficient probability density calculation and sample generation possible. To demonstrate the viability of these arguments, we conducted numerical experiments on both synthetic and real data, including a comparison with other conventional methods.
      PubDate: 2023-01-04
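      The original herding recursion that entropic herding generalizes is compact enough to state in a few lines. A toy run on a three-point domain with assumed moment targets:

```python
# Original herding: greedily pick the point maximizing <w, phi(x)>,
# then update w by the residual between target and realized features.
import numpy as np

states = np.array([-1.0, 0.0, 1.0])
phi = lambda x: np.array([x, x ** 2])     # moment features
target = np.array([0.0, 0.5])             # input moment conditions

w, samples = target.copy(), []
for _ in range(1000):
    x = states[np.argmax([w @ phi(s) for s in states])]
    w += target - phi(x)
    samples.append(x)

print(np.mean([phi(s) for s in samples], axis=0))  # approaches [0.0, 0.5]
```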
       
  • On proportional volume sampling for experimental design in general spaces

      Abstract: Optimal design for linear regression is a fundamental task in statistics. For finite design spaces, recent progress has shown that random designs drawn using proportional volume sampling (PVS) lead to polynomial-time algorithms with approximation guarantees that outperform i.i.d. sampling. PVS strikes a balance between choosing design nodes that jointly fill the design space and marginally staying in regions of high mass under the solution of a relaxed convex version of the original problem. In this paper, we examine some of the statistical implications of a new variant of PVS for (possibly Bayesian) optimal design. Using point process machinery, we treat the case of a generic Polish design space. We show that not only are known A-optimality approximation guarantees preserved, but we also obtain similar guarantees for D-optimal design that tighten recent results. Moreover, we show that our PVS variant can be sampled in polynomial time. Unfortunately, in spite of its elegance and tractability, we demonstrate on a simple example that the practical implications of general PVS are likely limited. In the second part of the paper, we focus on applications and investigate the use of PVS as a subroutine for stochastic search heuristics. We demonstrate that PVS is a robust addition to the practitioner’s toolbox, especially when the regression functions are nonstandard and the design space, while low-dimensional, has a complicated shape (e.g., nonlinear boundaries, several connected components).
      PubDate: 2022-12-31
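      On a finite design space the definition of volume sampling can be written down directly: a size-k subset S of candidate rows is drawn with probability proportional to det(X_S^T X_S). The brute-force sketch below (exhaustive enumeration, feasible only for tiny problems) is for intuition, not the paper's polynomial-time sampler:

```python
# Volume sampling by exhaustive enumeration of k-subsets.
import itertools
import numpy as np

rng = np.random.default_rng(0)
X, k = rng.standard_normal((8, 2)), 3       # 8 candidate points, d = 2

subsets = list(itertools.combinations(range(8), k))
dets = np.array([np.linalg.det(X[list(S)].T @ X[list(S)])
                 for S in subsets])
S = subsets[rng.choice(len(subsets), p=dets / dets.sum())]
print("sampled design:", S)
```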
       
  • Variable selection using conditional AIC for linear mixed models with
           data-driven transformations

      Abstract: When data analysts use linear mixed models, they usually encounter two practical problems: (a) the true model is unknown, and (b) the Gaussian assumptions on the errors do not hold. While these problems commonly appear together, researchers tend to treat them individually: (a) by finding an optimal model based on the conditional Akaike information criterion (cAIC), and (b) by applying transformations to the dependent variable. However, the optimal model depends on the transformation and vice versa. In this paper, we aim to solve both problems simultaneously. In particular, we propose an adjusted cAIC that uses the Jacobian of the particular transformation, so that model candidates with differently transformed data can be compared. From a computational perspective, we propose a step-wise selection approach based on the adjusted cAIC. Model-based simulations are used to compare the proposed selection approach with alternative approaches. Finally, the approach is applied to Mexican data to estimate poverty and inequality indicators for 81 municipalities.
      PubDate: 2022-12-26
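      The Jacobian adjustment that makes likelihoods of differently transformed responses comparable can be illustrated with ordinary least squares and the plain AIC (the paper works with linear mixed models and the cAIC; the snippet below only demonstrates the adjustment itself):

```python
# Compare AIC of a model for y against a model for log(y) by mapping
# the log-scale likelihood back to the y scale with the Jacobian.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, 200)
y = np.exp(0.5 * x + rng.normal(0, 0.3, 200))   # truth is log-linear
X = sm.add_constant(x)

fit_raw = sm.OLS(y, X).fit()
fit_log = sm.OLS(np.log(y), X).fit()

# z = log(y) has dz/dy = 1/y, so the y-scale log-likelihood of the
# transformed model is fit_log.llf + sum(log(1/y)).
llf_log_adj = fit_log.llf - np.log(y).sum()
n_par = X.shape[1] + 1                           # coefficients + variance
print(-2 * fit_raw.llf + 2 * n_par,              # AIC, untransformed model
      -2 * llf_log_adj + 2 * n_par)              # adjusted AIC, log model
```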
       
  • Structure-based hyperparameter selection with Bayesian optimization in
           multidimensional scaling

      Abstract: We introduce the structure optimized proximity scaling (STOPS) framework for hyperparameter selection in parametrized multidimensional scaling and extensions (proximity scaling; PS). The selection process for hyperparameters is based on the idea that the configuration should show a certain structural quality (c-structuredness). A number of structures, and how to measure them, are discussed. We combine the structural quality, measured by c-structuredness indices, with the PS badness-of-fit measure in a multi-objective scalarization approach, yielding the Stoploss objective. Computationally, we suggest a profile-type algorithm that first solves the PS problem and then uses Stoploss in an outer step to optimize over the hyperparameters. Bayesian optimization with treed Gaussian processes is recommended as an apt and efficient strategy for carrying out the outer optimization. In this way, hyperparameter tuning for many instances of PS is covered in a single conceptual framework. We illustrate the use of the STOPS framework with three data examples.
      PubDate: 2022-12-26
       
  • Fast and universal estimation of latent variable models using extended
           variational approximations

      Abstract: Generalized linear latent variable models (GLLVMs) are a class of methods for analyzing multi-response data which has gained considerable popularity in recent years, e.g., in the analysis of multivariate abundance data in ecology. One of the main features of GLLVMs is their capacity to handle a variety of response types, such as (overdispersed) counts, binomial and (semi-)continuous responses, and proportions data. On the other hand, the inclusion of unobserved latent variables poses a major computational challenge, as the resulting marginal likelihood function involves an intractable integral for non-normally distributed responses. This has spurred research into a number of approximation methods to overcome this integral, with a recent and particularly computationally scalable one being variational approximations (VA). However, research into the use of VA for GLLVMs has been hampered by the fact that fully closed-form variational lower bounds have only been obtained for certain combinations of response distributions and link functions. In this article, we propose an extended variational approximations (EVA) approach which widens the set of VA-applicable GLLVMs dramatically. EVA draws inspiration from the underlying idea of the Laplace approximation: by replacing the complete-data likelihood function with its second-order Taylor approximation about the mean of the variational distribution, we can obtain a fully closed-form approximation to the marginal likelihood of the GLLVM for any response type and link function. Through simulation studies and an application to a species community of testate amoebae, we demonstrate how EVA results in a “universal” approach to fitting GLLVMs, which remains competitive in terms of estimation and inferential performance relative to both standard VA (where any intractable integrals are overcome through reparametrization or quadrature) and a Laplace approximation approach, while being computationally more scalable than both in practice.
      PubDate: 2022-12-24
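      The core EVA device is a second-order Taylor expansion of the complete-data log-likelihood about the variational mean, whose Gaussian expectation is closed-form. A one-dimensional sketch on a Poisson term with a log link (a case where the exact expectation is also available for comparison):

```python
# EVA-style approximation E_q[log f(u)] ~ log f(m) + 0.5 * s2 * f''(m)
# for Gaussian q(u) = N(m, s2), applied to the Poisson term y*u - exp(u).
import numpy as np

y, m, s2 = 3.0, 1.0, 0.2
logf   = lambda u: y * u - np.exp(u)
d2logf = lambda u: -np.exp(u)

approx = logf(m) + 0.5 * s2 * d2logf(m)
exact  = y * m - np.exp(m + s2 / 2)      # closed form for this term
print(approx, exact)                      # close for small s2
```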
       
  • Parallelized integrated nested Laplace approximations for fast Bayesian
           inference

      Abstract: There is a growing demand for performing larger-scale Bayesian inference tasks, arising from greater data availability and higher-dimensional model parameter spaces. In this work, we present parallelization strategies for the methodology of integrated nested Laplace approximations (INLA), a popular framework for performing approximate Bayesian inference on the class of latent Gaussian models. Our approach makes use of nested thread-level parallelism, a parallel line search procedure using robust regression in INLA’s optimization phase, and the state-of-the-art sparse linear solver PARDISO. We leverage mutually independent function evaluations in the algorithm as well as advanced sparse linear algebra techniques. This way we can flexibly utilize the power of today’s multi-core architectures. We demonstrate the performance of our new parallelization scheme on a number of different real-world applications. The introduction of parallelism leads to speedups of a factor of 10 or more for all larger models. Our work is already integrated in the current version of the open-source R-INLA package, making its improved performance conveniently available to all users.
      PubDate: 2022-12-24
       
  • Correction to: Variational inference and sparsity in high-dimensional
           deep Gaussian mixture models

      PubDate: 2022-12-21
       
  • GParareal: a time-parallel ODE solver using Gaussian process emulation

      Abstract: Sequential numerical methods for integrating initial value problems (IVPs) can be prohibitively expensive when high numerical accuracy is required over the entire interval of integration. One remedy is to integrate in a parallel fashion, “predicting” the solution serially using a cheap (coarse) solver and “correcting” these values using an expensive (fine) solver that runs in parallel on a number of temporal subintervals. In this work, we propose a time-parallel algorithm (GParareal) that solves IVPs by modelling the correction term, i.e. the difference between fine and coarse solutions, using a Gaussian process emulator. This approach compares favourably with the classic parareal algorithm, and we demonstrate on a number of IVPs that GParareal can converge in fewer iterations than parareal, leading to an increase in parallel speed-up. GParareal also manages to locate solutions to certain IVPs where parareal fails, and has the additional advantage of being able to use archives of legacy solutions, e.g. solutions from prior runs of the IVP for different initial conditions, to further accelerate convergence, something that existing time-parallel methods do not do.
      PubDate: 2022-12-21
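      The classic parareal update that GParareal builds on takes one coarse sweep and then iterates a predictor-corrector recursion in which the fine solves are parallelizable across subintervals; GParareal replaces the (fine minus coarse) correction with a Gaussian process emulator. A serial toy implementation of plain parareal, with assumed solvers, for intuition:

```python
# Plain parareal on y' = -y with Euler coarse (G) and fine (F) solvers.
import numpy as np

f = lambda y: -y
T, N = 2.0, 10                        # horizon, number of subintervals
dt = T / N

def G(y):                             # cheap coarse solver: 1 Euler step
    return y + dt * f(y)

def F(y, m=100):                      # fine solver: m Euler sub-steps
    h = dt / m
    for _ in range(m):
        y = y + h * f(y)
    return y

U = np.empty(N + 1); U[0] = 1.0
for n in range(N):                    # initial coarse sweep
    U[n + 1] = G(U[n])

for k in range(5):                    # parareal iterations
    Fv = [F(U[n]) for n in range(N)]  # embarrassingly parallel in practice
    Gv = [G(U[n]) for n in range(N)]
    for n in range(N):                # serial predictor-corrector sweep
        U[n + 1] = G(U[n]) + Fv[n] - Gv[n]

print(U[-1], np.exp(-T))              # approaches the exact solution
```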
       
  • Detecting renewal states in chains of variable length via intrinsic Bayes
           factors

      Abstract: Markov chains with variable length are useful parsimonious stochastic models able to generate most stationary sequences of discrete symbols. The idea is to identify the suffixes of the past, called contexts, that are relevant for predicting the next symbol. Sometimes a single state is a context, and observing this state in the past makes the more distant past irrelevant. States with this property are called renewal states, and they can be used to split the chain into independent and identically distributed blocks. In order to identify renewal states for chains with variable length, we propose the use of the Intrinsic Bayes Factor to evaluate the hypothesis that a particular state is a renewal state. The difficulty lies in integrating the marginal posterior distribution for the random context trees under a general prior distribution on the space of context trees, with a Dirichlet prior for the transition probabilities, and Monte Carlo methods are applied for this purpose. To show the strength of our method, we analyze artificial datasets generated from different models and an example from the field of linguistics.
      PubDate: 2022-12-21
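      The payoff of identifying a renewal state is the ability to cut the observed sequence into blocks that can be treated as independent and identically distributed. A small helper makes the splitting explicit (the Intrinsic Bayes Factor test itself is the substantial part of the paper and is not sketched here):

```python
# Split a symbol sequence into blocks ending at the renewal state r.
def split_at_renewal(seq, r):
    blocks, cur = [], []
    for s in seq:
        cur.append(s)
        if s == r:
            blocks.append(cur)
            cur = []
    if cur:
        blocks.append(cur)      # trailing partial block, if any
    return blocks

print(split_at_renewal("abacabbac", r="c"))
# [['a', 'b', 'a', 'c'], ['a', 'b', 'b', 'a', 'c']]
```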
       
  • Flexible tree-structured regression models for discrete event times

      Abstract: Discrete hazard models are widely applied for the analysis of time-to-event outcomes that are intrinsically discrete or grouped versions of continuous event times. Commonly, one assumes that the effect of explanatory variables on the hazard can be described by a linear predictor function. This, however, may not be appropriate when non-linear effects or interactions between the explanatory variables occur in the data. To address this issue, we propose a novel class of discrete hazard models that utilizes recursive partitioning techniques and allows the effects of explanatory variables to be included in a flexible, data-driven way. We introduce a tree-building algorithm that inherently performs variable selection and facilitates the inclusion of non-linear effects and interactions, while the favorable additive form of the predictor function is kept. In a simulation study, the proposed class of models is shown to be competitive with alternative approaches, including a penalized parametric model and Bayesian additive regression trees, in terms of predictive performance and the ability to detect informative variables. The modeling approach is illustrated by two real-world applications analyzing data of patients with odontogenic infection and lymphatic filariasis.
      PubDate: 2022-12-21
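      The discrete hazard bookkeeping underlying these models is worth keeping in view: with h(t) = P(T = t | T >= t), survival and event probabilities follow by products. A minimal numeric check with made-up hazards:

```python
# Discrete hazards to survival function and event probabilities.
import numpy as np

hazard = np.array([0.05, 0.10, 0.20, 0.30])   # h(t) = P(T = t | T >= t)
survival = np.cumprod(1 - hazard)              # S(t) = P(T > t)
pmf = hazard * np.concatenate(([1.0], survival[:-1]))  # P(T = t)
print(survival, pmf, pmf.sum() + survival[-1])  # total probability is 1
```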
       
  • Direct sampling with a step function

      Abstract: The direct sampling method proposed by Walker et al. (JCGS, 2011) can generate draws from weighted distributions, possibly having intractable normalizing constants. The method may be of interest as a tool in situations which require drawing from an unfamiliar distribution. However, the original algorithm can have difficulty producing draws in some situations. The present work restricts attention to a univariate setting where the weight function and base distribution of the weighted target density meet certain criteria. Here, a variant of the direct sampler is proposed which uses a step function to approximate the density of a particular augmented random variable on which the method is based. Knots for the step function can be placed strategically to ensure the approximation is close to the underlying density. Variates may then be generated reliably while largely avoiding the need for manual tuning or rejections. A rejection sampler based on the step function allows exact draws to be generated from the target, with lower rejection probability, in exchange for increased computation. Several applications illustrate the proposed sampler: generating draws from the Conway–Maxwell–Poisson distribution, a Gibbs sampler which draws the dependence parameter in a random effects model with conditional autoregression structure, and a Gibbs sampler which draws the degrees-of-freedom parameter in a regression with t-distributed errors.
      PubDate: 2022-12-21
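      The flavor of the step-function construction can be conveyed by rejection sampling under a piecewise-constant envelope over knots (a loose illustration with a heuristic envelope; the paper's augmented-variable construction and knot placement are what make the method reliable):

```python
# Rejection sampling with a step-function envelope over fixed knots.
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.exp(-x) * np.sin(x) ** 2   # unnormalized, on [0, 6]
knots = np.linspace(0, 6, 61)

# Heuristic per-interval heights; an exact sampler needs a guaranteed
# upper bound for the density on each interval.
h = np.maximum(target(knots[:-1]), target(knots[1:])) * 1.05
p = h * np.diff(knots); p /= p.sum()

def draw(n):
    out = []
    while len(out) < n:
        i = rng.choice(len(h), p=p)                 # pick an interval
        x = rng.uniform(knots[i], knots[i + 1])     # uniform within it
        if rng.uniform(0, h[i]) <= target(x):       # accept or reject
            out.append(x)
    return np.array(out)

print(draw(5))
```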
       
  • Variational inference with vine copulas: an efficient approach for
           Bayesian computer model calibration

      Abstract: With the advancement of computer architectures, the use of computational models has proliferated to solve complex problems in many scientific applications such as nuclear physics and climate research. However, the potential of such models is often hindered because they tend to be computationally expensive and consequently ill-suited for uncertainty quantification. Furthermore, they are usually not calibrated with real-time observations. We develop a computationally efficient algorithm based on variational Bayes inference (VBI) for the calibration of computer models with Gaussian processes. Unfortunately, the standard fast-to-compute gradient estimates based on subsampling are biased under the calibration framework, due to the conditionally dependent data, which diminishes the efficiency of VBI. In this work, we adopt a pairwise decomposition of the data likelihood using vine copulas, which separates the information on the dependence structure in the data from the marginal distributions and leads to computationally efficient, unbiased gradient estimates, and thus to scalable calibration. We provide empirical evidence for the computational scalability of our methodology, together with an average-case analysis, and describe all the necessary details for an efficient implementation of the proposed algorithm. We also demonstrate the opportunities our method offers practitioners on a real data example, through calibration of the Liquid Drop Model of nuclear binding energies.
      PubDate: 2022-12-19
       
  • Bayesian parameter inference for partially observed stochastic
           differential equations driven by fractional Brownian motion

      Abstract: In this paper we consider Bayesian parameter inference for partially observed fractional Brownian motion models. The approach we follow is to time-discretize the hidden process and then design Markov chain Monte Carlo (MCMC) algorithms to sample from the posterior density of the parameters given the data. We rely on a novel representation of the time discretization, which seeks to sample from an approximation of the posterior and then corrects via importance sampling; the approximation reduces the time (in terms of total observation time T) by \(\mathcal{O}(T)\). This method is extended by using a multilevel MCMC method, which can reduce the computational cost to achieve a given mean square error relative to using a single time discretization. Our methods are illustrated on simulated and real data.
      PubDate: 2022-12-19
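      For background, the hidden process itself is cheap to simulate exactly on a grid, since fractional Brownian motion has an explicit covariance; the paper's contribution is the inference scheme, not this simulation. A Cholesky-based sketch:

```python
# Exact simulation of fractional Brownian motion with Hurst index H.
import numpy as np

def fbm(n, T=1.0, H=0.7, seed=0):
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    # Cov(B_s, B_u) = 0.5 * (s^{2H} + u^{2H} - |u - s|^{2H})
    C = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(u - s) ** (2 * H))
    Lc = np.linalg.cholesky(C + 1e-12 * np.eye(n))   # jitter for stability
    return t, Lc @ np.random.default_rng(seed).standard_normal(n)

t, path = fbm(500)
print(path[:5])
```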
       
  • Practical Hilbert space approximate Bayesian Gaussian processes for
           probabilistic programming

      Abstract: Gaussian processes are powerful non-parametric probabilistic models for stochastic functions. However, direct implementation entails a complexity that is computationally intractable when the number of observations is large, especially when estimated with fully Bayesian methods such as Markov chain Monte Carlo. In this paper, we focus on a low-rank approximate Bayesian Gaussian process, based on a basis function approximation via Laplace eigenfunctions for stationary covariance functions. The main contribution of this paper is a detailed analysis of the performance, and practical recommendations for how to select the number of basis functions and the boundary factor. Intuitive visualizations and recommendations make it easier for users to improve approximation accuracy and computational performance. We also propose diagnostics for checking that the number of basis functions and the boundary factor are adequate given the data. The approach is simple and exhibits an attractive computational complexity due to its linear structure, and it is easy to implement in probabilistic programming frameworks. Several illustrative examples of the performance and applicability of the method in the probabilistic programming language Stan are presented, together with the underlying Stan model code.
      PubDate: 2022-12-14
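      The basis function approximation analysed here has a compact closed form for the squared-exponential kernel: Laplace eigenfunctions on [-L, L] scaled by the kernel's spectral density. A NumPy sketch with assumed hyperparameters (the paper's recommendations concern how to choose M and L):

```python
# Low-rank Hilbert-space approximation of a squared-exponential kernel.
import numpy as np

def hsgp_features(x, M=20, L=3.0, ell=0.5, sigma=1.0):
    j = np.arange(1, M + 1)
    lam = (np.pi * j / (2 * L)) ** 2                  # Laplacian eigenvalues
    # Spectral density of the SE kernel evaluated at sqrt(lam)
    s = sigma ** 2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * ell ** 2 * lam)
    phi = np.sqrt(1 / L) * np.sin(np.sqrt(lam) * (x[:, None] + L))
    return phi * np.sqrt(s)                           # scaled eigenfunctions

x = np.linspace(-1, 1, 5)
Phi = hsgp_features(x)
K_approx = Phi @ Phi.T                                # rank-M approximation
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)
print(np.abs(K_approx - K_exact).max())               # small for adequate M, L
```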
       
  • Automatic model training under restrictive time constraints

      Abstract: We develop a hyperparameter optimisation algorithm, Automated Budget Constrained Training, which balances the quality of a model with the computational cost required to tune it. The relationship between hyperparameters, model quality and computational cost must be learnt, and this learning is incorporated directly into the optimisation problem. At each training epoch, the algorithm decides whether to terminate or continue training and, in the latter case, which hyperparameter values to use. This decision optimally weighs potential improvements in quality against the additional training time and the uncertainty about the learnt quantities. The performance of our algorithm is verified on a number of machine learning problems encompassing random forests and neural networks. Our approach is rooted in the theory of Markov decision processes with partial information, and we develop a numerical method to compute the value function and an optimal strategy.
      PubDate: 2022-12-13
       
 