PROBABILITIES AND MATH STATISTICS (113 journals)                     

Showing 1 - 86 of 86 Journals sorted alphabetically
Advances in Statistics     Open Access   (Followers: 10)
Afrika Statistika     Open Access   (Followers: 1)
American Journal of Applied Mathematics and Statistics     Open Access   (Followers: 13)
American Journal of Mathematics and Statistics     Open Access   (Followers: 9)
Annals of Data Science     Hybrid Journal   (Followers: 15)
Applied Medical Informatics     Open Access   (Followers: 12)
Asian Journal of Mathematics & Statistics     Open Access   (Followers: 7)
Asian Journal of Probability and Statistics     Open Access  
Austrian Journal of Statistics     Open Access   (Followers: 4)
Biostatistics & Epidemiology     Hybrid Journal   (Followers: 6)
Calcutta Statistical Association Bulletin     Hybrid Journal  
Communications in Mathematics and Statistics     Hybrid Journal   (Followers: 3)
Communications in Statistics - Simulation and Computation     Hybrid Journal   (Followers: 9)
Communications in Statistics: Case Studies, Data Analysis and Applications     Hybrid Journal  
Comunicaciones en Estadística     Open Access  
Econometrics and Statistics     Hybrid Journal   (Followers: 2)
Forecasting     Open Access   (Followers: 1)
Foundations and Trends® in Optimization     Full-text available via subscription   (Followers: 2)
Geoinformatics & Geostatistics     Hybrid Journal   (Followers: 10)
Geomatics, Natural Hazards and Risk     Open Access   (Followers: 14)
Indonesian Journal of Applied Statistics     Open Access  
International Game Theory Review     Hybrid Journal  
International Journal of Advanced Statistics and IT&C for Economics and Life Sciences     Open Access  
International Journal of Advanced Statistics and Probability     Open Access   (Followers: 7)
International Journal of Algebra and Statistics     Open Access   (Followers: 4)
International Journal of Applied Mathematics and Statistics     Full-text available via subscription   (Followers: 4)
International Journal of Ecological Economics and Statistics     Full-text available via subscription   (Followers: 4)
International Journal of Game Theory     Hybrid Journal   (Followers: 3)
International Journal of Mathematics and Statistics     Full-text available via subscription   (Followers: 2)
International Journal of Multivariate Data Analysis     Hybrid Journal  
International Journal of Probability and Statistics     Open Access   (Followers: 3)
International Journal of Statistics & Economics     Full-text available via subscription   (Followers: 6)
International Journal of Statistics and Applications     Open Access   (Followers: 2)
International Journal of Statistics and Probability     Open Access   (Followers: 3)
International Journal of Statistics in Medical Research     Hybrid Journal   (Followers: 2)
International Journal of Testing     Hybrid Journal   (Followers: 1)
Iraqi Journal of Statistical Sciences     Open Access  
Japanese Journal of Statistics and Data Science     Hybrid Journal  
Journal of Biometrics & Biostatistics     Open Access   (Followers: 4)
Journal of Cost Analysis and Parametrics     Hybrid Journal   (Followers: 5)
Journal of Environmental Statistics     Open Access   (Followers: 4)
Journal of Game Theory     Open Access   (Followers: 1)
Journal of Mathematical Economics and Finance     Full-text available via subscription  
Journal of Mathematics and Statistics Studies     Open Access  
Journal of Modern Applied Statistical Methods     Open Access   (Followers: 1)
Journal of Official Statistics     Open Access   (Followers: 2)
Journal of Quantitative Economics     Hybrid Journal  
Journal of Social and Economic Statistics     Open Access   (Followers: 3)
Journal of Statistical Theory and Practice     Hybrid Journal   (Followers: 2)
Journal of Statistics and Data Science Education     Open Access   (Followers: 3)
Journal of Survey Statistics and Methodology     Hybrid Journal   (Followers: 5)
Journal of the Indian Society for Probability and Statistics     Full-text available via subscription  
Jurnal Biometrika dan Kependudukan     Open Access   (Followers: 1)
Lietuvos Statistikos Darbai     Open Access   (Followers: 1)
Mathematics and Statistics     Open Access   (Followers: 3)
Methods, Data, Analyses     Open Access   (Followers: 1)
METRON     Hybrid Journal   (Followers: 2)
Nepalese Journal of Statistics     Open Access   (Followers: 1)
North American Actuarial Journal     Hybrid Journal   (Followers: 2)
Open Journal of Statistics     Open Access   (Followers: 3)
Open Mathematics, Statistics and Probability Journal     Open Access  
Pakistan Journal of Statistics and Operation Research     Open Access   (Followers: 1)
Physica A: Statistical Mechanics and its Applications     Hybrid Journal   (Followers: 7)
Probability, Uncertainty and Quantitative Risk     Open Access   (Followers: 2)
Research & Reviews : Journal of Statistics     Open Access   (Followers: 4)
Revista Brasileira de Biometria     Open Access  
Revista Colombiana de Estadística     Open Access  
RMS : Research in Mathematics & Statistics     Open Access   (Followers: 1)
Sankhya B - Applied and Interdisciplinary Statistics     Hybrid Journal  
SIAM Journal on Mathematics of Data Science     Hybrid Journal   (Followers: 5)
SIAM/ASA Journal on Uncertainty Quantification     Hybrid Journal   (Followers: 3)
Spatial Statistics     Hybrid Journal   (Followers: 2)
Stat     Hybrid Journal   (Followers: 1)
Stata Journal     Full-text available via subscription   (Followers: 10)
Statistica     Open Access   (Followers: 6)
Statistical Analysis and Data Mining     Hybrid Journal   (Followers: 23)
Statistical Theory and Related Fields     Hybrid Journal  
Statistics and Public Policy     Open Access   (Followers: 3)
Statistics in Transition New Series : An International Journal of the Polish Statistical Association     Open Access  
Statistics Research Letters     Open Access   (Followers: 1)
Statistics, Optimization & Information Computing     Open Access   (Followers: 5)
Stats     Open Access  
Theory of Probability and its Applications     Hybrid Journal   (Followers: 2)
Theory of Probability and Mathematical Statistics     Full-text available via subscription   (Followers: 2)
Turkish Journal of Forecasting     Open Access   (Followers: 1)
Zeitschrift für die gesamte Versicherungswissenschaft     Hybrid Journal  

           

SIAM Journal on Mathematics of Data Science
Number of Followers: 5  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Online) 2577-0187
Published by the Society for Industrial and Applied Mathematics  [17 journals]
  • Equivariant Neural Networks for Indirect Measurements

      Authors: Matthias Beckmann, Nick Heilenkötter
      Pages: 579 - 601
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 579-601, September 2024.
      Abstract.In recent years, deep learning techniques have shown great success in various tasks related to inverse problems, where a target quantity of interest can only be observed through indirect measurements by a forward operator. Common approaches apply deep neural networks in a postprocessing step to the reconstructions obtained by classical reconstruction methods. However, the latter methods can be computationally expensive and introduce artifacts that are not present in the measured data and, in turn, can deteriorate the performance on the given task. To overcome these limitations, we propose a class of equivariant neural networks that can be directly applied to the measurements to solve the desired task. To this end, we build appropriate network structures by developing layers that are equivariant with respect to data transformations induced by well-known symmetries in the domain of the forward operator. We rigorously analyze the relation between the measurement operator and the resulting group representations and prove a representer theorem that characterizes the class of linear operators that translate between a given pair of group actions. Based on this theory, we extend the existing concepts of Lie group equivariant deep learning to inverse problems and introduce new representations that result from the involved measurement operations. This allows us to efficiently solve classification, regression, or even reconstruction tasks based on indirect measurements also for very sparse data problems, where a classical reconstruction-based approach may be hard or even impossible. We illustrate the effectiveness of our approach in numerical experiments and compare with existing methods.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-07-03T07:00:00Z
      DOI: 10.1137/23M1582862
      Issue No: Vol. 6, No. 3 (2024)
       
  • Gradient Descent in the Absence of Global Lipschitz Continuity of the
           Gradients

      Authors: Vivak Patel, Albert S. Berahas
      Pages: 602 - 626
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 602-626, September 2024.
      Abstract.Gradient descent (GD) is a collection of continuous optimization methods that have achieved immeasurable success in practice. Owing to data science applications, GD with diminishing step sizes has become a prominent variant. While this variant of GD has been well studied in the literature for objectives with globally Lipschitz continuous gradients or by requiring bounded iterates, objectives from data science problems do not satisfy such assumptions. Thus, in this work, we provide a novel global convergence analysis of GD with diminishing step sizes for differentiable nonconvex functions whose gradients are only locally Lipschitz continuous. Through our analysis, we generalize what is known about gradient descent with diminishing step sizes, including interesting topological facts, and we elucidate the varied behaviors that can occur in the previously overlooked divergence regime. Thus, we provide a general global convergence analysis of GD with diminishing step sizes under realistic conditions for data science problems.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-07-05T07:00:00Z
      DOI: 10.1137/22M1527210
      Issue No: Vol. 6, No. 3 (2024)
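      The diminishing-step-size scheme analyzed in this paper can be sketched in a few lines. This is only an illustrative toy (not the authors' analysis or code): the harmonic schedule alpha_k = alpha0 / (k + 1) and the test function f(x) = x**4, whose gradient 4x**3 is locally but not globally Lipschitz continuous, are choices made here for illustration.

```python
def gd_diminishing(grad, x0, alpha0=0.1, iters=2000):
    """Gradient descent with diminishing step sizes alpha_k = alpha0 / (k + 1)."""
    x = float(x0)
    for k in range(iters):
        x -= alpha0 / (k + 1) * grad(x)
    return x

# f(x) = x**4 has gradient 4 * x**3, which is locally but not globally
# Lipschitz continuous -- the regime this paper's analysis covers.
x_final = gd_diminishing(lambda x: 4.0 * x**3, x0=1.0)
```

      Because the harmonic step sizes are not summable, the iterate keeps making progress toward the minimizer at 0, though slowly; with summable steps it could stall away from any stationary point.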
       
  • Three-Operator Splitting for Learning to Predict Equilibria in Convex
           Games

      Authors: D. McKenzie, H. Heaton, Q. Li, S. Wu Fung, S. Osher, W. Yin
      Pages: 627 - 648
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 627-648, September 2024.
      Abstract.Systems of competing agents can often be modeled as games. Assuming rationality, the most likely outcomes are given by an equilibrium, e.g., a Nash equilibrium. In many practical settings, games are influenced by context, i.e., additional data beyond the control of any agent (e.g., weather for traffic and fiscal policy for market economies). Often, the exact game mechanics are unknown, yet vast amounts of historical data consisting of (context, equilibrium) pairs are available, raising the possibility of learning a solver that predicts the equilibria given only the context. We introduce Nash fixed-point networks (N-FPNs), a class of neural networks that naturally output equilibria. Crucially, N-FPNs employ a constraint decoupling scheme to handle complicated agent action sets while avoiding expensive projections. Empirically, we find that N-FPNs are compatible with the recently developed Jacobian-free backpropagation technique for training implicit networks, making them significantly faster and easier to train than prior models. Our experiments show that N-FPNs are capable of scaling to problems orders of magnitude larger than existing learned game solvers. All code is available online.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-07-11T07:00:00Z
      DOI: 10.1137/22M1544531
      Issue No: Vol. 6, No. 3 (2024)
       
  • ABBA Neural Networks: Coping with Positivity, Expressivity, and Robustness

      Authors: Ana Neacşu, Jean-Christophe Pesquet, Vlad Vasilescu, Corneliu Burileanu
      Pages: 649 - 678
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 649-678, September 2024.
      Abstract.We introduce ABBA networks, a novel class of (almost) nonnegative neural networks, which are shown to possess a series of appealing properties. In particular, we demonstrate that these networks are universal approximators while enjoying the advantages of nonnegative weighted networks. We derive tight Lipschitz bounds in both the fully connected and convolutional cases. We propose a strategy for designing ABBA nets that are robust against adversarial attacks, by finely controlling the Lipschitz constant of the network during the training phase. We show that our method outperforms other state-of-the-art defenses against adversarial white-box attackers. Experiments are performed on image classification tasks on four benchmark datasets.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-07-15T07:00:00Z
      DOI: 10.1137/23M1589591
      Issue No: Vol. 6, No. 3 (2024)
       
  • Memory Capacity of Two Layer Neural Networks with Smooth Activations

      Authors: Liam Madden, Christos Thrampoulidis
      Pages: 679 - 702
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 679-702, September 2024.
      Abstract.Determining the memory capacity of two layer neural networks with [math] hidden neurons and input dimension [math] (i.e., [math] total trainable parameters), which refers to the largest size of general data the network can memorize, is a fundamental machine learning question. For activations that are real analytic at a point and, if restricting to a polynomial there, have sufficiently high degree, we establish a lower bound of [math] and optimality up to a factor of approximately 2. All practical activations, such as sigmoids, Heaviside, and the rectified linear unit (ReLU), are real analytic at a point. Furthermore, the degree condition is mild, requiring, for example, that [math] if the activation is [math]. Analogous prior results were limited to Heaviside and ReLU activations—our result covers almost everything else. In order to analyze general activations, we derive the precise generic rank of the network’s Jacobian, which can be written in terms of Hadamard powers and the Khatri–Rao product. Our analysis extends classical linear algebraic facts about the rank of Hadamard powers. Overall, our approach differs from prior works on memory capacity and holds promise for extending to deeper models and other architectures.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-07-16T07:00:00Z
      DOI: 10.1137/23M1599355
      Issue No: Vol. 6, No. 3 (2024)
       
  • Spectral Triadic Decompositions of Real-World Networks

      Authors: Sabyasachi Basu, Suman Kalyan Bera, C. Seshadhri
      Pages: 703 - 730
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 3, Page 703-730, September 2024.
      Abstract.A fundamental problem in mathematics and network analysis is to find conditions under which a graph can be partitioned into smaller pieces. A ubiquitous tool for this partitioning is the Fiedler vector or discrete Cheeger inequality. These results relate the graph spectrum (eigenvalues of the normalized adjacency matrix) to the ability to break a graph into two pieces, with few edge deletions. An entire subfield of mathematics, called spectral graph theory, has emerged from these results. Yet these results do not say anything about the rich community structure exhibited by real-world networks, which typically have a significant fraction of edges contained in numerous densely clustered blocks. Inspired by the properties of real-world networks, we discover a new spectral condition that relates eigenvalue powers to a network decomposition into densely clustered blocks. We call this the spectral triadic decomposition. Our relationship exactly predicts the existence of community structure, as commonly seen in real networked data. Our proof provides an efficient algorithm to produce the spectral triadic decomposition. We observe on numerous social, coauthorship, and citation network datasets that these decompositions have significant correlation with semantically meaningful communities.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-08-06T07:00:00Z
      DOI: 10.1137/23M1586926
      Issue No: Vol. 6, No. 3 (2024)
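      For readers unfamiliar with the Fiedler-vector baseline this abstract contrasts against, spectral bisection can be sketched as follows. The toy graph (two triangles joined by a bridge) is chosen here for illustration; this is not the paper's spectral triadic decomposition.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# bisects the graph with few cut edges: split nodes by its sign.
vals, vecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
part = vecs[:, 1] > 0
```

      On this graph the sign pattern recovers the two triangles, cutting only the bridge; the point of the paper above is that a single bisection of this kind says nothing about the many densely clustered blocks of real networks.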
       
  • Efficiency of ETA Prediction

      Authors: Chiwei Yan, James Johndrow, Dawn Woodard, Yanwei Sun
      Pages: 227 - 253
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 227-253, June 2024.
      Abstract. Modern mobile applications such as navigation services and ride-sharing platforms rely heavily on geospatial technologies, most critically predictions of the time required for a vehicle to traverse a particular route, or the so-called estimated time of arrival (ETA). There are various methods used in practice, which differ in terms of the geographic granularity at which the predictive model is trained—e.g., segment-based methods predict travel time at the level of road segments (or a combination of several adjacent road segments) and then aggregate across the route, whereas route-based methods use generic information about the trip, such as origin and destination, to predict travel time. Though various forms of these methods have been developed, there has been no rigorous theoretical comparison regarding their accuracies, and empirical studies have, in many cases, drawn opposite conclusions. We provide the first theoretical analysis of the predictive accuracy of various ETA prediction methods and argue that maintaining a segment-level architecture in predicting travel time is often of first-order importance. Our work highlights that the accuracy of ETA prediction is driven not just by the sophistication of the model but also by the spatial granularity at which those methods are applied.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-04-04T07:00:00Z
      DOI: 10.1137/23M155699X
      Issue No: Vol. 6, No. 2 (2024)
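      The segment-based versus route-based distinction in this abstract can be made concrete with a toy sketch. All data and function names below are hypothetical, chosen for illustration; these are not the paper's models.

```python
# Hypothetical historical travel times (seconds) for three road segments.
segment_times = {
    "A": [30.0, 32.0, 28.0],
    "B": [60.0, 66.0, 54.0],
    "C": [45.0, 45.0, 45.0],
}

def mean(xs):
    return sum(xs) / len(xs)

def eta_segment_based(route):
    # Segment-based: predict each segment's travel time separately,
    # then aggregate (here: sum) along the route.
    return sum(mean(segment_times[s]) for s in route)

def eta_route_based(route, trips):
    # Route-based: predict from trip-level history of end-to-end times
    # for the same route, ignoring its segment structure.
    return mean([t for r, t in trips if r == route])

route = ("A", "B", "C")
trips = [(route, 140.0), (route, 130.0)]  # (route, observed total time)
```

      The paper's point is about which granularity generalizes better: the segment-based predictor reuses per-segment data across all routes that share a segment, whereas the route-based predictor needs history for each specific route.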
       
  • The Sample Complexity of Sparse Multireference Alignment and
           Single-Particle Cryo-Electron Microscopy

      Authors: Tamir Bendory, Dan Edidin
      Pages: 254 - 282
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 254-282, June 2024.
      Abstract.Multireference alignment (MRA) is the problem of recovering a signal from its multiple noisy copies, each acted upon by a random group element. MRA is mainly motivated by single-particle cryo–electron microscopy (cryo-EM) that has recently joined X-ray crystallography as one of the two leading technologies to reconstruct biological molecular structures. Previous papers have shown that, in the high-noise regime, the sample complexity of MRA and cryo-EM is [math], where [math] is the number of observations, [math] is the variance of the noise, and [math] is the lowest-order moment of the observations that uniquely determines the signal. In particular, it was shown that, in many cases, [math] for generic signals, and thus, the sample complexity is [math]. In this paper, we analyze the second moment of the MRA and cryo-EM models. First, we show that, in both models, the second moment determines the signal up to a set of unitary matrices whose dimension is governed by the decomposition of the space of signals into irreducible representations of the group. Second, we derive sparsity conditions under which a signal can be recovered from the second moment, implying sample complexity of [math]. Notably, we show that the sample complexity of cryo-EM is [math] if at most one-third of the coefficients representing the molecular structure are nonzero; this bound is near-optimal. The analysis is based on tools from representation theory and algebraic geometry. We also derive bounds on recovering a sparse signal from its power spectrum, which is the main computational problem of X-ray crystallography.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-04-04T07:00:00Z
      DOI: 10.1137/23M155685X
      Issue No: Vol. 6, No. 2 (2024)
       
  • Reversible Gromov–Monge Sampler for Simulation-Based Inference

      Authors: YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang
      Pages: 283 - 310
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 283-310, June 2024.
      Abstract. This paper introduces a new simulation-based inference procedure to model and sample from multidimensional probability distributions given access to independent and identically distributed samples, circumventing the usual approaches of explicitly modeling the density function or designing Markov chain Monte Carlo. Motivated by the seminal work on distance and isomorphism between metric measure spaces, we develop a new transform sampler to perform simulation-based inference, which estimates a notion of optimal alignments between two heterogeneous metric measure spaces [math] and [math] from empirical data sets, with estimated maps that approximately push forward one measure [math] to the other [math], and vice versa. We introduce a new notion called the reversible Gromov–Monge (RGM) distance, providing mathematical formalism behind the new sampler. We study the statistical rate of convergence of the new transform sampler, along with several analytic properties of the RGM distance and operator viewpoints of transform sampling. Synthetic and real-world examples showcasing the effectiveness of the new sampler are also demonstrated.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-04-05T07:00:00Z
      DOI: 10.1137/23M1550384
      Issue No: Vol. 6, No. 2 (2024)
       
  • Optimal Estimation of Smooth Transport Maps with Kernel SoS

      Authors: Adrien Vacher, Boris Muzellec, Francis Bach, François-Xavier Vialard, Alessandro Rudi
      Pages: 311 - 342
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 311-342, June 2024.
      Abstract.Under smoothness conditions, it was recently shown by Vacher et al. [Proceedings of the 34th Conference on Learning Theory, Proc. Mach. Learn. Res. 134, 2021] that the squared Wasserstein distance between two distributions could be approximately computed in polynomial time with appealing statistical error bounds. In this paper, we propose to extend their result to the problem of estimating in [math] distance the transport map between two distributions. Also building upon the kernelized sum-of-squares approach, a way to model smooth positive functions, we derive a computationally tractable estimator of the transport map. Contrary to the aforementioned work, the dual problem that we solve is closer to the so-called semidual formulation of optimal transport that is known to gain convexity with respect to the linear dual formulation. After deriving a new stability result on the semidual and using localization-like techniques through Gagliardo–Nirenberg inequalities, we manage to prove under the same assumptions as in Vacher et al. that our estimator is minimax optimal up to polylog factors. Then we prove that this estimator can be computed in the worst case in [math] time, where [math] is the number of samples, and show how to improve its practical computation with a Nyström approximation scheme, a classical tool in kernel methods. Finally, we showcase several numerical simulations in medium dimension, where we compute our estimator on simple examples.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-04-15T07:00:00Z
      DOI: 10.1137/22M1528847
      Issue No: Vol. 6, No. 2 (2024)
       
  • Learning Functions Varying along a Central Subspace

      Authors: Hao Liu, Wenjing Liao
      Pages: 343 - 371
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 343-371, June 2024.
      Abstract. Many functions of interest are in a high-dimensional space but exhibit low-dimensional structures. This paper studies regression of an [math]-Hölder function [math] in [math] which varies along a central subspace of dimension [math] while [math]. A direct approximation of [math] in [math] with an [math] accuracy requires a number of samples [math] on the order of [math]. In this paper, we analyze the generalized contour regression (GCR) algorithm for the estimation of the central subspace and use piecewise polynomials for function approximation. GCR is among the best estimators for the central subspace, but its sample complexity is an open question. In this paper, we partially answer this question by proving that if a variance quantity is exactly known, GCR leads to a mean squared estimation error of [math] for the central subspace. The estimation error of this variance quantity is also given in this paper. The mean squared regression error of [math] is proved to be on the order of [math], where the exponent depends on the dimension of the central subspace [math] instead of the ambient space [math]. This result demonstrates that GCR is effective in learning the low-dimensional central subspace. We also propose a modified GCR with improved efficiency. The convergence rate is validated through several numerical experiments.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-04-22T07:00:00Z
      DOI: 10.1137/23M1557751
      Issue No: Vol. 6, No. 2 (2024)
       
  • Structural Balance and Random Walks on Complex Networks with Complex
           Weights

      Authors: Yu Tian, Renaud Lambiotte
      Pages: 372 - 399
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 372-399, June 2024.
      Abstract. Complex numbers define the relationship between entities in many situations. A canonical example would be the off-diagonal terms in a Hamiltonian matrix in quantum physics. Recent years have seen increasing interest in extending the tools of network science to networks whose edge weights are complex numbers. Here, we focus on the case when the weight matrix is Hermitian, a reasonable assumption in many applications, and investigate both structural and dynamical properties of networks with complex weights. Building on concepts from signed graphs, we introduce a classification of complex-weighted networks based on the notion of structural balance and illustrate the shared spectral properties within each type. We then apply the results to characterize the dynamics of random walks on complex-weighted networks, where local consensus can be achieved asymptotically when the graph is structurally balanced, while global consensus will be obtained when it is strictly unbalanced. Finally, we explore potential applications of our findings by generalizing the notion of cut and propose an associated spectral clustering algorithm. We also provide further characteristics of the magnetic Laplacian, associating directed networks to complex-weighted ones. The performance of the algorithm is verified on both synthetic and real networks.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-05-02T07:00:00Z
      DOI: 10.1137/23M1584265
      Issue No: Vol. 6, No. 2 (2024)
       
  • Rigorous Dynamical Mean-Field Theory for Stochastic Gradient Descent
           Methods

      Authors: Cédric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, Lenka Zdeborová
      Pages: 400 - 427
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 400-427, June 2024.
      Abstract.We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods, learning an estimator (e.g., M-estimator, shallow neural network) from observations on Gaussian data with empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) or Nesterov acceleration. The obtained equations match those resulting from the discretization of dynamical mean-field theory equations from statistical physics when applied to the corresponding gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics and to include nonseparable update functions, allowing datasets with nonidentity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch size and constant learning rates.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-05-06T07:00:00Z
      DOI: 10.1137/23M1594388
      Issue No: Vol. 6, No. 2 (2024)
       
  • Robust and Tuning-Free Sparse Linear Regression via Square-Root Slope

      Authors: Stanislav Minsker, Mohamed Ndaoud, Lang Wang
      Pages: 428 - 453
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 428-453, June 2024.
      Abstract.We consider the high-dimensional linear regression model and assume that a fraction of the measurements are altered by an adversary with complete knowledge of the data and the underlying distribution. We are interested in a scenario where dense additive noise is heavy-tailed, while the measurement vectors follow a sub-Gaussian distribution. Within this framework, we establish minimax lower bounds for the performance of an arbitrary estimator that depend on the fraction of corrupted observations as well as the tail behavior of the additive noise. Moreover, we design a modification of the so-called square-root Slope estimator with several desirable features: (a) It is provably robust to adversarial contamination and satisfies performance guarantees in the form of sub-Gaussian deviation inequalities that match the lower error bounds, up to logarithmic factors; (b) it is fully adaptive with respect to the unknown sparsity level and the variance of the additive noise; and (c) it is computationally tractable as a solution of a convex optimization problem. To analyze performance of the proposed estimator, we prove several properties of matrices with sub-Gaussian rows that may be of independent interest.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-05-31T07:00:00Z
      DOI: 10.1137/23M1608690
      Issue No: Vol. 6, No. 2 (2024)
       
  • The Common Intuition to Transfer Learning Can Win or Lose: Case Studies
           for Linear Regression

      Authors: Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk
      Pages: 454 - 480
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 454-480, June 2024.
      Abstract.We study a fundamental transfer learning process from source to target linear regression tasks, including overparameterized settings where there are more learned parameters than data samples. The target task learning is addressed by using its training data together with the parameters previously computed for the source task. We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already learned source parameters. We analytically characterize the generalization performance of our transfer learning approach and demonstrate its ability to resolve the peak in generalization errors in double descent phenomena of the minimum [math]-norm solution to linear regression. Moreover, we show that for sufficiently related tasks, the optimally tuned transfer learning approach can outperform the optimally tuned ridge regression method, even when the true parameter vector conforms to an isotropic Gaussian prior distribution. Namely, we demonstrate that transfer learning can beat the minimum mean square error (MMSE) solution of the independent target task. Our results emphasize the ability of transfer learning to extend the solution space to the target task and, by that, to have an improved MMSE solution. We formulate the linear MMSE solution to our transfer learning setting and point out its key differences from the common design philosophy to transfer learning.
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-05-31T07:00:00Z
      DOI: 10.1137/23M1563062
      Issue No: Vol. 6, No. 2 (2024)
       
  • Scalable Tensor Methods for Nonuniform Hypergraphs

      Authors: Sinan G. Aksoy, Ilya Amburg, Stephen J. Young
      Pages: 481 - 503
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 481-503, June 2024.
      Abstract. While multilinear algebra appears natural for studying the multiway interactions modeled by hypergraphs, tensor methods for general hypergraphs have been stymied by theoretical and practical barriers. A recently proposed adjacency tensor is applicable to nonuniform hypergraphs but is prohibitively costly to form and analyze in practice. We develop tensor times same vector (TTSV) algorithms for this tensor, which improve complexity from [math] to a low-degree polynomial in [math], where [math] is the number of vertices and [math] is the maximum hyperedge size. Our algorithms are implicit, avoiding formation of the order [math] adjacency tensor. We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms. We also show that these tensor measures offer information complementary to analogous graph-reduction approaches on data and can detect higher-order structure that many existing matrix-based approaches provably cannot.
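      The "implicit" idea can be sketched in the much simpler 3-uniform case (the paper handles general nonuniform hypergraphs; this toy code is ours, not the authors'): compute b = T v v by looping over hyperedges instead of materializing the n x n x n adjacency tensor.

```python
import numpy as np
from itertools import permutations

def ttsv_implicit(n, edges, v):
    """Tensor times same vector, edge by edge, without forming the tensor."""
    b = np.zeros(n)
    for i, j, k in edges:         # each edge is 3 distinct vertices
        # The symmetric adjacency tensor has an entry for every permutation
        # of (i, j, k), so each vertex picks up 2 * product of the other two.
        b[i] += 2 * v[j] * v[k]
        b[j] += 2 * v[i] * v[k]
        b[k] += 2 * v[i] * v[j]
    return b

def ttsv_dense(n, edges, v):
    """Naive baseline: build the full order-3 adjacency tensor, then contract."""
    T = np.zeros((n, n, n))
    for e in edges:
        for p in permutations(e):
            T[p] = 1.0
    return np.einsum('ijk,j,k->i', T, v, v)

edges = [(0, 1, 2), (1, 2, 3)]
v = np.array([1.0, 2.0, 3.0, 4.0])
```

      The implicit version costs O(m) per product rather than O(n^3), which is the spirit of the speedup the abstract claims for the general nonuniform tensor.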
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-06-11T07:00:00Z
      DOI: 10.1137/23M1584472
      Issue No: Vol. 6, No. 2 (2024)
       
  • The Geometric Median and Applications to Robust Mean Estimation

      Authors: Stanislav Minsker, Nate Strawn
      Pages: 504 - 533
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 504-533, June 2024.
      Abstract. This paper is devoted to the statistical and numerical properties of the geometric median and its applications to the problem of robust mean estimation via the median of means principle. Our main theoretical results include (a) an upper bound for the distance between the mean and the median for general absolutely continuous distributions in [math], and examples of specific classes of distributions for which these bounds do not depend on the ambient dimension [math]; (b) exponential deviation inequalities for the distance between the sample and the population versions of the geometric median, which again depend only on trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median of means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, which is an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a “local quadratic growth” condition that allows one to translate suboptimality bounds for the objective function into the corresponding bounds for the numerical approximation to the median itself, and we propose a simple stopping rule, applicable to any optimization method, that yields explicit error guarantees. We conclude with numerical experiments, including an application to estimating mean values of log-returns for S&P 500 data.
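      As a hedged illustration of the object under study (our own toy code, not the paper's method): the geometric median minimizes the sum of Euclidean distances to the data points, and the classic Weiszfeld fixed-point iteration approximates it, giving a mean estimate that is barely moved by gross outliers.

```python
import numpy as np

def geometric_median(X, iters=200, eps=1e-9):
    """Weiszfeld iteration for argmin_m sum_i ||m - x_i||."""
    m = X.mean(axis=0)                 # start from the sample mean
    for _ in range(iters):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, eps)         # avoid division by zero at a data point
        w = 1.0 / d
        m = (w[:, None] * X).sum(axis=0) / w.sum()
    return m

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:5] += 100.0                         # a few gross outliers
# The sample mean is dragged far from the origin; the geometric median is not.
```

      Applying the same estimator to bucket means rather than raw points gives the median-of-means construction the abstract refers to.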
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-06-12T07:00:00Z
      DOI: 10.1137/23M1592420
      Issue No: Vol. 6, No. 2 (2024)
       
  • Max-Affine Regression via First-Order Methods

      Authors: Seonho Kim, Kiryung Lee
      Pages: 534 - 552
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 534-552, June 2024.
      Abstract. We consider regression of a max-affine model, which produces a piecewise linear model by combining affine models via the max function. The max-affine model arises ubiquitously in signal processing and statistics, including multiclass classification, auction problems, and convex regression. It also generalizes phase retrieval and the learning of rectified linear unit (ReLU) activation functions. We present a nonasymptotic convergence analysis of gradient descent (GD) and mini-batch stochastic gradient descent (SGD) for max-affine regression when the model is observed at random locations satisfying sub-Gaussianity and anticoncentration conditions, with additive sub-Gaussian noise. Under these assumptions, suitably initialized GD and SGD converge linearly to a neighborhood of the ground truth specified by the corresponding error bound. We provide numerical results that corroborate the theoretical findings. Importantly, SGD not only converges faster in run time with fewer observations than alternating minimization and GD in the noiseless scenario but also outperforms them in low-sample scenarios with noise.
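      A minimal sketch of the setting (our own toy code, not the authors' algorithm or initialization): fit y = max_k (a_k . x + b_k) by full-batch gradient descent on the squared loss. For each sample, only the active (maximizing) affine piece receives a gradient.

```python
import numpy as np

def fit_max_affine(X, y, K=2, steps=2000, lr=0.1, seed=1):
    """Gradient descent for the max-affine model with K affine pieces."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A, b = rng.normal(size=(K, d)), rng.normal(size=K)
    for _ in range(steps):
        Z = X @ A.T + b                      # (n, K): values of all pieces
        k = Z.argmax(axis=1)                 # active piece per sample
        r = Z[np.arange(n), k] - y           # residuals of the prediction
        for j in range(K):
            m = k == j
            if m.any():                      # only active samples update piece j
                A[j] -= lr * 2.0 * (r[m, None] * X[m]).sum(axis=0) / n
                b[j] -= lr * 2.0 * r[m].sum() / n
    return A, b

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
A_true, b_true = np.array([[1.0, 2.0], [-1.0, 0.5]]), np.array([0.0, 1.0])
y = (X @ A_true.T + b_true).max(axis=1)      # noiseless max-affine data
A_hat, b_hat = fit_max_affine(X, y)
```

      With a random initialization GD may settle in a poor local minimum, which is why the paper's guarantees assume a suitable initialization.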
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-06-20T07:00:00Z
      DOI: 10.1137/23M1594662
      Issue No: Vol. 6, No. 2 (2024)
       
  • A Diffusion Process Perspective on Posterior Contraction Rates for Parameters

      Authors: Wenlong Mou, Nhat Ho, Martin Wainwright, Peter Bartlett, Michael Jordan
      Pages: 553 - 577
      Abstract: SIAM Journal on Mathematics of Data Science, Volume 6, Issue 2, Page 553-577, June 2024.
      Abstract. We analyze the posterior contraction rates of parameters in Bayesian models via the Langevin diffusion process, in particular by controlling moments of the stochastic process and taking limits. Analogous to the nonasymptotic analysis of statistical [math]-estimators and stochastic optimization algorithms, our contraction rates depend on the structure of the population log-likelihood function, and stochastic perturbation bounds between the population and sample log-likelihood functions. Convergence rates are determined by a nonlinear equation that relates the population-level structure to stochastic perturbation terms, along with a term characterizing the diffusive behavior. Based on this technique, we also prove nonasymptotic versions of a Bernstein–von Mises guarantee for the posterior. We illustrate this general theory by deriving posterior convergence rates for various concrete examples.
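      For intuition about the analysis device the abstract mentions, here is a hedged toy sketch (our own construction, unrelated to the paper's proofs): an Euler discretization of the Langevin diffusion whose stationary law is a Gaussian posterior, so the chain contracts to the posterior mean with the posterior's spread.

```python
import numpy as np

def langevin_samples(grad_log_post, theta0, step, n_steps, seed=0):
    """Unadjusted Langevin algorithm: d theta = grad log pi dt + sqrt(2) dW."""
    rng = np.random.default_rng(seed)
    theta, out = theta0, []
    for _ in range(n_steps):
        theta = theta + step * grad_log_post(theta) + np.sqrt(2 * step) * rng.normal()
        out.append(theta)
    return np.array(out)

mu, sigma = 2.0, 0.5
grad = lambda t: -(t - mu) / sigma**2        # grad log density of N(mu, sigma^2)
chain = langevin_samples(grad, theta0=0.0, step=1e-3, n_steps=50000)
# After burn-in the chain concentrates near mu with spread about sigma.
```

      Controlling moments of this process, as the paper does in far greater generality, is what links the diffusion's behavior to posterior contraction.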
      Citation: SIAM Journal on Mathematics of Data Science
      PubDate: 2024-06-20T07:00:00Z
      DOI: 10.1137/22M1516038
      Issue No: Vol. 6, No. 2 (2024)
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


 
Home (Search)
API
About JournalTOCs
News (blog, publications)
JournalTOCs on Twitter   JournalTOCs on Facebook

JournalTOCs © 2009-
JournalTOCs
 
 