Subjects -> MATHEMATICS (Total: 1013 journals)
    - APPLIED MATHEMATICS (92 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (714 journals)
    - MATHEMATICS (GENERAL) (45 journals)
    - NUMERICAL ANALYSIS (26 journals)
    - PROBABILITIES AND MATH STATISTICS (113 journals)

MATHEMATICS (714 journals)

Research in the Mathematical Sciences     Open Access  
Research Journal of Pure Algebra     Open Access   (Followers: 1)
Researches in Mathematics     Open Access  
Results in Control and Optimization     Open Access  
Results in Mathematics     Hybrid Journal  
Results in Nonlinear Analysis     Open Access  
Review of Symbolic Logic     Full-text available via subscription   (Followers: 2)
Reviews in Mathematical Physics     Hybrid Journal   (Followers: 1)
Revista Baiana de Educação Matemática     Open Access  
Revista Bases de la Ciencia     Open Access  
Revista BoEM - Boletim online de Educação Matemática     Open Access  
Revista Colombiana de Matemáticas     Open Access   (Followers: 1)
Revista de Ciencias     Open Access  
Revista de Educación Matemática     Open Access  
Revista de la Escuela de Perfeccionamiento en Investigación Operativa     Open Access  
Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas     Partially Free  
Revista de Matemática : Teoría y Aplicaciones     Open Access   (Followers: 1)
Revista Digital: Matemática, Educación e Internet     Open Access  
Revista Electrónica de Conocimientos, Saberes y Prácticas     Open Access  
Revista Integración : Temas de Matemáticas     Open Access  
Revista Internacional de Sistemas     Open Access  
Revista Latinoamericana de Etnomatemática     Open Access  
Revista Latinoamericana de Investigación en Matemática Educativa     Open Access  
Revista Matemática Complutense     Hybrid Journal  
Revista REAMEC : Rede Amazônica de Educação em Ciências e Matemática     Open Access  
Revista SIGMA     Open Access  
Ricerche di Matematica     Hybrid Journal  
RMS : Research in Mathematics & Statistics     Open Access  
Royal Society Open Science     Open Access   (Followers: 7)
Russian Journal of Mathematical Physics     Full-text available via subscription  
Russian Mathematics     Hybrid Journal  
Sahand Communications in Mathematical Analysis     Open Access  
Sampling Theory, Signal Processing, and Data Analysis     Hybrid Journal  
São Paulo Journal of Mathematical Sciences     Hybrid Journal  
Science China Mathematics     Hybrid Journal   (Followers: 1)
Science Progress     Full-text available via subscription   (Followers: 1)
Sciences & Technologie A : sciences exactes     Open Access  
Selecta Mathematica     Hybrid Journal   (Followers: 1)
SeMA Journal     Hybrid Journal  
Semigroup Forum     Hybrid Journal   (Followers: 1)
Set-Valued and Variational Analysis     Hybrid Journal  
SIAM Journal on Applied Mathematics     Hybrid Journal   (Followers: 11)
SIAM Journal on Computing     Hybrid Journal   (Followers: 11)
SIAM Journal on Control and Optimization     Hybrid Journal   (Followers: 18)
SIAM Journal on Discrete Mathematics     Hybrid Journal   (Followers: 8)
SIAM Journal on Financial Mathematics     Hybrid Journal   (Followers: 3)
SIAM Journal on Mathematics of Data Science     Hybrid Journal   (Followers: 1)
SIAM Journal on Matrix Analysis and Applications     Hybrid Journal   (Followers: 3)
SIAM Journal on Optimization     Hybrid Journal   (Followers: 12)
Siberian Advances in Mathematics     Hybrid Journal  
Siberian Mathematical Journal     Hybrid Journal  
Sigmae     Open Access  
SILICON     Hybrid Journal  
SN Partial Differential Equations and Applications     Hybrid Journal  
Soft Computing     Hybrid Journal   (Followers: 7)
Statistics and Computing     Hybrid Journal   (Followers: 14)
Stochastic Analysis and Applications     Hybrid Journal   (Followers: 3)
Stochastic Partial Differential Equations : Analysis and Computations     Hybrid Journal   (Followers: 2)
Stochastic Processes and their Applications     Hybrid Journal   (Followers: 6)
Stochastics and Dynamics     Hybrid Journal   (Followers: 2)
Studia Scientiarum Mathematicarum Hungarica     Full-text available via subscription   (Followers: 1)
Studia Universitatis Babeș-Bolyai Informatica     Open Access  
Studies In Applied Mathematics     Hybrid Journal   (Followers: 1)
Studies in Mathematical Sciences     Open Access   (Followers: 1)
Superficies y vacio     Open Access  
Suska Journal of Mathematics Education     Open Access   (Followers: 1)
Swiss Journal of Geosciences     Hybrid Journal   (Followers: 1)
Synthesis Lectures on Algorithms and Software in Engineering     Full-text available via subscription   (Followers: 2)
Synthesis Lectures on Mathematics and Statistics     Full-text available via subscription   (Followers: 1)
Tamkang Journal of Mathematics     Open Access  
Tatra Mountains Mathematical Publications     Open Access  
Teaching Mathematics     Full-text available via subscription   (Followers: 10)
Teaching Mathematics and its Applications: An International Journal of the IMA     Hybrid Journal   (Followers: 4)
Teaching Statistics     Hybrid Journal   (Followers: 8)
Technometrics     Full-text available via subscription   (Followers: 8)
The Journal of Supercomputing     Hybrid Journal   (Followers: 1)
The Mathematica Journal     Open Access  
The Mathematical Gazette     Full-text available via subscription   (Followers: 1)
The Mathematical Intelligencer     Hybrid Journal  
The Ramanujan Journal     Hybrid Journal  
The VLDB Journal     Hybrid Journal   (Followers: 2)
Theoretical and Mathematical Physics     Hybrid Journal   (Followers: 7)
Theory and Applications of Graphs     Open Access  
Topological Methods in Nonlinear Analysis     Full-text available via subscription  
Transactions of the London Mathematical Society     Open Access   (Followers: 1)
Transformation Groups     Hybrid Journal  
Turkish Journal of Mathematics     Open Access  
Ukrainian Mathematical Journal     Hybrid Journal  
Uniciencia     Open Access  
Uniform Distribution Theory     Open Access  
Unisda Journal of Mathematics and Computer Science     Open Access  
Unnes Journal of Mathematics     Open Access   (Followers: 1)
Unnes Journal of Mathematics Education     Open Access   (Followers: 2)
Unnes Journal of Mathematics Education Research     Open Access   (Followers: 1)
Ural Mathematical Journal     Open Access  
Vestnik Samarskogo Gosudarstvennogo Tekhnicheskogo Universiteta. Seriya Fiziko-Matematicheskie Nauki     Open Access  
Vestnik St. Petersburg University: Mathematics     Hybrid Journal  
VFAST Transactions on Mathematics     Open Access   (Followers: 1)
Vietnam Journal of Mathematics     Hybrid Journal  
Vinculum     Full-text available via subscription  
Visnyk of V. N. Karazin Kharkiv National University. Ser. Mathematics, Applied Mathematics and Mechanics     Open Access   (Followers: 2)
Water SA     Open Access   (Followers: 1)
Water Waves     Hybrid Journal  
Zamm-Zeitschrift Fuer Angewandte Mathematik Und Mechanik     Hybrid Journal   (Followers: 1)
ZDM     Hybrid Journal   (Followers: 2)
Zeitschrift für angewandte Mathematik und Physik     Hybrid Journal   (Followers: 2)
Zeitschrift fur Energiewirtschaft     Hybrid Journal  
Zetetike     Open Access  

SIAM Journal on Matrix Analysis and Applications
Journal Prestige (SJR): 1.739
Citation Impact (citeScore): 2
Number of Followers: 3  
 
Hybrid Journal   * Containing 1 Open Access article(s) in this issue *
ISSN (Print) 0895-4798 - ISSN (Online) 1095-7162
Published by Society for Industrial and Applied Mathematics  [17 journals]
  • Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under
           Noisy Separability

      Authors: Tomohiko Mizutani
      Pages: 1029 - 1057
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1029-1057, September 2022.
      Hottopixx, proposed by Bittorf et al. at NIPS 2012, is an algorithm for solving nonnegative matrix factorization (NMF) problems under the separability assumption. Separable NMFs have important applications, such as topic extraction from documents and unmixing of hyperspectral images. In such applications, the robustness of the algorithm to noise is the key to success. Hottopixx has been shown to be robust to noise, and its robustness can be further enhanced through postprocessing. However, there is a drawback. Hottopixx and its postprocessing require us to estimate the noise level involved in the matrix we want to factorize before running, since they use it as part of the input data. The noise-level estimation is not an easy task. In this paper, we overcome this drawback. We present a refinement of Hottopixx and its postprocessing that runs without prior knowledge of the noise level. We show that the refinement has almost the same robustness to noise as the original algorithm.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-07T07:00:00Z
      DOI: 10.1137/21M1442206
      Issue No: Vol. 43, No. 3 (2022)
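
      As a concrete companion to the separable-NMF setting described above, the following NumPy sketch runs the classical successive projection algorithm (SPA) on a small synthetic noisy separable problem. It is not the Hottopixx linear program of the paper (nor its refinement); the sizes, noise level, and random seed are illustrative assumptions only.

      import numpy as np

      def spa(X, r):
          # Successive projection algorithm: under separability, the r "anchor"
          # columns of X can be found by repeatedly taking the column with the
          # largest residual norm and projecting it out.
          R = X.astype(float).copy()
          anchors = []
          for _ in range(r):
              j = int(np.argmax(np.linalg.norm(R, axis=0)))
              anchors.append(j)
              u = R[:, j] / np.linalg.norm(R[:, j])
              R -= np.outer(u, u @ R)
          return sorted(anchors)

      rng = np.random.default_rng(0)
      W = rng.random((50, 5))                                       # anchor (basis) columns
      H = np.hstack([np.eye(5), rng.dirichlet(np.ones(5), 95).T])   # remaining columns are convex mixtures
      X = W @ H + 1e-3 * rng.standard_normal((50, 100))             # noisy separable data matrix
      print(spa(X, 5))                                              # expected: [0, 1, 2, 3, 4]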
       
  • Stable and Efficient Computation of Generalized Polar Decompositions

      Authors: Peter Benner, Yuji Nakatsukasa, Carolin Penke
      Pages: 1058 - 1083
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1058-1083, September 2022.
      We present methods for computing the generalized polar decomposition of a matrix based on the dynamically weighted Halley iteration. This method is well established for computing the standard polar decomposition. A stable implementation is available, where matrix inversion is avoided and QR decompositions are used instead. We establish a natural generalization of this approach for computing generalized polar decompositions with respect to signature matrices. Again the inverse can be avoided by using a generalized QR decomposition called hyperbolic QR decomposition. However, this decomposition does not show the same favorable stability properties as its orthogonal counterpart. We overcome the numerical difficulties by generalizing the CholeskyQR2 method. This method computes the standard QR factorization in a stable way via two successive Cholesky factorizations. An even better numerical stability is achieved by employing permuted graph bases, yielding residuals of order $10^{-14}$ even for badly conditioned matrices, where other methods fail.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-11T07:00:00Z
      DOI: 10.1137/21M1411986
      Issue No: Vol. 43, No. 3 (2022)
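
      For orientation, the plain (not dynamically weighted) Halley iteration for the standard polar decomposition $A = UH$ can be sketched in a few lines of NumPy; the inverse-free QR-based formulation, the dynamic weighting, and the generalization to signature matrices that this paper develops are all omitted here.

      import numpy as np

      def polar_halley(A, tol=1e-12, maxit=100):
          # Plain Halley iteration X <- X (3I + X^*X)(I + 3 X^*X)^{-1},
          # which converges to the unitary polar factor for nonsingular A.
          X = A / np.linalg.norm(A, 2)        # scale so all singular values lie in (0, 1]
          I = np.eye(A.shape[1])
          for _ in range(maxit):
              S = X.conj().T @ X
              X_next = X @ (3 * I + S) @ np.linalg.inv(I + 3 * S)
              done = np.linalg.norm(X_next - X, 'fro') < tol * np.linalg.norm(X_next, 'fro')
              X = X_next
              if done:
                  break
          H = X.conj().T @ A                  # Hermitian positive semidefinite factor
          return X, (H + H.conj().T) / 2

      A = np.random.default_rng(1).standard_normal((6, 6))
      U, H = polar_halley(A)
      print(np.linalg.norm(U @ H - A), np.linalg.norm(U.conj().T @ U - np.eye(6)))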
       
  • An Algebraic Quantum Circuit Compression Algorithm for Hamiltonian
           Simulation

      Authors: Daan Camps, Efekan Kökcü, Lindsay Bassman, Wibe A. de Jong, Alexander F. Kemper, Roel Van Beeumen
      Pages: 1084 - 1108
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1084-1108, September 2022.
      Quantum computing is a promising technology that harnesses the peculiarities of quantum mechanics to deliver computational speedups for some problems that are intractable to solve on a classical computer. Current generation noisy intermediate-scale quantum (NISQ) computers are severely limited in terms of chip size and error rates. Shallow quantum circuits with uncomplicated topologies are essential for successful applications in the NISQ era. Based on matrix analysis, we derive localized circuit transformations to efficiently compress quantum circuits for simulation of certain spin Hamiltonians known as free fermions. The depth of the compressed circuits is independent of simulation time and grows linearly with the number of spins. The proposed numerical circuit compression algorithm is backward stable and scales cubically in the number of spins, enabling circuit synthesis beyond $\mathcal{O}(10^3)$ spins. The resulting quantum circuits have a simple nearest-neighbor topology, which makes them ideally suited for NISQ devices.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-11T07:00:00Z
      DOI: 10.1137/21M1439298
      Issue No: Vol. 43, No. 3 (2022)
       
  • Analytical Singular Value Decomposition for a Class of Stoichiometry
           Matrices

      Authors: Jacqueline Wentz, Jeffrey C. Cameron, David M. Bortz
      Pages: 1109 - 1147
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1109-1147, September 2022.
      We present the analytical singular value decomposition of the stoichiometry matrix for a spatially discrete reaction-diffusion system. The motivation for this work is to develop a matrix decomposition that can reveal hidden spatial flux patterns of chemical reactions. We consider a 1D domain with two subregions sharing a single common boundary. Each of the subregions is further partitioned into a finite number of compartments. Chemical reactions can occur within a compartment, whereas diffusion is represented as movement between adjacent compartments. Inspired by biology, we study both (1) the case where the reactions on each side of the boundary are different and only certain species diffuse across the boundary and (2) the case where reactions and diffusion are spatially homogeneous. We write the stoichiometry matrix for these two classes of systems using a Kronecker product formulation. For the first scenario, we apply linear perturbation theory to derive an approximate singular value decomposition in the limit as diffusion becomes much faster than reactions. For the second scenario, we derive an exact analytical singular value decomposition for all relative diffusion and reaction time scales. By writing the stoichiometry matrix using Kronecker products, we show that the singular vectors and values can also be written concisely using Kronecker products. Ultimately, we find that the singular value decomposition of the reaction-diffusion stoichiometry matrix depends on the singular value decompositions of smaller matrices. These smaller matrices represent modified versions of the reaction-only stoichiometry matrices and the analytically known diffusion-only stoichiometry matrix. Lastly, we present the singular value decomposition of the model for the Calvin cycle in cyanobacteria and demonstrate the accuracy of our formulation. The MATLAB code, available at www.github.com/MathBioCU/ReacDiffStoicSVD, provides routines for efficiently calculating the SVD for a given reaction network on a 1D spatial domain.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-13T07:00:00Z
      DOI: 10.1137/21M1418927
      Issue No: Vol. 43, No. 3 (2022)
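
      The Kronecker-product structure invoked above rests on a simple identity that is easy to check numerically: if $M = A \otimes B$, then an SVD of $M$ is obtained from the SVDs of $A$ and $B$ (up to reordering of the singular values). The sketch below verifies only this generic identity, not the paper's reaction-diffusion construction; the matrix sizes are arbitrary.

      import numpy as np

      rng = np.random.default_rng(2)
      A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))

      Ua, sa, Vat = np.linalg.svd(A)
      Ub, sb, Vbt = np.linalg.svd(B)
      M = np.kron(A, B)

      # (Ua x Ub)(Sa x Sb)(Va x Vb)^T reproduces M exactly ...
      print(np.allclose(np.kron(Ua, Ub) @ np.diag(np.kron(sa, sb)) @ np.kron(Vat, Vbt), M))
      # ... and the singular values of M are the pairwise products, after sorting.
      print(np.allclose(np.sort(np.kron(sa, sb))[::-1], np.linalg.svd(M, compute_uv=False)))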
       
  • Takagi Factorization of Matrices Depending on Parameters and Locating
           Degeneracies of Singular Values

      Authors: Luca Dieci, Alessandra Papini, Alessandro Pugliese
      Pages: 1148 - 1161
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1148-1161, September 2022.
      In this work, we consider the Takagi factorization of a matrix valued function depending on parameters. We give smoothness and genericity results and pay particular attention to the concerns caused by having either a singular value equal to 0 or multiple singular values. For these phenomena, we give theoretical results showing that their codimension is 2, and we further develop and test numerical methods to locate in parameter space values where these occurrences take place. Numerical study of the density of these occurrences is performed.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-13T07:00:00Z
      DOI: 10.1137/21M1456273
      Issue No: Vol. 43, No. 3 (2022)
       
  • Improved Variants of the Hutch++ Algorithm for Trace Estimation

      Authors: David Persson, Alice Cortinovis, Daniel Kressner
      Pages: 1162 - 1185
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1162-1185, September 2022.
      This paper is concerned with two improved variants of the Hutch++ algorithm for estimating the trace of a square matrix, implicitly given through matrix-vector products. Hutch++ combines randomized low-rank approximation in a first phase with stochastic trace estimation in a second phase. In turn, Hutch++ only requires $O\left(\varepsilon^{-1}\right)$ matrix-vector products to approximate the trace within a relative error $\varepsilon$ with high probability, provided that the matrix is symmetric positive semidefinite. This compares favorably with the $O\left(\varepsilon^{-2}\right)$ matrix-vector products needed when using stochastic trace estimation alone. In Hutch++, the number of matrix-vector products is fixed a priori and distributed in a prescribed fashion among the two phases. In this work, we derive an adaptive variant of Hutch++, which outputs an estimate of the trace that is within some prescribed error tolerance with a controllable failure probability, while splitting the matrix-vector products in a near-optimal way among the two phases. For the special case of a symmetric positive semidefinite matrix, we present another variant of Hutch++, called Nyström++, which utilizes the so-called Nyström approximation and requires only one pass over the matrix, as compared to two passes with Hutch++. We extend the analysis of Hutch++ to Nyström++. Numerical experiments demonstrate the effectiveness of our two new algorithms.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-13T07:00:00Z
      DOI: 10.1137/21M1447623
      Issue No: Vol. 43, No. 3 (2022)
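
      For readers new to Hutch++, a minimal NumPy sketch of the basic (non-adaptive) estimator is given below; the adaptive variant and Nyström++ introduced in this paper are not reproduced. The equal three-way split of the matrix-vector product budget and the test matrix are illustrative assumptions.

      import numpy as np

      def hutchpp(matvec, n, num_matvecs, rng):
          # Basic Hutch++: spend 1/3 of the budget sketching a low-rank part,
          # 1/3 forming Q^T A Q, and 1/3 on Hutchinson probes of the remainder.
          k = num_matvecs // 3
          S = rng.choice([-1.0, 1.0], size=(n, k))
          G = rng.choice([-1.0, 1.0], size=(n, k))
          Q, _ = np.linalg.qr(matvec(S))              # k matvecs
          G_perp = G - Q @ (Q.T @ G)                  # deflate the captured range
          return np.trace(Q.T @ matvec(Q)) + np.trace(G_perp.T @ matvec(G_perp)) / k

      rng = np.random.default_rng(3)
      B = rng.standard_normal((500, 500))
      A = B @ B.T                                     # symmetric positive semidefinite
      print(hutchpp(lambda X: A @ X, 500, 90, rng), np.trace(A))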
       
  • Matrix Pencils with Coefficients that have Positive Semidefinite Hermitian
           Parts

      Authors: C. Mehl, V. Mehrmann, M. Wojtylak
      Pages: 1186 - 1212
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1186-1212, September 2022.
      We analyze when an arbitrary matrix pencil is strictly equivalent to a dissipative Hamiltonian pencil and show that this heavily restricts the spectral properties. In order to relax the spectral properties, we introduce matrix pencils with coefficients that have positive semidefinite Hermitian parts. We relate the Kronecker structure of these pencils to that of an underlying skew-Hermitian pencil and discuss their regularity, index, numerical range, and location of eigenvalues. Further, we study matrix polynomials with positive semidefinite Hermitian coefficients and use linearizations with positive semidefinite Hermitian parts to derive sufficient conditions for having a spectrum in the left half plane and to derive bounds on the index.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-14T07:00:00Z
      DOI: 10.1137/21M1439997
      Issue No: Vol. 43, No. 3 (2022)
       
  • Diagonal Scalings for the Eigenstructure of Arbitrary Pencils

      Authors: Froilán M. Dopico, María C. Quintana, Paul Van Dooren
      Pages: 1213 - 1237
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1213-1237, September 2022.
      In this paper we show how to construct diagonal scalings for arbitrary matrix pencils $\lambda B-A$, in which both $A$ and $B$ are complex matrices (square or nonsquare). The goal of such diagonal scalings is to “balance” in some sense the row and column norms of the pencil. We see that the problem of scaling a matrix pencil is equivalent to the problem of scaling the row and column sums of a particular nonnegative matrix. However, it is known that there exist square and nonsquare nonnegative matrices that cannot be scaled arbitrarily. To address this issue, we consider an approximate embedded problem, in which the corresponding nonnegative matrix is square and can always be scaled. The new scaling methods are then based on the Sinkhorn--Knopp algorithm for scaling a square nonnegative matrix with total support to be doubly stochastic or on a variant of it. In addition, using results of U. G. Rothblum and H. Schneider [Linear Algebra Appl., 114--115 (1989), pp. 737--764], we give simple sufficient conditions on the zero pattern for the existence of diagonal scalings of square nonnegative matrices to have any prescribed common vector for the row and column sums. We illustrate numerically that the new scaling techniques for pencils improve the accuracy of the computation of their eigenvalues.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-07-18T07:00:00Z
      DOI: 10.1137/20M1364011
      Issue No: Vol. 43, No. 3 (2022)
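
      The Sinkhorn--Knopp building block mentioned above is easy to sketch: alternately rescale rows and columns of a square nonnegative matrix until it is (nearly) doubly stochastic. The toy implementation below assumes a strictly positive matrix, which trivially has total support; the pencil-scaling embedding of the paper is not reproduced.

      import numpy as np

      def sinkhorn_knopp(M, tol=1e-10, maxit=10_000):
          # Returns d_r, d_c such that diag(d_r) @ M @ diag(d_c) is
          # approximately doubly stochastic.
          n = M.shape[0]
          d_r, d_c = np.ones(n), np.ones(n)
          for _ in range(maxit):
              d_r = 1.0 / (M @ d_c)                   # make the row sums equal to 1
              d_c = 1.0 / (M.T @ d_r)                 # make the column sums equal to 1
              S = d_r[:, None] * M * d_c[None, :]
              if np.abs(S.sum(axis=1) - 1.0).max() < tol:
                  break
          return d_r, d_c

      rng = np.random.default_rng(4)
      M = rng.random((6, 6)) + 0.1
      d_r, d_c = sinkhorn_knopp(M)
      S = d_r[:, None] * M * d_c[None, :]
      print(S.sum(axis=0), S.sum(axis=1))             # both close to the all-ones vector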
       
  • Convergence to Singular Triplets in the Two-Sided Block-Jacobi SVD
           Algorithm with Dynamic Ordering

      Authors: Gabriel Okša, Yusaku Yamamoto, Marián Vajteršic
      Pages: 1238 - 1262
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1238-1262, September 2022.
      We study the convergence of computed quantities to singular triplets in the serial and parallel block-Jacobi singular value decomposition (SVD) algorithm with dynamic ordering. After eliminating possible zero singular values by two finite decompositions of a matrix $A\in\mathbb{C}^{m\times n},\, m\geq n$, which reduce the matrix dimensions to $n\times n$, it is shown that an iterated nonsingular matrix $A^{(k)}$ converges to a fixed diagonal matrix and its diagonal elements are the singular values of an initial matrix $A$. For the case of simple singular values, it is proved that the corresponding columns of the matrices of accumulated unitary transformations converge to corresponding left and right singular vectors. When a multiple singular value (or a cluster of singular values) is well separated from the other singular values, the convergence of two sequences of appropriate orthogonal projectors towards the orthogonal projectors onto the corresponding left and right subspaces is proved. Additionally, the convergence of orthogonal projectors leads to the convergence of certain computed subspaces towards the singular left and right subspaces spanned by left and right singular vectors corresponding to a multiple singular value or a cluster. An example computed in MATLAB illustrates the developed theory.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-01T07:00:00Z
      DOI: 10.1137/21M1411895
      Issue No: Vol. 43, No. 3 (2022)
       
  • A Theory of Quantum Subspace Diagonalization

      Authors: Ethan N. Epperly, Lin Lin, Yuji Nakatsukasa
      Pages: 1263 - 1290
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1263-1290, September 2022.
      Quantum subspace diagonalization methods are an exciting new class of algorithms for solving large-scale eigenvalue problems using quantum computers. Unfortunately, these methods require the solution of an ill-conditioned generalized eigenvalue problem, with a matrix pencil corrupted by a nonnegligible amount of noise that is far above the machine precision. Despite pessimistic predictions from classical worst-case perturbation theories, these methods can perform reliably well if the generalized eigenvalue problem is solved using a standard truncation strategy. By leveraging and advancing classical results in matrix perturbation theory, we provide a theoretical analysis of this surprising phenomenon, proving that under certain natural conditions, a quantum subspace diagonalization algorithm can accurately compute the smallest eigenvalue of a large Hermitian matrix. We give numerical experiments demonstrating the effectiveness of the theory and providing practical guidance for the choice of truncation level. Our new results can also be of independent interest to solving eigenvalue problems outside the context of quantum computation.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-01T07:00:00Z
      DOI: 10.1137/21M145954X
      Issue No: Vol. 43, No. 3 (2022)
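
      The "standard truncation strategy" analyzed above amounts to discarding the noise-dominated directions of the overlap matrix before solving the generalized eigenvalue problem $Hc=\lambda Sc$. Below is a minimal classical NumPy illustration on a synthetic ill-conditioned Krylov-type pencil; the truncation threshold, noise level, and problem sizes are arbitrary assumptions, not the paper's recommendations.

      import numpy as np

      def truncated_gevp(H, S, rel_threshold):
          # Keep only eigendirections of S above the threshold, whiten them,
          # and solve the reduced (well-conditioned) eigenvalue problem.
          w, V = np.linalg.eigh(S)
          keep = w > rel_threshold * w.max()
          W = V[:, keep] / np.sqrt(w[keep])
          return np.linalg.eigvalsh(W.T @ H @ W)

      rng = np.random.default_rng(5)
      n, m = 200, 12
      A = np.diag(np.linspace(-2.0, 3.0, n))          # Hermitian "Hamiltonian", smallest eigenvalue -2
      x = rng.standard_normal(n)
      cols = [x / np.linalg.norm(x)]
      for _ in range(m - 1):
          v = A @ cols[-1]
          cols.append(v / np.linalg.norm(v))          # normalized power basis: very ill-conditioned
      K = np.column_stack(cols)
      H, S = K.T @ A @ K, K.T @ K
      noise_H = 1e-8 * rng.standard_normal((m, m))    # simulated measurement noise
      noise_S = 1e-8 * rng.standard_normal((m, m))
      H = H + (noise_H + noise_H.T) / 2
      S = S + (noise_S + noise_S.T) / 2
      print(truncated_gevp(H, S, 1e-6).min())         # crude smallest-eigenvalue estimate (exact: -2.0)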
       
  • Subspaces Analysis for Random Projection UTV Framework

      Authors: Yuan Jian
      Pages: 1291 - 1307
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1291-1307, September 2022.
      The UTV decompositions are promising and computationally efficient alternatives to the singular value decomposition (SVD), which can provide high-quality information about rank, range, and nullspace. However, for large-scale matrices, we want even more computationally efficient algorithms. Recently, randomized algorithms, with their surprising reliability and computational efficiency, have become increasingly popular in many application areas. In this paper, we analyze the subspace properties of a random projection UTV framework and give bounds on the subspace distances between the exact and the approximate singular subspaces, where the approximate methods include random projection ULV, random projection URV, and random projection SVD. Numerical experiments demonstrate the effectiveness of the proposed bounds.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-04T07:00:00Z
      DOI: 10.1137/21M1451592
      Issue No: Vol. 43, No. 3 (2022)
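
      One of the factorizations compared in this entry, the random projection SVD (the basic randomized SVD), is short enough to sketch directly; the ULV/URV variants and the subspace-distance bounds themselves are not reproduced, and the oversampling parameter below is an arbitrary choice.

      import numpy as np

      def random_projection_svd(A, rank, oversample=10, rng=None):
          # Basic randomized SVD: sketch the range of A, orthonormalize,
          # then recover an approximate truncated SVD from the small projected matrix.
          rng = rng or np.random.default_rng()
          Omega = rng.standard_normal((A.shape[1], rank + oversample))
          Q, _ = np.linalg.qr(A @ Omega)
          U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
          return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

      rng = np.random.default_rng(6)
      A = rng.standard_normal((400, 30)) @ rng.standard_normal((30, 300))   # exactly rank 30
      U, s, Vt = random_projection_svd(A, 30, rng=rng)
      print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))           # tiny relative residual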
       
  • Cross-Interactive Residual Smoothing for Global and Block Lanczos-Type
           Solvers for Linear Systems with Multiple Right-Hand Sides

      Authors: Kensuke Aihara, Akira Imakura, Keiichi Morikuni
      Pages: 1308 - 1330
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1308-1330, September 2022.
      Global and block Krylov subspace methods are efficient iterative solvers for large sparse linear systems with multiple right-hand sides. However, global or block Lanczos-type solvers often exhibit large oscillations in the residual norms and may have a large residual gap relating to the loss of attainable accuracy of the approximations. Conventional residual smoothing schemes suppress these oscillations but cannot improve the attainable accuracy, whereas a recent residual smoothing scheme enables the improvement of the attainable accuracy for single right-hand-side Lanczos-type solvers. The underlying concept of this scheme is that the primary and smoothed sequences of the approximations and residuals influence one another, thereby avoiding the severe propagation of rounding errors. In the present study, we extend this cross-interactive residual smoothing to the case of solving linear systems with multiple right-hand sides. The resulting smoothed methods can reduce the residual gap with a low additional cost compared to their original counterparts. We demonstrate the effectiveness of the proposed approach through rounding error analysis and numerical experiments.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-08T07:00:00Z
      DOI: 10.1137/21M1436774
      Issue No: Vol. 43, No. 3 (2022)
       
  • A Uniform Spectral Analysis for a Preconditioned All-at-Once System from
           First-Order and Second-Order Evolutionary Problems

      Authors: Shu-Lin Wu, Tao Zhou, Zhi Zhou
      Pages: 1331 - 1353
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1331-1353, September 2022.
      Solving evolutionary equations in a parallel-in-time manner is an attractive topic. The iterative algorithm based on the block $\alpha$-circulant preconditioning technique has shown promising advantages, especially for hyperbolic problems. By fast Fourier transform for factorizing the involved circulant matrices, the preconditioned iteration can be computed efficiently via the so-called diagonalization technique, which yields a direct parallel implementation across all time levels. In recent years, considerable efforts have been devoted to exploring the spectral property of the iteration matrix arising from the used time-integrator, which leads to many case-by-case studies. Denoting by $\mathcal{K} $ and $\mathcal{P}_\alpha$ the all-at-once matrix of the evolutionary PDEs and the corresponding block $\alpha$-circulant preconditioner, we will present a systematic spectral analysis for the matrix $\mathcal{P}_\alpha^{-1}\mathcal{K}$ for both the first-order and second-order evolutionary problems. For the first-order problems our analysis works for all stable single-step time-integrators, while for the second-order problems our analysis works for a large class of symmetric two-step methods which could be arbitrarily high-order. Illustrative numerical experiments are presented to complement our theory.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-09T07:00:00Z
      DOI: 10.1137/21M145358X
      Issue No: Vol. 43, No. 3 (2022)
       
  • Strongly Minimal Self-Conjugate Linearizations for Polynomial and Rational
           Matrices

      Authors: Froilán M. Dopico, María C. Quintana, Paul Van Dooren
      Pages: 1354 - 1381
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1354-1381, September 2022.
      We prove that we can always construct strongly minimal linearizations of an arbitrary rational matrix from its Laurent expansion around the point at infinity, which happens to be the case for polynomial matrices expressed in the monomial basis. If the rational matrix has a particular self-conjugate structure, we show how to construct strongly minimal linearizations that preserve it. The structures that are considered are the Hermitian and skew-Hermitian rational matrices with respect to the real line, and the para-Hermitian and para-skew-Hermitian matrices with respect to the imaginary axis. We pay special attention to the construction of strongly minimal linearizations for the particular case of structured polynomial matrices. The proposed constructions lead to efficient numerical algorithms for constructing strongly minimal linearizations. The fact that they are valid for any rational matrix is an improvement on any other previous approach for constructing other classes of structure preserving linearizations, which are not valid for any structured rational or polynomial matrix. The use of the recent concept of strongly minimal linearization is the key for getting such generality. Strongly minimal linearizations are Rosenbrock's polynomial system matrices of the given rational matrix, but with a quadruple of linear polynomial matrices (i.e., pencils): $L(\lambda):=\Big[\begin{array}{ccc} A(\lambda) & -B(\lambda) \\ C(\lambda) & D(\lambda) \end{array}\Big]$, where $A(\lambda)$ is regular, and the pencils $ \left[\begin{array}{ccc} A(\lambda) & -B(\lambda) \end{array}\right]$ and $ \Big[\begin{array}{ccc} A(\lambda) \\ C(\lambda) \end{array}\Big]$ have no finite or infinite eigenvalues. Strongly minimal linearizations contain the complete information about the zeros, poles, and minimal indices of the rational matrix and allow one to very easily recover its eigenvectors and minimal bases. Thus, they can be combined with algorithms for the generalized eigenvalue problem for computing the complete spectral information of the rational matrix.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-16T07:00:00Z
      DOI: 10.1137/21M1453542
      Issue No: Vol. 43, No. 3 (2022)
       
  • Infinite GMRES for Parameterized Linear Systems
          This is an Open Access Article

      Authors: Elias Jarlebring, Siobhán Correnty
      Pages: 1382 - 1405
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1382-1405, September 2022.
      We consider linear parameterized systems $A(\mu) x(\mu) = b$ for many different $\mu$, where $A$ is large and sparse and depends nonlinearly on $\mu$. Solving such systems individually for each $\mu$ would require great computational effort. In this work we propose to compute a partial parameterization $\tilde{x} \approx x(\mu)$, where $\tilde{x}(\mu)$ is cheap to evaluate for many $\mu$. Our methods are based on the observation that a companion linearization can be formed where the dependence on $\mu$ is only linear. In particular, methods are presented that combine the well-established Krylov subspace method for linear systems, GMRES, with algorithms for nonlinear eigenvalue problems (NEPs) to generate a basis for the Krylov subspace. Within this new approach, the basis matrix is constructed in three different ways, using a tensor structure and exploiting that certain problems have low-rank properties. The methods are analyzed analogously to the standard convergence theory for the method GMRES for linear systems. More specifically, the error is estimated based on the magnitude of the parameter $\mu$ and the spectrum of the linear companion matrix, which corresponds to the reciprocal solutions to the corresponding NEP. Numerical experiments illustrate the competitiveness of the methods for large-scale problems. The simulations are reproducible and publicly available online.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-16T07:00:00Z
      DOI: 10.1137/21M1410324
      Issue No: Vol. 43, No. 3 (2022)
       
  • A Note on Inexact Inner Products in GMRES

      Authors: Serge Gratton, Ehouarn Simon, David Titley-Peloquin, Philippe L. Toint
      Pages: 1406 - 1422
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1406-1422, September 2022.
      We show to what extent the accuracy of the inner products computed in the GMRES iterative solver can be reduced as the iterations proceed without affecting the convergence rate or final accuracy achieved by the iterates. We bound the loss of orthogonality in GMRES with inexact inner products. We use this result to bound the ratio of the residual norm in inexact GMRES to the residual norm in exact GMRES and give a condition under which this ratio remains close to 1. We illustrate our results with examples in variable floating-point arithmetic.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-16T07:00:00Z
      DOI: 10.1137/20M1320018
      Issue No: Vol. 43, No. 3 (2022)
       
  • Factorization of Singular Matrix Polynomials and Matrices with Circular
           Higher Rank Numerical Ranges

      Authors: Edward Poon, Ilya M. Spitkovsky, Hugo J. Woerdeman
      Pages: 1423 - 1439
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1423-1439, September 2022.
      Factorizations of regular Hermitian-valued trigonometric polynomials (on the unit circle) and Hermitian-valued polynomials (on the real line) have been well studied. In this paper we drop the regularity condition and study the factorization of singular Hermitian-valued (trigonometric) polynomials. We subsequently apply the results to obtain a characterization of matrices with a circular higher rank numerical range and derive a new version of Anderson's theorem. As a special case, we obtain a new characterization of matrices with a circular numerical range.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-18T07:00:00Z
      DOI: 10.1137/22M1475934
      Issue No: Vol. 43, No. 3 (2022)
       
  • RidgeSketch: A Fast Sketching Based Solver for Large Scale Ridge
           Regression

      Authors: Nidham Gazagnadou, Mark Ibrahim, Robert M. Gower
      Pages: 1440 - 1468
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1440-1468, September 2022.
      We propose new variants of the sketch-and-project method for solving large scale ridge regression problems. First, we propose a new momentum alternative and provide a theorem showing it can speed up the convergence of sketch-and-project, through a fast sublinear convergence rate. We carefully delimit under what settings this new sublinear rate is faster than the previously known linear rate of convergence of sketch-and-project without momentum. Second, we consider combining the sketch-and-project method with new modern sketching methods such as Count sketch, SubCount sketch (a new method we propose), and subsampled Hadamard transforms. We show experimentally that when combined with the sketch-and-project method, the (Sub)Count sketch is very effective on sparse data and the standard Subsample sketch is effective on dense data. Indeed, we show that these sketching methods, combined with our new momentum scheme, result in methods that are competitive even when compared to the conjugate gradient method on real large scale data. On the contrary, we show the subsampled Hadamard transform does not perform well in this setting, despite the use of fast Hadamard transforms, and nor do recently proposed acceleration schemes work well in practice. To support all of our experimental findings, and invite the community to validate and extend our results, with this paper we are also releasing an open source software package: RidgeSketch. We designed this object-oriented package in Python for testing sketch-and-project methods and benchmarking ridge regression solvers. RidgeSketch is highly modular, and new sketching methods can easily be added as subclasses. We provide code snippets of our package in the appendix.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-22T07:00:00Z
      DOI: 10.1137/21M1422963
      Issue No: Vol. 43, No. 3 (2022)
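
      The sketch-and-project iteration at the heart of this entry can be sketched generically; the loop below uses plain subsampling sketches on the ridge normal equations (equivalent to randomized block coordinate descent in the $A$-norm) and is not the RidgeSketch package's API, nor does it include the momentum or Count-sketch variants studied in the paper. Sketch size, iteration count, and data are illustrative assumptions.

      import numpy as np

      def ridge_sketch_and_project(X, y, lam, sketch_size=50, iters=500, rng=None):
          # Solve (X^T X + lam*I) w = X^T y: at each step, enforce the sketched
          # equations S^T A w = S^T b while moving minimally in the A-norm.
          rng = rng or np.random.default_rng()
          A = X.T @ X + lam * np.eye(X.shape[1])
          b = X.T @ y
          w = np.zeros(X.shape[1])
          for _ in range(iters):
              idx = rng.choice(A.shape[0], size=sketch_size, replace=False)   # subsample sketch
              r = b[idx] - A[idx] @ w                                         # sketched residual
              w[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r)
          return w

      rng = np.random.default_rng(7)
      X, y = rng.standard_normal((1000, 300)), rng.standard_normal(1000)
      w = ridge_sketch_and_project(X, y, lam=1.0, rng=rng)
      w_exact = np.linalg.solve(X.T @ X + np.eye(300), X.T @ y)
      print(np.linalg.norm(w - w_exact) / np.linalg.norm(w_exact))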
       
  • On the Condition Number of the Shifted Real Ginibre Ensemble

      Authors: Giorgio Cipolloni, László Erdös, Dominik Schröder
      Pages: 1469 - 1487
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1469-1487, September 2022.
      We derive an accurate lower tail estimate on the lowest singular value $\sigma_1(X-z)$ of a real Gaussian (Ginibre) random matrix $X$ shifted by a complex parameter $z$. Such a shift effectively changes the upper tail behavior of the condition number $\kappa(X-z)$ from the slower $\mathbf{P}(\kappa(X-z)\ge t)\lesssim 1/t$ decay typical for real Ginibre matrices to the faster $1/t^2$ decay seen for complex Ginibre matrices, as long as $z$ is away from the real axis. This sharpens and resolves a recent conjecture in [J. Banks et al., https://arxiv.org/abs/2005.08930, 2020] on the regularizing effect of the real Ginibre ensemble with a genuinely complex shift. As a consequence we obtain an improved upper bound on the eigenvalue condition numbers (known also as the eigenvector overlaps) for real Ginibre matrices. The main technical tool is a rigorous supersymmetric analysis from our earlier work [Probab. Math. Phys., 1 (2020), pp. 101--146].
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-30T07:00:00Z
      DOI: 10.1137/21M1424408
      Issue No: Vol. 43, No. 3 (2022)
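
      A quick Monte Carlo sanity check of the phenomenon described above (not the paper's analysis): the smallest singular value of a shifted real Ginibre matrix tends to stay farther from zero when the shift is genuinely complex than when it is real. Matrix size, shifts, and sample count below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(8)
      n, trials = 100, 200

      def sigma_min(shift):
          # Smallest singular value of X - shift*I for X real Ginibre with variance 1/n.
          X = rng.standard_normal((n, n)) / np.sqrt(n)
          return np.linalg.svd(X - shift * np.eye(n), compute_uv=False)[-1]

      for z in (0.2, 0.2 + 0.2j):
          s = np.array([sigma_min(z) for _ in range(trials)])
          print(f"z = {z}: median = {np.median(s):.2e}, min over {trials} trials = {s.min():.2e}")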
       
  • Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition

      Authors: Brett W. Larsen, Tamara G. Kolda
      Pages: 1488 - 1517
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1488-1517, September 2022.
      The low-rank canonical polyadic tensor decomposition is useful in data analysis and can be computed by solving a sequence of overdetermined least squares subproblems. Motivated by consideration of sparse tensors, we propose sketching each subproblem using leverage scores to select a subset of the rows, with probabilistic guarantees on the solution accuracy. We randomly sample rows proportional to leverage score upper bounds that can be efficiently computed using the special Khatri--Rao subproblem structure inherent in tensor decomposition. Crucially, for a $(d+1)$-way tensor, the number of rows in the sketched system is $O(r^d/\epsilon)$ for a decomposition of rank $r$ and $\epsilon$-accuracy in the least squares solve, independent of both the size and the number of nonzeros in the tensor. Along the way, we provide a practical solution to the generic matrix sketching problem of sampling overabundance for high-leverage-score rows, proposing to include such rows deterministically and combine repeated samples in the sketched system; we conjecture that this can lead to improved theoretical bounds. Numerical results on real-world large-scale tensors show the method is significantly faster than deterministic methods at nearly the same level of accuracy.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-08-30T07:00:00Z
      DOI: 10.1137/21M1441754
      Issue No: Vol. 43, No. 3 (2022)
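
      The leverage-score sketching idea above, stripped of the Khatri--Rao structure that the paper exploits, reduces to the following generic least-squares sketch. Computing exact leverage scores via a thin QR (rather than the paper's efficiently computable upper bounds) and the sample sizes are simplifying assumptions.

      import numpy as np

      def leverage_sampled_lstsq(A, b, num_samples, rng=None):
          # Sample rows with probability proportional to their leverage scores,
          # reweight to keep the sketched problem unbiased, and solve the small problem.
          rng = rng or np.random.default_rng()
          Q, _ = np.linalg.qr(A)                       # leverage score of row i is ||Q[i, :]||^2
          p = np.sum(Q**2, axis=1)
          p /= p.sum()
          idx = rng.choice(A.shape[0], size=num_samples, replace=True, p=p)
          w = 1.0 / np.sqrt(num_samples * p[idx])
          x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
          return x

      rng = np.random.default_rng(9)
      A, b = rng.standard_normal((20_000, 50)), rng.standard_normal(20_000)
      x_sketch = leverage_sampled_lstsq(A, b, 2_000, rng=rng)
      x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))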
       
  • Lipschitz Analysis of Generalized Phase Retrievable Matrix Frames

      Authors: Radu Balan, Chris B. Dock
      Pages: 1518 - 1571
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1518-1571, September 2022.
      The classical phase retrieval problem arises in contexts ranging from speech recognition to x-ray crystallography and quantum state tomography. The generalization to matrix frames is natural in the sense that it corresponds to quantum tomography of impure states. We provide computable global stability bounds for the quasi-linear analysis map $\beta$ and a path forward for understanding related problems in terms of the differential geometry of key spaces. In particular, we manifest a Whitney stratification of the positive semidefinite matrices of low rank which allows us to “stratify” the computation of the global stability bound. We show that for the impure state case no such global stability bounds can be obtained for the nonlinear analysis map $\alpha$ with respect to certain natural distance metrics. Finally, our computation of the global lower Lipschitz constant for the $\beta$ analysis map provides novel conditions for a frame to be generalized phase retrievable.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-09-08T07:00:00Z
      DOI: 10.1137/21M1435446
      Issue No: Vol. 43, No. 3 (2022)
       
  • Geometric Inexact Newton Method for Generalized Singular Values of
           Grassmann Matrix Pair

      Authors: Wei-Wei Xu, Michael K. Ng, Zheng-Jian Bai
      Pages: 535 - 560
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 535-560, June 2022.
      In this paper, we first give new model formulations for computing arbitrary generalized singular value of a Grassmann matrix pair or a real matrix pair. In these new formulations, we need to solve matrix optimization problems with unitary constraints or orthogonal constraints. We propose a geometric inexact Newton--conjugate gradient (Newton-CG) method for solving the resulting matrix optimization problems. Under some mild assumptions, we establish the global and quadratic convergence of the proposed method for the complex case. Some numerical examples are given to illustrate the effectiveness and high accuracy of the proposed method.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-05T07:00:00Z
      DOI: 10.1137/20M1383720
      Issue No: Vol. 43, No. 2 (2022)
       
  • A Boundary-Layer Preconditioner for Singularly Perturbed Convection
           Diffusion

      Authors: Scott P. MacLachlan, Niall Madden, Thái Anh Nhan
      Pages: 561 - 583
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 561-583, June 2022.
      Motivated by a wide range of real-world problems whose solutions exhibit boundary and interior layers, the numerical analysis of discretizations of singularly perturbed differential equations is an established subdiscipline within the study of the numerical approximation of solutions to differential equations. Consequently, much is known about how to accurately and stably discretize such equations on a priori adapted meshes in order to properly resolve the layer structure present in their continuum solutions. However, despite being a key step in the numerical simulation process, much less is known about the efficient and accurate solution of the linear systems of equations corresponding to these discretizations. In this paper, we discuss problems associated with the application of direct solvers to these discretizations, and we propose a preconditioning strategy that is tuned to the matrix structure induced by using layer-adapted meshes for convection-diffusion equations, proving a strong condition-number bound on the preconditioned system in one spatial dimension and a weaker bound in two spatial dimensions. Numerical results confirm the efficiency of the resulting preconditioners in one and two dimensions, with time-to-solution of less than one second for representative problems on 1024 x 1024 meshes and up to 40x speedup over standard sparse direct solvers.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-13T07:00:00Z
      DOI: 10.1137/21M1443297
      Issue No: Vol. 43, No. 2 (2022)
       
  • Theoretical and Computable Optimal Subspace Expansions for Matrix
           Eigenvalue Problems

      Authors: Zhongxiao Jia
      Pages: 584 - 604
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 584-604, June 2022.
      Consider the optimal subspace expansion problem for the matrix eigenvalue problem $Ax=\lambda x$: Which vector $w$ in the current subspace $\mathcal{V}$, after being multiplied by $A$, provides an optimal subspace expansion for approximating a desired eigenvector $x$, in the sense that $x$ has the smallest angle with the expanded subspace $\mathcal{V}_w=\mathcal{V}+\mathrm{span}\{Aw\}$, i.e., $w_{opt}=\arg\max_{w\in\mathcal{V}}\cos\angle(\mathcal{V}_w,x)$? This problem is important as many iterative methods construct nested subspaces that successively expand $\mathcal{V}$ to $\mathcal{V}_w$. An expression of $w_{opt}$ was derived by Ye [Linear Algebra Appl., 428 (2008), pp. 911--918] for general $A$, but it could not be exploited to construct a computable (nearly) optimally expanded subspace. Ye instead derived a maximization characterization of $\cos\angle(\mathcal{V}_w,x)$ for a given $w\in \mathcal{V}$ when $A$ is Hermitian. We generalize Ye's maximization characterization to the general case and find its maximizer. Our main contributions consist of explicit expressions of $w_{opt}$, $(I-P_V)Aw_{opt}$, and the optimally expanded subspace $\mathcal{V}_{w_{opt}}$ for general $A$, where $P_V$ is the orthogonal projector onto $\mathcal{V}$. These results are fully exploited to obtain computable optimally expanded subspaces within the framework of the standard, harmonic, refined, and refined harmonic Rayleigh--Ritz methods. We show how to efficiently implement the proposed subspace expansion approaches. Numerical experiments demonstrate the effectiveness of our computable optimal expansions.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-13T07:00:00Z
      DOI: 10.1137/20M1331032
      Issue No: Vol. 43, No. 2 (2022)
       
  • Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations

      Authors: Jamie Haddock, Deanna Needell, Elizaveta Rebrova, William Swartworth
      Pages: 605 - 637
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 605-637, June 2022.
      Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $Ax = b$ that are inconsistent due to corruptions in the measurement vector $b$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-13T07:00:00Z
      DOI: 10.1137/21M1429187
      Issue No: Vol. 43, No. 2 (2022)
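
      One member of the family of methods described above can be sketched as a quantile-thresholded randomized Kaczmarz loop: an update is applied only when the selected equation's residual is below a quantile of all residuals, so grossly corrupted equations are eventually ignored. The quantile level, corruption model, and iteration count below are illustrative assumptions, and recomputing the full residual every step is done only for simplicity.

      import numpy as np

      def quantile_rk(A, b, q=0.7, iters=20_000, rng=None):
          rng = rng or np.random.default_rng()
          m, n = A.shape
          x = np.zeros(n)
          row_norm2 = np.sum(A**2, axis=1)
          for _ in range(iters):
              thresh = np.quantile(np.abs(A @ x - b), q)   # residual quantile (simplest variant)
              i = rng.integers(m)
              r_i = A[i] @ x - b[i]
              if np.abs(r_i) <= thresh:                    # skip rows that look corrupted
                  x -= (r_i / row_norm2[i]) * A[i]
          return x

      rng = np.random.default_rng(10)
      A = rng.standard_normal((500, 50))
      x_true = rng.standard_normal(50)
      b = A @ x_true
      corrupted = rng.choice(500, size=50, replace=False)
      b[corrupted] += 10.0 * rng.standard_normal(50)       # 10% of the measurements are corrupted
      print(np.linalg.norm(quantile_rk(A, b, rng=rng) - x_true))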
       
  • Mixed Precision Recursive Block Diagonalization for Bivariate Functions of
           Matrices

      Authors: Stefano Massei, Leonardo Robol
      Pages: 638 - 660
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 638-660, June 2022.
      Various numerical linear algebra problems can be formulated as evaluating bivariate functions of matrices. The most notable examples are the Fréchet derivative along a direction, the evaluation of (univariate) functions of Kronecker-sum-structured matrices, and the solution of Sylvester matrix equations. In this work, we propose a recursive block diagonalization algorithm for computing bivariate functions of matrices of small to medium size, for which dense linear algebra is appropriate. The algorithm combines a blocking strategy, as in the Schur--Parlett scheme, and an evaluation procedure for the diagonal blocks. We discuss two implementations of the latter. The first is a natural choice based on Taylor expansions, whereas the second is derivative-free and relies on a multiprecision perturb-and-diagonalize approach. In particular, the appropriate use of multiprecision guarantees backward stability without affecting the efficiency in the generic case. This makes the second approach more robust. The whole method has cubic complexity, and it is closely related to the well-known Bartels--Stewart algorithm for Sylvester matrix equations when applied to $f(x,y)=\frac{1}{x+y}$. We validate the performance of the proposed numerical method on several problems with different conditioning properties.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-19T07:00:00Z
      DOI: 10.1137/21M1407872
      Issue No: Vol. 43, No. 2 (2022)
       
  • A Block Bidiagonalization Method for Fixed-Accuracy Low-Rank Matrix
           Approximation

      Authors: Eric Hallman
      Pages: 661 - 680
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 661-680, June 2022.
      We present randUBV, a randomized algorithm for matrix sketching based on the block Lanczos bidiagonalization process. Given a matrix $\mathbb{A}$, it produces a low-rank approximation of the form $\mathbb{UBV}^T$, where $\mathbb{U}$ and $\mathbb{V}$ have orthonormal columns in exact arithmetic and $\mathbb{B}$ is block bidiagonal. In finite precision, the columns of both $\mathbb{U}$ and $\mathbb{V}$ will be close to orthonormal. Our algorithm is closely related to the randQB algorithms of Yu, Gu, and Li [SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339--1359] in that the entries of $\mathbb{B}$ are incrementally generated and the Frobenius norm approximation error may be efficiently estimated. It is therefore suitable for the fixed-accuracy problem and so is designed to terminate as soon as a user-input error tolerance is reached. Numerical experiments suggest that the block Lanczos method is generally competitive with or superior to algorithms that use power iteration, even when $\mathbb{A}$ has significant clusters of singular values.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-04-27T07:00:00Z
      DOI: 10.1137/21M1397866
      Issue No: Vol. 43, No. 2 (2022)
       
  • Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One
           Eigenvalue Problems

      Authors: Jesse L. Barlow, Stanley C. Eisenstat, Nevena Jakovčević Stor, Ivan Slapnicar
      Pages: 681 - 709
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 681-709, June 2022.
      We discuss the eigenproblem for the symmetric arrowhead matrix $C = \left(\begin{smallmatrix} D & {z} \\ {z}^T & \alpha \end{smallmatrix}\right)$, where $D \in \mathbb{R}^{n \times n}$ is diagonal, ${z} \in \mathbb{R}^n$, and $\alpha \in \mathbb{R}$, in order to examine criteria for when components of ${z}$ may be set to zero. We show that whenever two eigenvalues of $C$ are sufficiently close, some component of ${z}$ may be deflated to zero, without significantly perturbing the eigenvalues of $C$, by either substituting zero for that component or performing a Givens rotation on each side of $C$. The strategy for this deflation requires ${\mathcal{O}(n^2)}$ comparisons. Although it is too costly for many applications, when we use it as a benchmark, we can analyze the effectiveness of ${\mathcal{O}(n)}$ heuristics that are more practical approaches to deflation. We show that one such ${\mathcal{O}(n)}$ heuristic finds all sets of three or more nearby eigenvalues, misses sets of two or more nearby eigenvalues under limited circumstances, and produces a reduced matrix whose eigenvalues are distinct in double the working precision. Using the ${\mathcal{O}(n)}$ heuristic, we develop a more aggressive method for finding converged eigenvalues in the symmetric Lanczos algorithm. It is shown that except for pathological exceptions, the ${\mathcal{O}(n)}$ heuristic finds nearly as much deflation as the ${\mathcal{O}(n^2)}$ algorithm that reduces an arrowhead matrix to one that cannot be deflated further. The deflation algorithms and their analysis are extended to the symmetric diagonal-plus-rank-one eigenvalue problem and lead to a better deflation strategy for the LAPACK routine dstedc.f.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-02T07:00:00Z
      DOI: 10.1137/21M139205X
      Issue No: Vol. 43, No. 2 (2022)
       
  • A Block Minimum Residual Norm Subspace Solver with Partial Convergence
           Management for Sequences of Linear Systems

      Authors: Luc Giraud, Yan-Fei Jing, Yanfei Xiang
      Pages: 710 - 739
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 710-739, June 2022.
      We are concerned with the iterative solution of linear systems with multiple right-hand sides available one group after another with possibly slowly varying left-hand sides. For such sequences of linear systems, we first develop a new block minimum norm residual approach that combines two main ingredients. The first component exploits ideas from GCRO-DR [Parks et al., SIAM J. Sci. Comput., 28 (2006), pp. 1651--1674], enabling us to recycle information from one solve to the next. The second component is the numerical mechanism for managing the partial convergence of the right-hand sides, referred to as inexact breakdown detection in IB-BGMRES [Robbé and Sadkane, Linear Algebra Appl., 419 (2006), pp. 265--285], that enables the monitoring of the rank deficiency in the residual space basis expanded blockwise. Next, for the class of block minimum norm residual approaches that relies on a block Arnoldi-like equality between the search space and the residual space (e.g., any block GMRES or block GCRO variants), we introduce new search space expansion policies defined on novel criteria to detect the partial convergence. These novel detection criteria are tuned to the selected stopping criterion and targeted convergence threshold to best cope with the selected normwise backward error stopping criterion, enabling us to monitor the computational effort while ensuring the final accuracy of each individual solution. Numerical experiments are reported to illustrate the numerical and computational features of both the new block Krylov solvers and the new search space block expansion policies.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-03T07:00:00Z
      DOI: 10.1137/21M1401127
      Issue No: Vol. 43, No. 2 (2022)
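
      A hedged sketch of one ingredient described in the entry above: detecting partial convergence of a block residual through a rank-revealing SVD, in the spirit of inexact breakdown detection. The threshold and the splitting rule are illustrative assumptions, not the authors' exact criteria.

          import numpy as np

          def split_residual_directions(R, tol):
              """Split the residual block R into directions worth expanding (singular
              values above tol) and directions that are treated as converged."""
              U, s, _ = np.linalg.svd(R, full_matrices=False)
              keep = s > tol
              return U[:, keep], U[:, ~keep], s

          rng = np.random.default_rng(1)
          n, p = 50, 4
          R = rng.standard_normal((n, p))
          R[:, 1] *= 1e-12                    # one right-hand side has (nearly) converged
          V_expand, V_conv, s = split_residual_directions(R, tol=1e-8 * np.linalg.norm(R))
          print(V_expand.shape[1], "directions kept,", V_conv.shape[1], "treated as converged")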
       
  • Estimation of Structured Distances to Singularity for Matrix Pencils with
           Symmetry Structures: A Linear Algebra--Based Approach

      Authors: Anshul Prajapati, Punit Sharma
      Pages: 740 - 763
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 740-763, June 2022.
      We study the structured distance to singularity for a given regular matrix pencil $A+sE$, where $(A,E)\in \mathbb S \subseteq (\mathbb C^{n,n})^2$. This includes Hermitian, skew-Hermitian, $*$-even, $*$-odd, $*$-palindromic, T-palindromic, and dissipative Hamiltonian pencils. We present a purely linear algebra-based approach to derive explicit computable formulas for the distance to the nearest structured pencil $(A-\Delta_A)+s(E-\Delta_E)$ such that $A-\Delta_A$ and $E-\Delta_E$ have a common null vector. We then obtain a family of computable lower bounds for the unstructured and structured distances to singularity. Numerical experiments suggest that in many cases, there is a significant difference between structured and unstructured distances. This approach extends to structured matrix polynomials with higher degrees.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-05T07:00:00Z
      DOI: 10.1137/21M1423269
      Issue No: Vol. 43, No. 2 (2022)
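
      A small worked example for the unstructured case discussed in the entry above: the nearest pair $(A-\Delta_A, E-\Delta_E)$ with a common right null vector satisfies $\sqrt{\|\Delta_A\|_F^2+\|\Delta_E\|_F^2} \ge \sigma_{\min}([A;E])$, and the bound is attained by rank-one perturbations built from the minimal right singular vector. This is only the classical unstructured bound; the paper's structured formulas are not reproduced here.

          import numpy as np

          rng = np.random.default_rng(2)
          n = 5
          A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

          stacked = np.vstack([A, E])
          U, s, Vh = np.linalg.svd(stacked)
          x = Vh[-1].conj()                        # unit right singular vector for sigma_min
          DA = A @ np.outer(x, x.conj())           # rank-one perturbations attaining the bound
          DE = E @ np.outer(x, x.conj())

          print(s[-1], np.sqrt(np.linalg.norm(DA, 'fro')**2 + np.linalg.norm(DE, 'fro')**2))
          print(np.linalg.norm((A - DA) @ x), np.linalg.norm((E - DE) @ x))   # both ~ 0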
       
  • Linearizable Eigenvector Nonlinearities

      Authors: Rob Claes, Elias Jarlebring, Karl Meerbergen, Parikshit Upadhyaya
      Pages: 764 - 786
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 764-786, June 2022.
      We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-10T07:00:00Z
      DOI: 10.1137/21M142931X
      Issue No: Vol. 43, No. 2 (2022)
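
      A hedged sketch of the problem class described in the entry above, not the authors' multiparameter linearization: an NEPv $A(v)v=\lambda v$ whose nonlinearity is a quotient of linear functions of the eigenvector, attacked here by a plain self-consistent fixed-point iteration. The matrices, the choice of the smallest eigenpair, and the convergence of this simple iteration are all illustrative assumptions.

          import numpy as np

          rng = np.random.default_rng(3)
          n = 8
          A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
          A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2
          c, d = rng.standard_normal(n), rng.standard_normal(n)

          def A_of(v):
              """A(v) = A0 + (c^T v / d^T v) A1: a quotient of linear functions of v."""
              return A0 + (c @ v) / (d @ v) * A1

          v = rng.standard_normal(n)
          v /= np.linalg.norm(v)
          for _ in range(200):                     # self-consistent (fixed-point) iteration
              _, V = np.linalg.eigh(A_of(v))
              v_new = V[:, 0]                      # track the smallest eigenpair
              if v_new @ v < 0:
                  v_new = -v_new                   # fix the eigenvector sign
              if np.linalg.norm(v_new - v) < 1e-12:
                  v = v_new
                  break
              v = v_new

          lam = v @ (A_of(v) @ v)                  # Rayleigh quotient at the iterate
          print(np.linalg.norm(A_of(v) @ v - lam * v))   # small only if the iteration converged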
       
  • Error Bounds for Lanczos-Based Matrix Function Approximation

      Authors: Tyler Chen, Anne Greenbaum, Cameron Musco, Christopher Musco
      Pages: 787 - 811
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 787-811, June 2022.
      We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(A)b$ when $A$ is a Hermitian matrix and $b$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $A$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $b^H f(A) b$ and demonstrate the effectiveness of our bounds with numerical experiments.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-12T07:00:00Z
      DOI: 10.1137/21M1427784
      Issue No: Vol. 43, No. 2 (2022)
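
      A minimal sketch of the Lanczos-FA approximation analyzed in the entry above: run $k$ Lanczos steps with $(A, b)$ and approximate $f(A)b \approx \|b\|\, Q_k f(T_k) e_1$. The test matrix, the function $f$, and the choice of $k$ are illustrative; the paper's error bounds themselves are not reproduced here.

          import numpy as np
          from scipy.linalg import eigh_tridiagonal

          def lanczos(A, b, k):
              """k steps of Lanczos with full reorthogonalization; returns Q_k and T_k."""
              n = b.size
              Q = np.zeros((n, k))
              alpha, beta = np.zeros(k), np.zeros(k - 1)
              q, q_prev = b / np.linalg.norm(b), np.zeros(n)
              for j in range(k):
                  Q[:, j] = q
                  w = A @ q
                  alpha[j] = q @ w
                  w = w - alpha[j] * q
                  if j > 0:
                      w = w - beta[j - 1] * q_prev
                  w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
                  if j < k - 1:
                      beta[j] = np.linalg.norm(w)
                      q_prev, q = q, w / beta[j]
              return Q, alpha, beta

          def lanczos_fa(A, b, f, k):
              """Approximate f(A) b by ||b|| * Q_k f(T_k) e_1."""
              Q, alpha, beta = lanczos(A, b, k)
              theta, S = eigh_tridiagonal(alpha, beta)          # eigendecomposition of T_k
              return np.linalg.norm(b) * (Q @ (S @ (f(theta) * S[0, :])))

          rng = np.random.default_rng(4)
          n = 200
          A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)   # Hermitian (SPD) test matrix
          b = rng.standard_normal(n)
          approx = lanczos_fa(A, b, np.sqrt, k=30)
          w, V = np.linalg.eigh(A)
          exact = V @ (np.sqrt(w) * (V.T @ b))
          print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))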
       
  • Solving the Cubic Regularization Model by a Nested Restarting Lanczos
           Method

      Authors: Xiaojing Jia, Xin Liang, Chungen Shen, Lei-Hong Zhang
      Pages: 812 - 839
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 812-839, June 2022.
      As a variant of the classical trust-region method for unconstrained optimization, the cubic regularization of the Newton method introduces a cubic regularization term in the surrogate objective to adaptively adjust the updating step, and it handles both indefinite and definite Hessians. It has been demonstrated that the cubic regularization of the Newton method enjoys good global convergence and is an efficient solver for unconstrained minimization. The main computational cost in each iteration is to solve a cubic regularization subproblem. The Newton iteration is a common and efficient method for this task, especially for small- to medium-size problems. For large problems, a Lanczos-type method was proposed in [C. Cartis, N. I. M. Gould, and P. L. Toint, Math. Program., 127 (2011), pp. 245--295]. This method relies on a Lanczos procedure to reduce the large-scale cubic regularization subproblem to a small one and solve it by the Newton iteration. For large and ill-conditioned problems, the Lanczos method still needs to produce a large-dimensional subspace to achieve a sufficiently accurate approximation, which degrades its overall performance. In this paper, we first show that the cubic regularization subproblem can be equivalently transformed into a quadratic eigenvalue problem, which provides an eigensolver alternative to the Newton iteration. We then establish the convergence of the Lanczos method and also propose a nested restarting version for the large-scale and ill-conditioned case. By integrating the nested restarting Lanczos iteration into the cubic regularization of the Newton method, we verify its efficiency for solving large-scale minimization problems in the CUTEst collection.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-23T07:00:00Z
      DOI: 10.1137/21M1436324
      Issue No: Vol. 43, No. 2 (2022)
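
      A small dense-matrix sketch of the cubic regularization subproblem mentioned in the entry above, $\min_s\; g^T s + \tfrac12 s^T H s + \tfrac{\sigma}{3}\|s\|^3$, solved via an eigendecomposition of $H$ and a scalar root find on the secular equation $\|(H+\lambda I)^{-1}g\| = \lambda/\sigma$. This is the classical small-scale approach (the hard case is ignored), not the paper's nested restarting Lanczos method.

          import numpy as np
          from scipy.optimize import brentq

          def cubic_subproblem(H, g, sigma):
              """Minimizer of g^T s + 0.5 s^T H s + (sigma/3)||s||^3 (hard case ignored)."""
              w, V = np.linalg.eigh(H)
              gt = V.T @ g
              def phi(lam):                            # ||s(lam)|| - lam/sigma, strictly decreasing
                  return np.linalg.norm(gt / (w + lam)) - lam / sigma
              lo = max(0.0, -w[0]) + 1e-12             # keep H + lam*I positive definite
              hi = lo + 1.0
              while phi(hi) > 0:                       # bracket the unique root
                  hi *= 2
              lam = brentq(phi, lo, hi)
              return V @ (-gt / (w + lam)), lam

          rng = np.random.default_rng(5)
          n = 20
          H = rng.standard_normal((n, n)); H = (H + H.T) / 2      # possibly indefinite Hessian
          g = rng.standard_normal(n)
          s, lam = cubic_subproblem(H, g, sigma=1.0)
          # First-order optimality: (H + sigma*||s||*I) s + g = 0 with lam = sigma*||s||.
          print(np.linalg.norm((H + lam * np.eye(n)) @ s + g), abs(lam - np.linalg.norm(s)))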
       
  • New Riemannian Preconditioned Algorithms for Tensor Completion via
           Polyadic Decomposition

      Authors: Shuyu Dong, Bin Gao, Yu Guan, François Glineur
      Pages: 840 - 866
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 840-866, June 2022.
      We propose new Riemannian preconditioned algorithms for low-rank tensor completion via the polyadic decomposition of a tensor. These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form. This new metric is designed using an approximation of the diagonal blocks of the Hessian of the tensor completion cost function and thus has a preconditioning effect on these algorithms. We prove that the proposed Riemannian gradient descent algorithm globally converges to a stationary point of the tensor completion problem, with convergence rate estimates using the Łojasiewicz property. Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms. Moreover, the proposed algorithms display a greater tolerance for overestimated rank parameters in terms of the tensor recovery performance and thus enable a flexible choice of the rank parameter.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-05-24T07:00:00Z
      DOI: 10.1137/21M1394734
      Issue No: Vol. 43, No. 2 (2022)
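
      A hedged sketch of the kind of preconditioned step described in the entry above: gradient descent for polyadic (CP) tensor completion in which the gradient with respect to each factor is scaled by a Gram-matrix term, e.g. $(B^T B \ast C^T C + \delta I)^{-1}$, that approximates the corresponding diagonal Hessian block. The step size, $\delta$, the iteration count, and the sampling rate are illustrative assumptions, not the authors' algorithm or its Riemannian convergence analysis.

          import numpy as np
          from scipy.linalg import khatri_rao

          def cp_full(A, B, C):
              """Dense tensor of the polyadic decomposition [[A, B, C]]."""
              return (A @ khatri_rao(B, C).T).reshape(A.shape[0], B.shape[0], C.shape[0])

          rng = np.random.default_rng(6)
          I, J, K, r = 20, 18, 16, 3
          T = cp_full(rng.standard_normal((I, r)), rng.standard_normal((J, r)),
                      rng.standard_normal((K, r)))
          mask = rng.random((I, J, K)) < 0.4                 # observed entries
          A, B, C = (rng.standard_normal((I, r)), rng.standard_normal((J, r)),
                     rng.standard_normal((K, r)))

          step, delta = 0.2, 1e-8
          for _ in range(500):
              R = mask * (cp_full(A, B, C) - T)              # residual on observed entries
              GA = R.reshape(I, -1) @ khatri_rao(B, C)       # Euclidean gradients per factor
              GB = R.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C)
              GC = R.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B)
              PA = np.linalg.inv((B.T @ B) * (C.T @ C) + delta * np.eye(r))   # preconditioners
              PB = np.linalg.inv((A.T @ A) * (C.T @ C) + delta * np.eye(r))
              PC = np.linalg.inv((A.T @ A) * (B.T @ B) + delta * np.eye(r))
              A, B, C = A - step * GA @ PA, B - step * GB @ PB, C - step * GC @ PC

          print(np.linalg.norm(mask * (cp_full(A, B, C) - T)) / np.linalg.norm(mask * T))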
       
  • Maximal Rank of ${m}\times{n}\times 2$ Tensors over Arbitrary Fields

      Authors: Xiaoyu Song, Baodong Zheng, Riguang Huang
      Pages: 867 - 881
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 867-881, June 2022.
      In this paper, we completely characterize the maximal rank of $m\times n\times 2$ tensors over an arbitrary field. Although the maximal tensor rank may in general depend on the field, we find that the maximal rank of a given $m\times n\times 2$ tensor is the same over every field except $\mathbb{F}_2$. We therefore treat two cases: the field $\mathbb{F}_2$, and all other fields.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-06T07:00:00Z
      DOI: 10.1137/21M1431424
      Issue No: Vol. 43, No. 2 (2022)
       
  • Simultaneous Diagonalization via Congruence of Hermitian Matrices: Some
           Equivalent Conditions and a Numerical Solution

      Authors: Thanh Hieu Le, Thi Ngan Nguyen
      Pages: 882 - 911
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 882-911, June 2022.
      This paper aims at solving the Hermitian SDC problem, that is, the simultaneous diagonalization via $*$-congruence of a finite collection of Hermitian matrices. The matrices need not pairwise commute. We first provide some equivalent conditions for such a matrix collection to be simultaneously diagonalizable via congruence. Interestingly, one of these conditions requires a positive definite solution to an appropriate system of linear equations over Hermitian matrices. Based on this theoretical result, we propose a polynomial-time algorithm for numerically solving the Hermitian SDC problem. The proposed algorithm is a combination of (1) a detection of whether the initial matrix collection is simultaneously diagonalizable via congruence, by solving an appropriate semidefinite program, and (2) an existing Jacobi-like algorithm for simultaneously diagonalizing (via congruence) the new collection of commuting Hermitian matrices derived from the previous stage. Illustrative examples and numerical tests with MATLAB are also presented. As a consequence, a solvability condition for the SDC problem for arbitrary square matrices is obtained by dealing with their Hermitian and skew-Hermitian parts.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-07T07:00:00Z
      DOI: 10.1137/21M1390657
      Issue No: Vol. 43, No. 2 (2022)
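
      A hedged sketch of a classical special case of the problem in the entry above: when one matrix of the collection is positive definite, a pair of Hermitian matrices can be simultaneously diagonalized by congruence using a Cholesky factor and a single eigendecomposition. The general indefinite, many-matrix case is what the paper's SDP-based algorithm addresses.

          import numpy as np

          rng = np.random.default_rng(7)
          n = 5
          M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          A1 = M @ M.conj().T + np.eye(n)                 # Hermitian positive definite
          A2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          A2 = (A2 + A2.conj().T) / 2                     # Hermitian, need not commute with A1

          L = np.linalg.cholesky(A1)                      # A1 = L L^*
          Linv = np.linalg.inv(L)
          _, U = np.linalg.eigh(Linv @ A2 @ Linv.conj().T)
          X = Linv.conj().T @ U                           # congruence transformation

          for A in (A1, A2):
              D = X.conj().T @ A @ X
              print(np.linalg.norm(D - np.diag(np.diag(D))))   # both ~ 0: simultaneously diagonal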
       
  • $(L_r,L_r,1)$-Decompositions, Sparse Component Analysis, and the Blind
           Separation of Sums of Exponentials

      Authors: Nithin Govindarajan, Ethan N. Epperly, Lieven De Lathauwer
      Pages: 912 - 938
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 912-938, June 2022.
      We derive new uniqueness results for $(L_r,L_r,1)$-type block-term decompositions of third-order tensors by drawing connections to sparse component analysis. It is shown that our uniqueness results have a natural application in the context of the blind source separation problem, since they ensure uniqueness even among $(L_r,L_r,1)$-decompositions with incomparable rank profiles, allowing for stronger separation results for signals consisting of sums of exponentials in the presence of common poles among the source signals. As a byproduct, this line of ideas also suggests a new approach for computing $(L_r,L_r,1)$-decompositions, which proceeds by sequentially computing a canonical polyadic decomposition of the input tensor, followed by performing a sparse factorization on the third factor matrix.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-13T07:00:00Z
      DOI: 10.1137/21M1426444
      Issue No: Vol. 43, No. 2 (2022)
       
  • Sketching with Kerdock's Crayons: Fast Sparsifying Transforms for
           Arbitrary Linear Maps

      Authors: Tim Fuchs, David Gross, Felix Krahmer, Richard Kueng, Dustin Mixon
      Pages: 939 - 952
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 939-952, June 2022.
      Given an arbitrary matrix $A\in\mathbb{R}^{m\times n}$, we consider the fundamental problem of computing $Ax$ for any $x\in\mathbb{R}^n$ such that $Ax$ is $s$-sparse. While fast algorithms exist for particular choices of $A$, such as the discrete Fourier transform, there are hardly any approaches that beat standard matrix-vector multiplication for realistic problem dimensions without such structural assumptions. In this paper, we devise a randomized approach to tackle the unstructured case. Our method relies on a representation of $A$ in terms of certain real-valued mutually unbiased bases derived from Kerdock sets. In the preprocessing phase of our algorithm, we compute this representation of $A$ in $O(mn^2\log n + n^2 \log^2 n)$ operations. Next, given any unit vector $x\in\mathbb{R}^n$ such that $Ax$ is $s$-sparse, our randomized fast transform uses this representation of $A$ to compute the entrywise $\epsilon$-hard threshold of $Ax$ with high probability in only $O(sn + \epsilon^{-2}\|A\|_{2\to\infty}^2 (m+n)\log m)$ operations. In addition to a performance guarantee, we provide numerical results that demonstrate the plausibility of real-world implementation of our algorithm.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-21T07:00:00Z
      DOI: 10.1137/21M1438992
      Issue No: Vol. 43, No. 2 (2022)
       
  • Computing the Riemannian Logarithm on the Stiefel Manifold: Metrics,
           Methods, and Performance

      Authors: Ralf Zimmermann, Knut Hüper
      Pages: 953 - 980
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 953-980, June 2022.
      We address the problem of computing Riemannian normal coordinates on the real, compact Stiefel manifold of orthonormal frames. The Riemannian normal coordinates are based on the so-called Riemannian exponential and the associated Riemannian logarithm map and enable one to transfer almost any computational procedure to the realm of the Stiefel manifold. To compute the Riemannian logarithm is to solve the (local) geodesic endpoint problem. Instead of restricting the consideration to geodesics with respect to a single selected metric, we consider a family of Riemannian metrics introduced by Hüper, Markina, and Silva Leite that includes the Euclidean and the canonical metric as prominent examples. As main contributions, we provide (1) a unified, structured, reduced formula for the Stiefel geodesics. The formula is unified in the sense that it works for the full family of metrics under consideration. It is structured in the sense that it relies exclusively on matrix exponentials of skew-symmetric matrices. It is reduced in the sense that the matrices whose exponentials must be computed are of reduced dimension. We provide (2) a unified method to tackle the geodesic endpoint problem numerically, and (3) we improve the computational efficiency of the existing Riemannian logarithm algorithm under the canonical metric. The findings are illustrated by means of numerical examples, where the novel algorithms prove to be the most efficient methods known to date.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-24T07:00:00Z
      DOI: 10.1137/21M1425426
      Issue No: Vol. 43, No. 2 (2022)
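
      A minimal sketch related to the setting of the entry above: the Riemannian exponential (geodesic) on the Stiefel manifold under the canonical metric, computed from the matrix exponential of a small skew-symmetric matrix in the style of Edelman, Arias, and Smith. The paper's unified formula for the whole metric family and its logarithm algorithms are not reproduced here.

          import numpy as np
          from scipy.linalg import expm, qr

          rng = np.random.default_rng(8)
          n, p = 10, 3
          X, _ = qr(rng.standard_normal((n, p)), mode='economic')      # a point on St(n, p)

          W = rng.standard_normal((n, p))                               # project W to the tangent space
          A = X.T @ W
          A = (A - A.T) / 2                                             # skew-symmetric part
          Delta = X @ A + (np.eye(n) - X @ X.T) @ W                     # tangent: X^T Delta is skew

          Q, R = qr((np.eye(n) - X @ X.T) @ Delta, mode='economic')     # normal component
          block = np.block([[A, -R.T], [R, np.zeros((p, p))]])          # 2p x 2p skew-symmetric matrix
          Y = np.hstack([X, Q]) @ expm(block)[:, :p]                    # Exp_X(Delta), canonical metric

          print(np.linalg.norm(Y.T @ Y - np.eye(p)))                    # ~ 0: Y stays on the manifold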
       
  • On the Change of the Weyr Characteristics of Matrix Pencils After Rank-One
           Perturbations

      Authors: Itziar Baragaña, Alicia Roca
      Pages: 981 - 1002
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 981-1002, June 2022.
      The change of the Kronecker structure of a matrix pencil perturbed by another pencil of rank one has been characterized in terms of the homogeneous invariant factors and the chains of column and row minimal indices of the initial and the perturbed pencils. We obtain here a new characterization in terms of the homogeneous invariant factors and the conjugate partitions of the corresponding chains of column and row minimal indices of both pencils. We also define the generalized Weyr characteristic of an arbitrary matrix pencil and obtain bounds for its change when the pencil is perturbed by another pencil of rank one. The results improve known results on the problem and hold for arbitrary rank-one perturbation pencils and for any algebraically closed field.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-28T07:00:00Z
      DOI: 10.1137/21M1416497
      Issue No: Vol. 43, No. 2 (2022)
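
      A small illustration of the (constant-matrix) Weyr characteristic referenced in the entry above: at an eigenvalue $\lambda$ of $A$, $w_k = \dim\ker(A-\lambda I)^k - \dim\ker(A-\lambda I)^{k-1}$, the conjugate partition of the Jordan block sizes. The generalized Weyr characteristic of a pencil and the rank-one perturbation bounds are the subject of the paper and are not reproduced here.

          import numpy as np

          def weyr_characteristic(A, lam, tol=1e-8):
              """Weyr characteristic of A at the eigenvalue lam, via nullities of powers."""
              n = A.shape[0]
              N = A - lam * np.eye(n)
              w, P, prev_nullity = [], np.eye(n), 0
              for _ in range(n):
                  P = P @ N
                  nullity = n - np.linalg.matrix_rank(P, tol=tol)
                  if nullity == prev_nullity:
                      break
                  w.append(nullity - prev_nullity)
                  prev_nullity = nullity
              return w

          # Nilpotent matrix with Jordan blocks of sizes 3 and 1 at the eigenvalue 0:
          J = np.zeros((4, 4))
          J[0, 1] = J[1, 2] = 1.0
          print(weyr_characteristic(J, 0.0))       # [2, 1, 1], the conjugate of (3, 1)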
       
  • Fast Deterministic Approximation of Symmetric Indefinite Kernel Matrices
           with High Dimensional Datasets

      Authors: Difeng Cai, James Nagy, Yuanzhe Xi
      Pages: 1003 - 1028
      Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 1003-1028, June 2022.
      Kernel methods are used frequently in various applications of machine learning. For large-scale, high dimensional applications, the success of kernel methods hinges on the ability to operate with a certain large, dense kernel matrix $K$. An enormous amount of literature has been devoted to the study of symmetric positive semidefinite (SPSD) kernels, where Nyström methods compute a low-rank approximation to the kernel matrix by choosing landmark points. In this paper, we study the Nyström method for approximating both symmetric indefinite kernel matrices as well as SPSD ones. We first develop a theoretical framework for general symmetric kernel matrices, which provides theoretical guidance for the selection of landmark points. We then leverage discrepancy theory to propose the anchor net method for computing accurate Nyström approximations with optimal complexity. The anchor net method operates entirely on the dataset without requiring access to $K$ or its matrix-vector product. Results on various types of kernels (both indefinite and SPSD) and machine learning datasets demonstrate that the new method achieves better accuracy and stability with lower computational cost compared to state-of-the-art Nyström methods.
      Citation: SIAM Journal on Matrix Analysis and Applications
      PubDate: 2022-06-28T07:00:00Z
      DOI: 10.1137/21M1424627
      Issue No: Vol. 43, No. 2 (2022)
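
      A hedged sketch of the generic Nyström approximation discussed in the entry above, $K \approx K[:,S]\, K[S,S]^{+}\, K[S,:]$, applied to a symmetric indefinite kernel. Landmarks are drawn uniformly at random here purely for illustration; the paper's anchor net method instead selects them via discrepancy theory and never forms $K$.

          import numpy as np

          rng = np.random.default_rng(9)
          n, d, m = 1000, 5, 60
          X = rng.standard_normal((n, d))

          def sine_kernel(X, Y):
              """A simple symmetric indefinite kernel: k(x, y) = sin(x^T y)."""
              return np.sin(X @ Y.T)

          S = rng.choice(n, size=m, replace=False)     # uniform landmark selection (illustrative)
          C = sine_kernel(X, X[S])                     # K[:, S]
          W = sine_kernel(X[S], X[S])                  # K[S, S]
          K_nys = C @ np.linalg.pinv(W) @ C.T          # Nystrom approximation

          K = sine_kernel(X, X)                        # dense kernel, only for the error check
          print(np.linalg.norm(K - K_nys) / np.linalg.norm(K))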
       
 