SIAM Journal on Matrix Analysis and Applications
Journal Prestige (SJR): 1.739 | CiteScore: 2 | Number of Followers: 3 | Hybrid journal (containing 1 Open Access article in this issue) | ISSN (Print): 0895-4798 | ISSN (Online): 1095-7162 | Published by the Society for Industrial and Applied Mathematics [17 journals]
 Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under
Noisy Separability
Authors: Tomohiko Mizutani
Pages: 1029–1057
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1029–1057, September 2022.
Hottopixx, proposed by Bittorf et al. at NIPS 2012, is an algorithm for solving nonnegative matrix factorization (NMF) problems under the separability assumption. Separable NMFs have important applications, such as topic extraction from documents and unmixing of hyperspectral images. In such applications, the robustness of the algorithm to noise is the key to success. Hottopixx has been shown to be robust to noise, and its robustness can be further enhanced through postprocessing. However, there is a drawback. Hottopixx and its postprocessing require us to estimate the noise level involved in the matrix we want to factorize before running, since they use it as part of the input data. The noise-level estimation is not an easy task. In this paper, we overcome this drawback. We present a refinement of Hottopixx and its postprocessing that runs without prior knowledge of the noise level. We show that the refinement has almost the same robustness to noise as the original algorithm.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-07T07:00:00Z
DOI: 10.1137/21M1442206
Issue No: Vol. 43, No. 3 (2022)
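The separability assumption can be made concrete with a small sketch. The snippet below is not Hottopixx itself (which is based on a linear programming model); it implements the simpler successive projection algorithm (SPA), a standard baseline for separable NMF, to illustrate what identifying the "vertex" columns of a separable matrix means. The function name `spa` and the toy data are illustrative.

```python
import numpy as np

def spa(M, r):
    """Successive projection algorithm: greedily pick r columns of M
    whose convex hull approximately contains all columns (separable
    NMF). Returns the selected column indices."""
    R = M.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # extreme column
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)  # project out the chosen direction
    return idx

# Noiseless separable example: M = W H where H contains the identity,
# so columns 0..2 of M are exactly the vertices W.
rng = np.random.default_rng(0)
W = rng.random((5, 3))
H = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=4).T])
M = W @ H
print(sorted(spa(M, 3)))  # recovers the vertex columns -> [0, 1, 2]
```

In the noisy setting studied in the paper, the interesting question is how much perturbation such index-selection schemes tolerate; Hottopixx trades the greedy selection for an optimization model to improve that robustness.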

 Stable and Efficient Computation of Generalized Polar Decompositions

Authors: Peter Benner, Yuji Nakatsukasa, Carolin Penke
Pages: 1058–1083
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1058–1083, September 2022.
We present methods for computing the generalized polar decomposition of a matrix based on the dynamically weighted Halley iteration. This method is well established for computing the standard polar decomposition. A stable implementation is available, where matrix inversion is avoided and QR decompositions are used instead. We establish a natural generalization of this approach for computing generalized polar decompositions with respect to signature matrices. Again the inverse can be avoided by using a generalized QR decomposition called hyperbolic QR decomposition. However, this decomposition does not show the same favorable stability properties as its orthogonal counterpart. We overcome the numerical difficulties by generalizing the CholeskyQR2 method. This method computes the standard QR factorization in a stable way via two successive Cholesky factorizations. An even better numerical stability is achieved by employing permuted graph bases, yielding residuals of order $10^{-14}$ even for badly conditioned matrices, where other methods fail.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-11T07:00:00Z
DOI: 10.1137/21M1411986
Issue No: Vol. 43, No. 3 (2022)
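The CholeskyQR2 idea mentioned in the abstract (a stable QR factorization via two successive Cholesky factorizations) can be sketched for the standard, unsigned case as follows. The name `cholesky_qr2` is illustrative, and the sketch assumes a full-column-rank, not-too-ill-conditioned input; the paper's contribution is the generalization to hyperbolic/signature settings, which this snippet does not cover.

```python
import numpy as np

def cholesky_qr2(A):
    """QR factorization via two Cholesky passes (CholeskyQR2).
    The first pass loses orthogonality proportional to cond(A)^2;
    the second pass repairs it."""
    R1 = np.linalg.cholesky(A.T @ A).T   # upper-triangular, A^T A = R1^T R1
    Q1 = np.linalg.solve(R1.T, A.T).T    # Q1 = A R1^{-1}
    R2 = np.linalg.cholesky(Q1.T @ Q1).T # second, corrective pass
    Q = np.linalg.solve(R2.T, Q1.T).T    # Q = Q1 R2^{-1}
    return Q, R2 @ R1                    # A = Q (R2 R1)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
Q, R = cholesky_qr2(A)
print(np.linalg.norm(Q.T @ Q - np.eye(10)))  # close to machine precision
```

The appeal of this scheme is that it uses only matrix products and triangular solves, which are communication-efficient building blocks.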

 An Algebraic Quantum Circuit Compression Algorithm for Hamiltonian
Simulation
Authors: Daan Camps, Efekan Kökcü, Lindsay Bassman, Wibe A. de Jong, Alexander F. Kemper, Roel Van Beeumen
Pages: 1084–1108
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1084–1108, September 2022.
Quantum computing is a promising technology that harnesses the peculiarities of quantum mechanics to deliver computational speedups for some problems that are intractable to solve on a classical computer. Current generation noisy intermediate-scale quantum (NISQ) computers are severely limited in terms of chip size and error rates. Shallow quantum circuits with uncomplicated topologies are essential for successful applications in the NISQ era. Based on matrix analysis, we derive localized circuit transformations to efficiently compress quantum circuits for simulation of certain spin Hamiltonians known as free fermions. The depth of the compressed circuits is independent of simulation time and grows linearly with the number of spins. The proposed numerical circuit compression algorithm is backward stable and scales cubically in the number of spins, enabling circuit synthesis beyond $\mathcal{O}(10^3)$ spins. The resulting quantum circuits have a simple nearest-neighbor topology, which makes them ideally suited for NISQ devices.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-11T07:00:00Z
DOI: 10.1137/21M1439298
Issue No: Vol. 43, No. 3 (2022)

 Analytical Singular Value Decomposition for a Class of Stoichiometry
Matrices
Authors: Jacqueline Wentz, Jeffrey C. Cameron, David M. Bortz
Pages: 1109–1147
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1109–1147, September 2022.
We present the analytical singular value decomposition of the stoichiometry matrix for a spatially discrete reaction-diffusion system. The motivation for this work is to develop a matrix decomposition that can reveal hidden spatial flux patterns of chemical reactions. We consider a 1D domain with two subregions sharing a single common boundary. Each of the subregions is further partitioned into a finite number of compartments. Chemical reactions can occur within a compartment, whereas diffusion is represented as movement between adjacent compartments. Inspired by biology, we study both (1) the case where the reactions on each side of the boundary are different and only certain species diffuse across the boundary and (2) the case where reactions and diffusion are spatially homogeneous. We write the stoichiometry matrix for these two classes of systems using a Kronecker product formulation. For the first scenario, we apply linear perturbation theory to derive an approximate singular value decomposition in the limit as diffusion becomes much faster than reactions. For the second scenario, we derive an exact analytical singular value decomposition for all relative diffusion and reaction time scales. By writing the stoichiometry matrix using Kronecker products, we show that the singular vectors and values can also be written concisely using Kronecker products. Ultimately, we find that the singular value decomposition of the reaction-diffusion stoichiometry matrix depends on the singular value decompositions of smaller matrices. These smaller matrices represent modified versions of the reaction-only stoichiometry matrices and the analytically known diffusion-only stoichiometry matrix. Lastly, we present the singular value decomposition of the model for the Calvin cycle in cyanobacteria and demonstrate the accuracy of our formulation.
The MATLAB code, available at www.github.com/MathBioCU/ReacDiffStoicSVD, provides routines for efficiently calculating the SVD for a given reaction network on a 1D spatial domain.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-13T07:00:00Z
DOI: 10.1137/21M1418927
Issue No: Vol. 43, No. 3 (2022)
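The structural fact the abstract exploits (Kronecker-product matrices have Kronecker-structured SVDs) can be checked numerically: the singular values of $A \otimes B$ are exactly the pairwise products of the singular values of $A$ and $B$, since $A \otimes B = (U_A \otimes U_B)(\Sigma_A \otimes \Sigma_B)(V_A \otimes V_B)^T$. A minimal sanity check on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 5))

# Singular values of a Kronecker product are all pairwise products
# of the factors' singular values (here 3 * 2 = 6 of them).
s_kron = np.linalg.svd(np.kron(A, B), compute_uv=False)
s_pair = np.sort(np.outer(np.linalg.svd(A, compute_uv=False),
                          np.linalg.svd(B, compute_uv=False)).ravel())[::-1]
print(np.allclose(s_kron, s_pair))  # True
```

This is why the SVD of the full reaction-diffusion stoichiometry matrix can be reduced to SVDs of much smaller factor matrices, as the abstract states.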

 Takagi Factorization of Matrices Depending on Parameters and Locating
Degeneracies of Singular Values
Authors: Luca Dieci, Alessandra Papini, Alessandro Pugliese
Pages: 1148–1161
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1148–1161, September 2022.
In this work, we consider the Takagi factorization of a matrix-valued function depending on parameters. We give smoothness and genericity results and pay particular attention to the concerns caused by having either a singular value equal to 0 or multiple singular values. For these phenomena, we give theoretical results showing that their codimension is 2, and we further develop and test numerical methods to locate in parameter space the values where these occurrences take place. A numerical study of the density of these occurrences is also performed.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-13T07:00:00Z
DOI: 10.1137/21M1456273
Issue No: Vol. 43, No. 3 (2022)

 Improved Variants of the Hutch++ Algorithm for Trace Estimation

Authors: David Persson, Alice Cortinovis, Daniel Kressner
Pages: 1162–1185
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1162–1185, September 2022.
This paper is concerned with two improved variants of the Hutch++ algorithm for estimating the trace of a square matrix, implicitly given through matrix-vector products. Hutch++ combines randomized low-rank approximation in a first phase with stochastic trace estimation in a second phase. In turn, Hutch++ only requires $O(\varepsilon^{-1})$ matrix-vector products to approximate the trace within a relative error $\varepsilon$ with high probability, provided that the matrix is symmetric positive semidefinite. This compares favorably with the $O(\varepsilon^{-2})$ matrix-vector products needed when using stochastic trace estimation alone. In Hutch++, the number of matrix-vector products is fixed a priori and distributed in a prescribed fashion between the two phases. In this work, we derive an adaptive variant of Hutch++, which outputs an estimate of the trace that is within some prescribed error tolerance with a controllable failure probability, while splitting the matrix-vector products in a near-optimal way between the two phases. For the special case of a symmetric positive semidefinite matrix, we present another variant of Hutch++, called Nyström++, which utilizes the so-called Nyström approximation and requires only one pass over the matrix, as compared to two passes with Hutch++. We extend the analysis of Hutch++ to Nyström++. Numerical experiments demonstrate the effectiveness of our two new algorithms.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-13T07:00:00Z
DOI: 10.1137/21M1447623
Issue No: Vol. 43, No. 3 (2022)
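The two-phase template described in the abstract can be sketched in a few lines: a low-rank sketch captures the dominant spectrum exactly, and a Hutchinson estimate handles the residual trace. This is the basic Hutch++ recipe with the matrix-vector budget split evenly, not the paper's adaptive or Nyström++ variants; `hutchpp` is an illustrative name.

```python
import numpy as np

def hutchpp(matvec, n, m, rng):
    """Basic Hutch++ trace estimate using m matrix-vector products
    (m divisible by 3), for a symmetric matrix given only via matvec."""
    k = m // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))   # sketching vectors
    G = rng.choice([-1.0, 1.0], size=(n, k))   # Hutchinson test vectors
    Q, _ = np.linalg.qr(matvec(S))             # range sketch: k matvecs
    t1 = np.trace(Q.T @ matvec(Q))             # exact trace on captured subspace
    Gp = G - Q @ (Q.T @ G)                     # deflate the test vectors
    t2 = np.trace(Gp.T @ matvec(Gp)) / k       # Hutchinson on the residual
    return t1 + t2

rng = np.random.default_rng(3)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T                                    # symmetric positive semidefinite
est = hutchpp(lambda X: A @ X, n, 60, rng)
print(abs(est - np.trace(A)) / np.trace(A))    # small relative error
```

The deflation step is the source of the improved $O(\varepsilon^{-1})$ rate: for SPSD matrices, the residual after removing a near-best rank-$k$ piece has a small trace-to-Frobenius ratio, which tames the Hutchinson variance.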

 Matrix Pencils with Coefficients That Have Positive Semidefinite Hermitian
Parts
Authors: C. Mehl, V. Mehrmann, M. Wojtylak
Pages: 1186–1212
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1186–1212, September 2022.
We analyze when an arbitrary matrix pencil is strictly equivalent to a dissipative Hamiltonian pencil and show that this heavily restricts the spectral properties. In order to relax the spectral properties, we introduce matrix pencils with coefficients that have positive semidefinite Hermitian parts. We relate the Kronecker structure of these pencils to that of an underlying skew-Hermitian pencil and discuss their regularity, index, numerical range, and location of eigenvalues. Further, we study matrix polynomials with positive semidefinite Hermitian coefficients and use linearizations with positive semidefinite Hermitian parts to derive sufficient conditions for having a spectrum in the left half-plane and to derive bounds on the index.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-14T07:00:00Z
DOI: 10.1137/21M1439997
Issue No: Vol. 43, No. 3 (2022)

 Diagonal Scalings for the Eigenstructure of Arbitrary Pencils

Authors: Froilán M. Dopico, María C. Quintana, Paul Van Dooren
Pages: 1213–1237
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1213–1237, September 2022.
In this paper we show how to construct diagonal scalings for arbitrary matrix pencils $\lambda B - A$, in which both $A$ and $B$ are complex matrices (square or nonsquare). The goal of such diagonal scalings is to “balance” in some sense the row and column norms of the pencil. We see that the problem of scaling a matrix pencil is equivalent to the problem of scaling the row and column sums of a particular nonnegative matrix. However, it is known that there exist square and nonsquare nonnegative matrices that cannot be scaled arbitrarily. To address this issue, we consider an approximate embedded problem, in which the corresponding nonnegative matrix is square and can always be scaled. The new scaling methods are then based on the Sinkhorn-Knopp algorithm for scaling a square nonnegative matrix with total support to be doubly stochastic, or on a variant of it. In addition, using results of U. G. Rothblum and H. Schneider [Linear Algebra Appl., 114-115 (1989), pp. 737-764], we give simple sufficient conditions on the zero pattern for the existence of diagonal scalings of square nonnegative matrices to have any prescribed common vector for the row and column sums. We illustrate numerically that the new scaling techniques for pencils improve the accuracy of the computation of their eigenvalues.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-07-18T07:00:00Z
DOI: 10.1137/20M1364011
Issue No: Vol. 43, No. 3 (2022)
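The Sinkhorn-Knopp iteration underlying the proposed scalings simply alternates row and column normalizations. A minimal sketch for a strictly positive matrix (where convergence to a doubly stochastic scaling $D_1 A D_2$ is guaranteed); the function name is illustrative:

```python
import numpy as np

def sinkhorn_knopp(A, iters=1000):
    """Alternately rescale rows and columns of a nonnegative square
    matrix; for a matrix with total support this converges to a
    doubly stochastic D1 @ A @ D2."""
    r = np.ones(A.shape[0])
    for _ in range(iters):
        c = 1.0 / (A.T @ r)   # make column sums one
        r = 1.0 / (A @ c)     # make row sums one
    return np.diag(r) @ A @ np.diag(c)

rng = np.random.default_rng(4)
A = rng.random((5, 5)) + 0.1          # strictly positive, so scalable
S = sinkhorn_knopp(A)
print(S.sum(axis=0), S.sum(axis=1))   # both approximately all ones
```

The paper's point is that pencil scaling reduces to this kind of row/column-sum scaling of an associated nonnegative matrix, with an embedding trick to handle the matrices that cannot be scaled directly.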

 Convergence to Singular Triplets in the Two-Sided Block-Jacobi SVD
Algorithm with Dynamic Ordering
Authors: Gabriel Okša, Yusaku Yamamoto, Marián Vajteršic
Pages: 1238–1262
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1238–1262, September 2022.
We study the convergence of computed quantities to singular triplets in the serial and parallel block-Jacobi singular value decomposition (SVD) algorithm with dynamic ordering. After eliminating possible zero singular values by two finite decompositions of a matrix $A\in\mathbb{C}^{m\times n},\, m\geq n$, which reduce the matrix dimensions to $n\times n$, it is shown that an iterated nonsingular matrix $A^{(k)}$ converges to a fixed diagonal matrix whose diagonal elements are the singular values of the initial matrix $A$. For the case of simple singular values, it is proved that the corresponding columns of the matrices of accumulated unitary transformations converge to the corresponding left and right singular vectors. When a multiple singular value (or a cluster of singular values) is well separated from the other singular values, the convergence of two sequences of appropriate orthogonal projectors towards the orthogonal projectors onto the corresponding left and right subspaces is proved. Additionally, the convergence of orthogonal projectors leads to the convergence of certain computed subspaces towards the left and right singular subspaces spanned by the left and right singular vectors corresponding to a multiple singular value or a cluster. An example computed in MATLAB illustrates the developed theory.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-01T07:00:00Z
DOI: 10.1137/21M1411895
Issue No: Vol. 43, No. 3 (2022)

 A Theory of Quantum Subspace Diagonalization

Authors: Ethan N. Epperly, Lin Lin, Yuji Nakatsukasa
Pages: 1263–1290
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1263–1290, September 2022.
Quantum subspace diagonalization methods are an exciting new class of algorithms for solving large-scale eigenvalue problems using quantum computers. Unfortunately, these methods require the solution of an ill-conditioned generalized eigenvalue problem, with a matrix pencil corrupted by a nonnegligible amount of noise that is far above the machine precision. Despite pessimistic predictions from classical worst-case perturbation theories, these methods can perform reliably well if the generalized eigenvalue problem is solved using a standard truncation strategy. By leveraging and advancing classical results in matrix perturbation theory, we provide a theoretical analysis of this surprising phenomenon, proving that under certain natural conditions, a quantum subspace diagonalization algorithm can accurately compute the smallest eigenvalue of a large Hermitian matrix. We give numerical experiments demonstrating the effectiveness of the theory and providing practical guidance for the choice of truncation level. Our new results can also be of independent interest for solving eigenvalue problems outside the context of quantum computation.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-01T07:00:00Z
DOI: 10.1137/21M145954X
Issue No: Vol. 43, No. 3 (2022)

 Subspaces Analysis for Random Projection UTV Framework

Authors: Yuan Jian
Pages: 1291–1307
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1291–1307, September 2022.
The UTV decompositions are promising and computationally efficient alternatives to the singular value decomposition (SVD), which can provide high-quality information about rank, range, and null space. However, for large-scale matrices, even more computationally efficient algorithms are desirable. Recently, randomized algorithms, with their surprising reliability and computational efficiency, have become increasingly popular in many application areas. In this paper, we analyze the subspace properties of a random projection UTV framework and give bounds on the subspace distances between the exact and the approximate singular subspaces, where the approximate methods include random projection ULV, random projection URV, and random projection SVD. Numerical experiments demonstrate the effectiveness of the proposed bounds.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-04T07:00:00Z
DOI: 10.1137/21M1451592
Issue No: Vol. 43, No. 3 (2022)

 Cross-Interactive Residual Smoothing for Global and Block Lanczos-Type
Solvers for Linear Systems with Multiple Right-Hand Sides
Authors: Kensuke Aihara, Akira Imakura, Keiichi Morikuni
Pages: 1308–1330
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1308–1330, September 2022.
Global and block Krylov subspace methods are efficient iterative solvers for large sparse linear systems with multiple right-hand sides. However, global or block Lanczos-type solvers often exhibit large oscillations in the residual norms and may have a large residual gap relating to the loss of attainable accuracy of the approximations. Conventional residual smoothing schemes suppress these oscillations but cannot improve the attainable accuracy, whereas a recent residual smoothing scheme enables the improvement of the attainable accuracy for single right-hand-side Lanczos-type solvers. The underlying concept of this scheme is that the primary and smoothed sequences of the approximations and residuals influence one another, thereby avoiding the severe propagation of rounding errors. In the present study, we extend this cross-interactive residual smoothing to the case of solving linear systems with multiple right-hand sides. The resulting smoothed methods can reduce the residual gap with a low additional cost compared to their original counterparts. We demonstrate the effectiveness of the proposed approach through rounding error analysis and numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-08T07:00:00Z
DOI: 10.1137/21M1436774
Issue No: Vol. 43, No. 3 (2022)

 A Uniform Spectral Analysis for a Preconditioned All-at-Once System from
First-Order and Second-Order Evolutionary Problems
Authors: ShuLin Wu, Tao Zhou, Zhi Zhou
Pages: 1331–1353
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1331–1353, September 2022.
Solving evolutionary equations in a parallel-in-time manner is an attractive topic. The iterative algorithm based on the block $\alpha$-circulant preconditioning technique has shown promising advantages, especially for hyperbolic problems. By fast Fourier transform for factorizing the involved circulant matrices, the preconditioned iteration can be computed efficiently via the so-called diagonalization technique, which yields a direct parallel implementation across all time levels. In recent years, considerable efforts have been devoted to exploring the spectral property of the iteration matrix arising from the time-integrator used, which leads to many case-by-case studies. Denoting by $\mathcal{K}$ and $\mathcal{P}_\alpha$ the all-at-once matrix of the evolutionary PDEs and the corresponding block $\alpha$-circulant preconditioner, we present a systematic spectral analysis of the matrix $\mathcal{P}_\alpha^{-1}\mathcal{K}$ for both first-order and second-order evolutionary problems. For first-order problems our analysis works for all stable single-step time-integrators, while for second-order problems it works for a large class of symmetric two-step methods which can be of arbitrarily high order. Illustrative numerical experiments are presented to complement our theory.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-09T07:00:00Z
DOI: 10.1137/21M145358X
Issue No: Vol. 43, No. 3 (2022)
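The diagonalization technique mentioned in the abstract rests on the fact that a circulant matrix is diagonalized by the discrete Fourier transform, so a circulant solve costs two FFTs and a pointwise division. A minimal sketch for a plain (scalar) circulant system, not the block $\alpha$-circulant preconditioner itself:

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b where C is the circulant matrix with first column c.
    Since C = F^{-1} diag(fft(c)) F, the solve is two FFTs plus a
    pointwise division by the eigenvalues fft(c)."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# Check against a dense solve on a small diagonally dominant example.
n = 8
rng = np.random.default_rng(5)
c = rng.random(n)
c[0] += n                                          # keep fft(c) away from zero
C = np.array([np.roll(c, k) for k in range(n)]).T  # circulant from first column
b = rng.random(n)
x = circulant_solve(c, b)
print(np.allclose(C @ x, b))  # True
```

In the all-at-once setting, the same idea is applied blockwise: the $\alpha$-circulant time-stepping structure is diagonalized by a (scaled) FFT across time levels, decoupling the system into independent spatial solves, one per time level.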

 Strongly Minimal Self-Conjugate Linearizations for Polynomial and Rational
Matrices
Authors: Froilán M. Dopico, María C. Quintana, Paul Van Dooren
Pages: 1354–1381
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1354–1381, September 2022.
We prove that we can always construct strongly minimal linearizations of an arbitrary rational matrix from its Laurent expansion around the point at infinity, which happens to be the case for polynomial matrices expressed in the monomial basis. If the rational matrix has a particular self-conjugate structure, we show how to construct strongly minimal linearizations that preserve it. The structures that are considered are the Hermitian and skew-Hermitian rational matrices with respect to the real line, and the para-Hermitian and para-skew-Hermitian matrices with respect to the imaginary axis. We pay special attention to the construction of strongly minimal linearizations for the particular case of structured polynomial matrices. The proposed constructions lead to efficient numerical algorithms for constructing strongly minimal linearizations. The fact that they are valid for any rational matrix is an improvement on any other previous approach for constructing other classes of structure preserving linearizations, which are not valid for any structured rational or polynomial matrix. The use of the recent concept of strongly minimal linearization is the key for getting such generality. Strongly minimal linearizations are Rosenbrock's polynomial system matrices of the given rational matrix, but with a quadruple of linear polynomial matrices (i.e., pencils): $L(\lambda):=\left[\begin{array}{cc} A(\lambda) & B(\lambda) \\ C(\lambda) & D(\lambda) \end{array}\right]$, where $A(\lambda)$ is regular, and the pencils $\left[\begin{array}{cc} A(\lambda) & B(\lambda) \end{array}\right]$ and $\left[\begin{array}{c} A(\lambda) \\ C(\lambda) \end{array}\right]$ have no finite or infinite eigenvalues. Strongly minimal linearizations contain the complete information about the zeros, poles, and minimal indices of the rational matrix and allow one to very easily recover its eigenvectors and minimal bases.
Thus, they can be combined with algorithms for the generalized eigenvalue problem for computing the complete spectral information of the rational matrix.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-16T07:00:00Z
DOI: 10.1137/21M1453542
Issue No: Vol. 43, No. 3 (2022)

 Infinite GMRES for Parameterized Linear Systems
Open Access Article
Authors: Elias Jarlebring, Siobhán Correnty
Pages: 1382–1405
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1382–1405, September 2022.
We consider linear parameterized systems $A(\mu) x(\mu) = b$ for many different $\mu$, where $A$ is large and sparse and depends nonlinearly on $\mu$. Solving such systems individually for each $\mu$ would require great computational effort. In this work we propose to compute a partial parameterization $\tilde{x}(\mu) \approx x(\mu)$, where $\tilde{x}(\mu)$ is cheap to evaluate for many $\mu$. Our methods are based on the observation that a companion linearization can be formed where the dependence on $\mu$ is only linear. In particular, methods are presented that combine the well-established Krylov subspace method for linear systems, GMRES, with algorithms for nonlinear eigenvalue problems (NEPs) to generate a basis for the Krylov subspace. Within this new approach, the basis matrix is constructed in three different ways, using a tensor structure and exploiting that certain problems have low-rank properties. The methods are analyzed analogously to the standard convergence theory for GMRES for linear systems. More specifically, the error is estimated based on the magnitude of the parameter $\mu$ and the spectrum of the linear companion matrix, which corresponds to the reciprocal solutions of the corresponding NEP. Numerical experiments illustrate the competitiveness of the methods for large-scale problems. The simulations are reproducible and publicly available online.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-16T07:00:00Z
DOI: 10.1137/21M1410324
Issue No: Vol. 43, No. 3 (2022)

 A Note on Inexact Inner Products in GMRES

Authors: Serge Gratton, Ehouarn Simon, David TitleyPeloquin, Philippe L. Toint
Pages: 1406–1422
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1406–1422, September 2022.
We show to what extent the accuracy of the inner products computed in the GMRES iterative solver can be reduced as the iterations proceed without affecting the convergence rate or final accuracy achieved by the iterates. We bound the loss of orthogonality in GMRES with inexact inner products. We use this result to bound the ratio of the residual norm in inexact GMRES to the residual norm in exact GMRES and give a condition under which this ratio remains close to 1. We illustrate our results with examples in variable floating-point arithmetic.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-16T07:00:00Z
DOI: 10.1137/20M1320018
Issue No: Vol. 43, No. 3 (2022)

 Factorization of Singular Matrix Polynomials and Matrices with Circular
Higher Rank Numerical Ranges
Authors: Edward Poon, Ilya M. Spitkovsky, Hugo J. Woerdeman
Pages: 1423–1439
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1423–1439, September 2022.
Factorization of regular Hermitian-valued trigonometric polynomials (on the unit circle) and Hermitian-valued polynomials (on the real line) has been studied well. In this paper we drop the condition of regularity and study factorization of singular Hermitian-valued (trigonometric) polynomials. We subsequently apply the results to obtain a characterization of matrices with a circular higher rank numerical range and derive a new version of Anderson's theorem. As a special case, we obtain a new characterization of matrices with a circular numerical range.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-18T07:00:00Z
DOI: 10.1137/22M1475934
Issue No: Vol. 43, No. 3 (2022)

 RidgeSketch: A Fast Sketching-Based Solver for Large-Scale Ridge
Regression
Authors: Nidham Gazagnadou, Mark Ibrahim, Robert M. Gower
Pages: 1440–1468
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1440–1468, September 2022.
We propose new variants of the sketch-and-project method for solving large-scale ridge regression problems. First, we propose a new momentum alternative and provide a theorem showing that it can speed up the convergence of sketch-and-project, through a fast sublinear convergence rate. We carefully delimit under what settings this new sublinear rate is faster than the previously known linear rate of convergence of sketch-and-project without momentum. Second, we consider combining the sketch-and-project method with new modern sketching methods such as the Count sketch, the SubCount sketch (a new method we propose), and subsampled Hadamard transforms. We show experimentally that, when combined with the sketch-and-project method, the (Sub)Count sketch is very effective on sparse data and the standard Subsample sketch is effective on dense data. Indeed, we show that these sketching methods, combined with our new momentum scheme, result in methods that are competitive even when compared to the conjugate gradient method on real large-scale data. In contrast, we show that the subsampled Hadamard transform does not perform well in this setting, despite the use of fast Hadamard transforms, and that recently proposed acceleration schemes do not work well in practice either. To support all of our experimental findings, and to invite the community to validate and extend our results, with this paper we are also releasing an open-source software package: RidgeSketch. We designed this object-oriented package in Python for testing sketch-and-project methods and benchmarking ridge regression solvers. RidgeSketch is highly modular, and new sketching methods can easily be added as subclasses. We provide code snippets of our package in the appendix.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 20220822T07:00:00Z
DOI: 10.1137/21M1422963
Issue No: Vol. 43, No. 3 (2022)
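The sketch-and-project iteration summarized in the abstract above is simple to prototype. The following is a minimal NumPy sketch (not the RidgeSketch package, whose API may differ) of block-row sketch-and-project with heavy-ball momentum for a consistent system $Ax=b$; the block size and momentum parameter are illustrative choices.

```python
import numpy as np

def sketch_and_project(A, b, sketch_size=10, beta=0.4, iters=3000, seed=0):
    """Block-row sketch-and-project with heavy-ball momentum for a consistent
    system Ax = b (a minimal illustration, not the RidgeSketch package)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    x_prev = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(m, size=sketch_size, replace=False)    # subsampling sketch
        # project the iterate onto {z : A[S] z = b[S]}, then add momentum
        step = np.linalg.lstsq(A[S], A[S] @ x - b[S], rcond=None)[0]
        x, x_prev = x - step + beta * (x - x_prev), x
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
b = A @ rng.standard_normal(20)                               # consistent system
x = sketch_and_project(A, b)
print(np.linalg.norm(A @ x - b))  # small residual
```

Each step solves only a `sketch_size`-row subproblem, which is the point of the method on large data; `np.linalg.lstsq` returns the min-norm solution, so `x - step` is exactly the projection onto the sampled constraints.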

 On the Condition Number of the Shifted Real Ginibre Ensemble

Authors: Giorgio Cipolloni, László Erdős, Dominik Schröder
Pages: 1469–1487
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1469–1487, September 2022.
We derive an accurate lower tail estimate on the lowest singular value $\sigma_1(X-z)$ of a real Gaussian (Ginibre) random matrix $X$ shifted by a complex parameter $z$. Such a shift effectively changes the upper tail behavior of the condition number $\kappa(X-z)$ from the slower $\mathbb{P}(\kappa(X-z)\ge t)\lesssim 1/t$ decay typical for real Ginibre matrices to the faster $1/t^2$ decay seen for complex Ginibre matrices, as long as $z$ is away from the real axis. This sharpens and resolves a recent conjecture in [J. Banks et al., https://arxiv.org/abs/2005.08930, 2020] on the regularizing effect of the real Ginibre ensemble with a genuinely complex shift. As a consequence we obtain an improved upper bound on the eigenvalue condition numbers (known also as the eigenvector overlaps) for real Ginibre matrices. The main technical tool is a rigorous supersymmetric analysis from our earlier work [Probab. Math. Phys., 1 (2020), pp. 101–146].
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-30T07:00:00Z
DOI: 10.1137/21M1424408
Issue No: Vol. 43, No. 3 (2022)

 Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition

Authors: Brett W. Larsen, Tamara G. Kolda
Pages: 1488–1517
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1488–1517, September 2022.
The low-rank canonical polyadic tensor decomposition is useful in data analysis and can be computed by solving a sequence of overdetermined least squares subproblems. Motivated by consideration of sparse tensors, we propose sketching each subproblem using leverage scores to select a subset of the rows, with probabilistic guarantees on the solution accuracy. We randomly sample rows proportional to leverage score upper bounds that can be efficiently computed using the special Khatri–Rao subproblem structure inherent in tensor decomposition. Crucially, for a $(d+1)$-way tensor, the number of rows in the sketched system is $O(r^d/\epsilon)$ for a decomposition of rank $r$ and $\epsilon$-accuracy in the least squares solve, independent of both the size and the number of nonzeros in the tensor. Along the way, we provide a practical solution to the generic matrix sketching problem of sampling overabundance for high-leverage-score rows, proposing to include such rows deterministically and to combine repeated samples in the sketched system; we conjecture that this can lead to improved theoretical bounds. Numerical results on real-world large-scale tensors show the method is significantly faster than deterministic methods at nearly the same level of accuracy.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-08-30T07:00:00Z
DOI: 10.1137/21M1441754
Issue No: Vol. 43, No. 3 (2022)
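The core mechanism in the abstract above, sampling least squares rows proportionally to leverage scores, can be illustrated generically. The sketch below computes exact leverage scores via a thin QR (the paper instead uses cheap upper bounds exploiting the Khatri–Rao structure, which this toy example does not reproduce) and solves a reweighted subsampled problem.

```python
import numpy as np

def leverage_sample_lstsq(A, b, s, seed=0):
    """Sketch min ||Ax - b|| by sampling s rows with probability proportional
    to exact leverage scores (generic illustration; the paper uses cheap
    structure-exploiting upper bounds instead of exact scores)."""
    Q, _ = np.linalg.qr(A)                  # leverage score of row i is ||Q[i, :]||^2
    p = np.sum(Q**2, axis=1)
    p /= p.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(A.shape[0], size=s, p=p)
    w = 1.0 / np.sqrt(s * p[idx])           # importance-sampling reweighting
    return np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)[0]

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(5000)
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_sk = leverage_sample_lstsq(A, b, s=500)   # 10x fewer rows in the solve
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_full - b))  # close to 1
```

The $1/\sqrt{s\,p_i}$ rescaling makes the sketched normal equations unbiased, which is what the probabilistic accuracy guarantees rest on.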

 Lipschitz Analysis of Generalized Phase Retrievable Matrix Frames

Authors: Radu Balan, Chris B. Dock
Pages: 1518–1571
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 3, Page 1518–1571, September 2022.
The classical phase retrieval problem arises in contexts ranging from speech recognition to x-ray crystallography and quantum state tomography. The generalization to matrix frames is natural in the sense that it corresponds to quantum tomography of impure states. We provide computable global stability bounds for the quasilinear analysis map $\beta$ and a path forward for understanding related problems in terms of the differential geometry of key spaces. In particular, we manifest a Whitney stratification of the positive semidefinite matrices of low rank which allows us to “stratify” the computation of the global stability bound. We show that for the impure state case no such global stability bounds can be obtained for the nonlinear analysis map $\alpha$ with respect to certain natural distance metrics. Finally, our computation of the global lower Lipschitz constant for the $\beta$ analysis map provides novel conditions for a frame to be generalized phase retrievable.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-09-08T07:00:00Z
DOI: 10.1137/21M1435446
Issue No: Vol. 43, No. 3 (2022)

 Geometric Inexact Newton Method for Generalized Singular Values of
Grassmann Matrix Pair
Authors: WeiWei Xu, Michael K. Ng, ZhengJian Bai
Pages: 535–560
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 535–560, June 2022.
In this paper, we first give new model formulations for computing an arbitrary generalized singular value of a Grassmann matrix pair or a real matrix pair. In these new formulations, we need to solve matrix optimization problems with unitary constraints or orthogonal constraints. We propose a geometric inexact Newton-conjugate gradient (Newton-CG) method for solving the resulting matrix optimization problems. Under some mild assumptions, we establish the global and quadratic convergence of the proposed method for the complex case. Some numerical examples are given to illustrate the effectiveness and high accuracy of the proposed method.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-05T07:00:00Z
DOI: 10.1137/20M1383720
Issue No: Vol. 43, No. 2 (2022)

 A Boundary-Layer Preconditioner for Singularly Perturbed Convection-Diffusion
Authors: Scott P. MacLachlan, Niall Madden, Thái Anh Nhan
Pages: 561–583
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 561–583, June 2022.
Motivated by a wide range of real-world problems whose solutions exhibit boundary and interior layers, the numerical analysis of discretizations of singularly perturbed differential equations is an established subdiscipline within the study of the numerical approximation of solutions to differential equations. Consequently, much is known about how to accurately and stably discretize such equations on a priori adapted meshes in order to properly resolve the layer structure present in their continuum solutions. However, despite being a key step in the numerical simulation process, much less is known about the efficient and accurate solution of the linear systems of equations corresponding to these discretizations. In this paper, we discuss problems associated with the application of direct solvers to these discretizations, and we propose a preconditioning strategy that is tuned to the matrix structure induced by using layer-adapted meshes for convection-diffusion equations, proving a strong condition-number bound on the preconditioned system in one spatial dimension and a weaker bound in two spatial dimensions. Numerical results confirm the efficiency of the resulting preconditioners in one and two dimensions, with time-to-solution of less than one second for representative problems on 1024 × 1024 meshes and up to 40× speedup over standard sparse direct solvers.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1443297
Issue No: Vol. 43, No. 2 (2022)

 Theoretical and Computable Optimal Subspace Expansions for Matrix
Eigenvalue Problems
Authors: Zhongxiao Jia
Pages: 584–604
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 584–604, June 2022.
Consider the optimal subspace expansion problem for the matrix eigenvalue problem $Ax=\lambda x$: Which vector $w$ in the current subspace $\mathcal{V}$, after being multiplied by $A$, provides an optimal subspace expansion for approximating a desired eigenvector $x$ in the sense that $x$ has the smallest angle with the expanded subspace $\mathcal{V}_w=\mathcal{V}+\mathrm{span}\{Aw\}$, i.e., $w_{opt}=\arg\max_{w\in\mathcal{V}}\cos\angle(\mathcal{V}_w,x)$? This problem is important as many iterative methods construct nested subspaces that successively expand $\mathcal{V}$ to $\mathcal{V}_w$. An expression of $w_{opt}$ was derived by Ye [Linear Algebra Appl., 428 (2008), pp. 911–918] for general $A$, but it could not be exploited to construct a computable (nearly) optimally expanded subspace. Ye instead derived a maximization characterization of $\cos\angle(\mathcal{V}_w,x)$ for a given $w\in \mathcal{V}$ when $A$ is Hermitian. We generalize Ye's maximization characterization to the general case and find its maximizer. Our main contributions consist of explicit expressions of $w_{opt}$, $(I-P_V)Aw_{opt}$, and the optimally expanded subspace $\mathcal{V}_{w_{opt}}$ for general $A$, where $P_V$ is the orthogonal projector onto $\mathcal{V}$. These results are fully exploited to obtain computable optimally expanded subspaces within the framework of the standard, harmonic, refined, and refined harmonic Rayleigh–Ritz methods. We show how to efficiently implement the proposed subspace expansion approaches. Numerical experiments demonstrate the effectiveness of our computable optimal expansions.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/20M1331032
Issue No: Vol. 43, No. 2 (2022)

 Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations

Authors: Jamie Haddock, Deanna Needell, Elizaveta Rebrova, William Swartworth
Pages: 605–637
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 605–637, June 2022.
Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $Ax = b$ that are inconsistent due to corruptions in the measurement vector $b$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1429187
Issue No: Vol. 43, No. 2 (2022)
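The quantile-screening idea in the abstract above has a very compact randomized Kaczmarz instance. The sketch below (an illustration under assumed parameter choices, not the authors' code) applies a sampled row's update only when its residual magnitude is at most a sampled residual quantile, so grossly corrupted equations are skipped.

```python
import numpy as np

def quantile_rk(A, b, q=0.7, iters=6000, sample=50, seed=0):
    """Quantile-based randomized Kaczmarz (illustrative sketch): apply a sampled
    row's Kaczmarz update only if its residual magnitude is at most the
    q-quantile of a sample of residual magnitudes."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(m, size=sample, replace=False)
        thresh = np.quantile(np.abs(A[S] @ x - b[S]), q)
        i = rng.integers(m)
        r = A[i] @ x - b[i]
        if abs(r) <= thresh:                 # rows above the quantile look corrupted
            x -= r / (A[i] @ A[i]) * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
b[rng.choice(500, size=50, replace=False)] += 100.0   # corrupt 10% of b
x = quantile_rk(A, b)
print(np.linalg.norm(x - x_true))  # converges toward the uncorrupted solution
```

Note that a plain least squares solve of this corrupted system would be pulled far from `x_true`; the quantile screen is what recovers the uncorrupted solution.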

 Mixed Precision Recursive Block Diagonalization for Bivariate Functions of
Matrices
Authors: Stefano Massei, Leonardo Robol
Pages: 638–660
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 638–660, June 2022.
Various numerical linear algebra problems can be formulated as evaluating a bivariate function of matrices. The most notable examples are the Fréchet derivative along a direction, the evaluation of (univariate) functions of Kronecker-sum-structured matrices, and the solution of Sylvester matrix equations. In this work, we propose a recursive block diagonalization algorithm for computing bivariate functions of matrices of small to medium size, for which dense linear algebra is appropriate. The algorithm combines a blocking strategy, as in the Schur–Parlett scheme, and an evaluation procedure for the diagonal blocks. We discuss two implementations of the latter. The first is a natural choice based on Taylor expansions, whereas the second is derivative-free and relies on a multiprecision perturb-and-diagonalize approach. In particular, the appropriate use of multiprecision guarantees backward stability without affecting the efficiency in the generic case. This makes the second approach more robust. The whole method has cubic complexity, and it is closely related to the well-known Bartels–Stewart algorithm for Sylvester matrix equations when applied to $f(x,y)=\frac{1}{x+y}$. We validate the performance of the proposed numerical method on several problems with different conditioning properties.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-19T07:00:00Z
DOI: 10.1137/21M1407872
Issue No: Vol. 43, No. 2 (2022)
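The connection noted in the abstract above between Sylvester equations and the bivariate function $f(x,y)=1/(x+y)$ can be checked directly: a Bartels–Stewart solve of $AX+XB=C$ coincides with applying the inverse of the Kronecker-sum operator to $\mathrm{vec}(C)$. A small SciPy check (illustrative, not the authors' recursive algorithm):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)   # spectra well separated from 0
B = rng.standard_normal((3, 3)); B = B @ B.T + 4 * np.eye(3)
C = rng.standard_normal((4, 3))

X = solve_sylvester(A, B, C)        # Bartels-Stewart solve of AX + XB = C

# Equivalent formulation: vec(X) = (I (x) A + B^T (x) I)^{-1} vec(C), i.e.
# applying f(x, y) = 1/(x + y) to the Kronecker-sum operator (column-major vec).
K = np.kron(np.eye(3), A) + np.kron(B.T, np.eye(4))
vecX = np.linalg.solve(K, C.flatten(order="F"))
print(np.allclose(X.flatten(order="F"), vecX))  # the two agree
```

The explicit Kronecker form costs $O(m^3 n^3)$ and is only for verification; Bartels–Stewart achieves the cubic complexity the abstract refers to.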

 A Block Bidiagonalization Method for Fixed-Accuracy Low-Rank Matrix
Approximation
Authors: Eric Hallman
Pages: 661–680
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 661–680, June 2022.
We present randUBV, a randomized algorithm for matrix sketching based on the block Lanczos bidiagonalization process. Given a matrix $\mathbb{A}$, it produces a low-rank approximation of the form $\mathbb{UBV}^T$, where $\mathbb{U}$ and $\mathbb{V}$ have orthonormal columns in exact arithmetic and $\mathbb{B}$ is block bidiagonal. In finite precision, the columns of both $\mathbb{U}$ and $\mathbb{V}$ will be close to orthonormal. Our algorithm is closely related to the randQB algorithms of Yu, Gu, and Li [SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339–1359] in that the entries of $\mathbb{B}$ are incrementally generated and the Frobenius norm approximation error may be efficiently estimated. It is therefore suitable for the fixed-accuracy problem and so is designed to terminate as soon as a user-specified error tolerance is reached. Numerical experiments suggest that the block Lanczos method is generally competitive with or superior to algorithms that use power iteration, even when $\mathbb{A}$ has significant clusters of singular values.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-27T07:00:00Z
DOI: 10.1137/21M1397866
Issue No: Vol. 43, No. 2 (2022)

 Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One
Eigenvalue Problems
Authors: Jesse L. Barlow, Stanley C. Eisenstat, Nevena Jakovčević Stor, Ivan Slapničar
Pages: 681–709
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 681–709, June 2022.
We discuss the eigenproblem for the symmetric arrowhead matrix $C = \left(\begin{smallmatrix} D & z \\ z^T & \alpha \end{smallmatrix}\right)$, where $D \in \mathbb{R}^{n \times n}$ is diagonal, $z \in \mathbb{R}^n$, and $\alpha \in \mathbb{R}$, in order to examine criteria for when components of $z$ may be set to zero. We show that whenever two eigenvalues of $C$ are sufficiently close, some component of $z$ may be deflated to zero, without significantly perturbing the eigenvalues of $C$, by either substituting zero for that component or performing a Givens rotation on each side of $C$. The strategy for this deflation requires $\mathcal{O}(n^2)$ comparisons. Although it is too costly for many applications, when we use it as a benchmark, we can analyze the effectiveness of $\mathcal{O}(n)$ heuristics that are more practical approaches to deflation. We show that one such $\mathcal{O}(n)$ heuristic finds all sets of three or more nearby eigenvalues, misses sets of two or more nearby eigenvalues under limited circumstances, and produces a reduced matrix whose eigenvalues are distinct in double the working precision. Using the $\mathcal{O}(n)$ heuristic, we develop a more aggressive method for finding converged eigenvalues in the symmetric Lanczos algorithm. It is shown that, except for pathological exceptions, the $\mathcal{O}(n)$ heuristic finds nearly as much deflation as the $\mathcal{O}(n^2)$ algorithm that reduces an arrowhead matrix to one that cannot be deflated further. The deflation algorithms and their analysis are extended to the symmetric diagonal-plus-rank-one eigenvalue problem and lead to a better deflation strategy for the LAPACK routine dstedc.f.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-02T07:00:00Z
DOI: 10.1137/21M139205X
Issue No: Vol. 43, No. 2 (2022)
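The Givens-rotation deflation in the abstract above can be seen in a toy NumPy example (an illustration of the mechanism only, not the paper's deflation criterion): rotating the two rows/columns whose diagonal entries nearly coincide zeroes one $z$-component exactly, at the cost of a tiny off-diagonal fill that can be dropped with an eigenvalue perturbation on the order of the diagonal gap.

```python
import numpy as np

# Arrowhead matrix C = [[diag(D), z], [z^T, alpha]] with two close diagonal entries.
D = np.array([1.0, 2.0, 2.0 + 1e-8, 5.0])
z = np.array([0.3, 0.4, 0.5, 0.6])
alpha = 1.5

n = len(D)
C = np.zeros((n + 1, n + 1))
C[:n, :n] = np.diag(D)
C[:n, n] = C[n, :n] = z
C[n, n] = alpha
eig = np.linalg.eigvalsh(C)

# Givens rotation mixing indices 1 and 2 (the close pair in D):
h = np.hypot(z[1], z[2])
c, s = z[1] / h, z[2] / h
G = np.eye(n + 1)
G[1, 1], G[1, 2], G[2, 1], G[2, 2] = c, s, -s, c
Cd = G @ C @ G.T            # orthogonally similar: z-component 3 is rotated to 0
off = Cd[1, 2]              # = c*s*(D[2] - D[1]), tiny because the pair is close
Cd[1, 2] = Cd[2, 1] = 0.0   # drop the fill to restore arrowhead form (deflation)
eig_d = np.linalg.eigvalsh(Cd)
print(abs(off), np.abs(eig - eig_d).max())  # both tiny
```

By Weyl's inequality the eigenvalue perturbation is bounded by the norm of the dropped fill, here about $c\,s\,|D_3-D_2| \approx 5\times 10^{-9}$.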

 A Block Minimum Residual Norm Subspace Solver with Partial Convergence
Management for Sequences of Linear Systems
Authors: Luc Giraud, YanFei Jing, Yanfei Xiang
Pages: 710–739
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 710–739, June 2022.
We are concerned with the iterative solution of linear systems with multiple right-hand sides available one group after another, with possibly slowly varying left-hand sides. For such sequences of linear systems, we first develop a new block minimum norm residual approach that combines two main ingredients. The first component exploits ideas from GCRO-DR [Parks et al., SIAM J. Sci. Comput., 28 (2006), pp. 1651–1674], enabling us to recycle information from one solve to the next. The second component is the numerical mechanism for managing the partial convergence of the right-hand sides, referred to as inexact breakdown detection in IB-BGMRES [Robbé and Sadkane, Linear Algebra Appl., 419 (2006), pp. 265–285], which enables the monitoring of the rank deficiency in the residual space basis expanded blockwise. Next, for the class of block minimum norm residual approaches that rely on a block Arnoldi-like equality between the search space and the residual space (e.g., any block GMRES or block GCRO variants), we introduce new search space expansion policies defined on novel criteria to detect partial convergence. These detection criteria are tuned to the selected normwise backward error stopping criterion and targeted convergence threshold, enabling us to monitor the computational effort while ensuring the final accuracy of each individual solution. Numerical experiments are reported to illustrate the numerical and computational features of both the new block Krylov solvers and the new search space block expansion policies.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-03T07:00:00Z
DOI: 10.1137/21M1401127
Issue No: Vol. 43, No. 2 (2022)

 Estimation of Structured Distances to Singularity for Matrix Pencils with
Symmetry Structures: A Linear Algebra-Based Approach
Authors: Anshul Prajapati, Punit Sharma
Pages: 740–763
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 740–763, June 2022.
We study the structured distance to singularity for a given regular matrix pencil $A+sE$, where $(A,E)\in \mathbb{S} \subseteq (\mathbb{C}^{n,n})^2$. This includes Hermitian, skew-Hermitian, $*$-even, $*$-odd, $*$-palindromic, $T$-palindromic, and dissipative Hamiltonian pencils. We present a purely linear algebra-based approach to derive explicit computable formulas for the distance to the nearest structured pencil $(A-\Delta_A)+s(E-\Delta_E)$ such that $A-\Delta_A$ and $E-\Delta_E$ have a common null vector. We then obtain a family of computable lower bounds for the unstructured and structured distances to singularity. Numerical experiments suggest that in many cases there is a significant difference between the structured and unstructured distances. This approach extends to structured matrix polynomials of higher degree.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/21M1423269
Issue No: Vol. 43, No. 2 (2022)

 Linearizable Eigenvector Nonlinearities

Authors: Rob Claes, Elias Jarlebring, Karl Meerbergen, Parikshit Upadhyaya
Pages: 764–786
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 764–786, June 2022.
We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem, this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-10T07:00:00Z
DOI: 10.1137/21M142931X
Issue No: Vol. 43, No. 2 (2022)

 Error Bounds for Lanczos-Based Matrix Function Approximation

Authors: Tyler Chen, Anne Greenbaum, Cameron Musco, Christopher Musco
Pages: 787–811
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 787–811, June 2022.
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(A)b$ when $A$ is a Hermitian matrix and $b$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $A$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $b^H f(A) b$ and demonstrate the effectiveness of our bounds with numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-12T07:00:00Z
DOI: 10.1137/21M1427784
Issue No: Vol. 43, No. 2 (2022)
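The Lanczos-FA iteration analyzed in the abstract above is short to state: run $k$ Lanczos steps from $b$ to get $Q$ and tridiagonal $T$, then return $\|b\|\,Q f(T) e_1$. A minimal NumPy sketch with full reorthogonalization (an illustration; the paper's bounds, not this code, are its contribution):

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    """Approximate f(A) b for Hermitian A: k Lanczos steps give A ~ Q T Q^T on
    the Krylov space; return ||b|| * Q f(T) e1 (illustrative sketch)."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    tvals, tvecs = np.linalg.eigh(T)
    return np.linalg.norm(b) * (Q @ (tvecs @ (f(tvals) * tvecs[0])))  # ||b|| Q f(T) e1

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T / 200 + np.eye(200)          # Hermitian positive definite
b = rng.standard_normal(200)

approx = lanczos_fa(A, b, k=30, f=np.sqrt)
w, V = np.linalg.eigh(A)                 # exact A^{1/2} b for comparison
exact = V @ (np.sqrt(w) * (V.T @ b))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small
```

With a well-conditioned spectrum as here, 30 steps already give near machine-precision accuracy for $f=\sqrt{\cdot}$; the paper's bounds describe exactly how this error tracks the Lanczos linear-system error.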

 Solving the Cubic Regularization Model by a Nested Restarting Lanczos
Method
Authors: Xiaojing Jia, Xin Liang, Chungen Shen, LeiHong Zhang
Pages: 812–839
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 812–839, June 2022.
As a variant of the classical trust-region method for unconstrained optimization, the cubic regularization of the Newton method introduces a cubic regularization term in the surrogate objective to adaptively adjust the updating step, and it deals with cases with both indefinite and definite Hessians. It has been demonstrated that the cubic regularization of the Newton method enjoys good global convergence and is an efficient solver for unconstrained minimization. The main computational cost in each iteration is to solve a cubic regularization subproblem. The Newton iteration is a common and efficient method for this task, especially for small to medium-size problems. For large problems, a Lanczos-type method was proposed in [C. Cartis, N. I. M. Gould, and P. L. Toint, Math. Program., 127 (2011), pp. 245–295]. This method relies on a Lanczos procedure to reduce the large-scale cubic regularization subproblem to a small one and solve it by the Newton iteration. For large and ill-conditioned problems, the Lanczos method still needs to produce a large-dimensional subspace to achieve a relatively high-accuracy approximation, which degrades its overall performance. In this paper, we first show that the cubic regularization subproblem can be equivalently transformed into a quadratic eigenvalue problem, which provides an eigensolver alternative to the Newton iteration. We then establish the convergence of the Lanczos method and also propose a nested restarting version for the large-scale and ill-conditioned case. By integrating the nested restarting Lanczos iteration into the cubic regularization of the Newton method, we verify its efficiency for solving large-scale minimization problems from the CUTEst collection.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-23T07:00:00Z
DOI: 10.1137/21M1436324
Issue No: Vol. 43, No. 2 (2022)

 New Riemannian Preconditioned Algorithms for Tensor Completion via
Polyadic Decomposition
Authors: Shuyu Dong, Bin Gao, Yu Guan, François Glineur
Pages: 840–866
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 840–866, June 2022.
We propose new Riemannian preconditioned algorithms for low-rank tensor completion via the polyadic decomposition of a tensor. These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form. This new metric is designed using an approximation of the diagonal blocks of the Hessian of the tensor completion cost function and thus has a preconditioning effect on these algorithms. We prove that the proposed Riemannian gradient descent algorithm globally converges to a stationary point of the tensor completion problem, with convergence rate estimates using the Łojasiewicz property. Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms. Moreover, the proposed algorithms display a greater tolerance for overestimated rank parameters in terms of the tensor recovery performance and thus enable a flexible choice of the rank parameter.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-24T07:00:00Z
DOI: 10.1137/21M1394734
Issue No: Vol. 43, No. 2 (2022)

 Maximal Rank of ${m}\times{n}\times 2$ Tensors over Arbitrary Fields

Authors: Xiaoyu Song, Baodong Zheng, Riguang Huang
Pages: 867–881
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 867–881, June 2022.
In this paper, we completely characterize the maximal rank of $m\times n\times 2$ tensors over an arbitrary field. We note that the maximal tensor rank may differ over different fields, and we find that the maximal rank of a given $m\times n\times 2$ tensor is the same over every field except $\mathbb{F}_2$. We therefore treat two cases: the field $\mathbb{F}_2$ and fields other than $\mathbb{F}_2$.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-06T07:00:00Z
DOI: 10.1137/21M1431424
Issue No: Vol. 43, No. 2 (2022)

 Simultaneous Diagonalization via Congruence of Hermitian Matrices: Some
Equivalent Conditions and a Numerical Solution
Authors: Thanh Hieu Le, Thi Ngan Nguyen
Pages: 882  911
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 882911, June 2022.
This paper aims at solving the Hermitian SDC problem, which is the simultaneous diagonalization via $*$congruence of a finite collection of Hermitian matrices. The matrices do not need to pairwise commute. We first provide some equivalent conditions for such a matrix collection to be simultaneously diagonalizable via congruence. Interestingly, one of these conditions requires a positive definite solution to an appropriate system of linear equations over Hermitian matrices. Based on this theoretical result, we propose a polynomialtime algorithm for numerically solving the Hermitian SDC problem. The proposed algorithm is a combination of (1) a detection of whether the initial matrix collection is simultaneously diagonalizable via congruence by solving an appropriate semidefinite program and (2) an existing Jacobilike algorithm for simultaneously diagonalizing (via congruence) the new collection of commuting Hermitian matrices derived from the previous stage. Illustrating examples and numerical tests with MATLAB are also presented. Consequently, the solvable condition for the SDC problem of arbitrarily square matrices will be obtained by dealing with their Hermitian and skewHermitian parts.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-07T07:00:00Z
DOI: 10.1137/21M1390657
Issue No: Vol. 43, No. 2 (2022)
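For the special case of two Hermitian matrices, one of which is positive definite, simultaneous diagonalization via $*$-congruence reduces to a Hermitian generalized eigenvalue problem. A minimal numpy/scipy sketch of that special case (illustrative only; it is not the paper's SDP-based algorithm, which handles arbitrary finite collections without a definiteness assumption):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4

# B: Hermitian positive definite, A: arbitrary Hermitian
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = M @ M.conj().T + n * np.eye(n)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2

# Generalized eigenproblem A v = w B v; eigh normalizes so that V^* B V = I
w, V = eigh(A, B)

# V is a single congruence transformation diagonalizing both matrices
D_A = V.conj().T @ A @ V   # ~ diag(w)
D_B = V.conj().T @ B @ V   # ~ identity
```

With more than two matrices, or without a positive definite member, no such direct reduction exists, which is what motivates the equivalent conditions and the SDP-based detection step in the paper.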

 $(L_r,L_r,1)$-Decompositions, Sparse Component Analysis, and the Blind
Separation of Sums of Exponentials
Authors: Nithin Govindarajan, Ethan N. Epperly, Lieven De Lathauwer
Pages: 912-938
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 912-938, June 2022.
We derive new uniqueness results for $(L_r,L_r,1)$-type block-term decompositions of third-order tensors by drawing connections to sparse component analysis. It is shown that our uniqueness results have a natural application in the context of the blind source separation problem, since they ensure uniqueness even among $(L_r,L_r,1)$-decompositions with incomparable rank profiles, allowing for stronger separation results for signals consisting of sums of exponentials in the presence of common poles among the source signals. As a byproduct, this line of ideas also suggests a new approach for computing $(L_r,L_r,1)$-decompositions, which proceeds by first computing a canonical polyadic decomposition of the input tensor and then performing a sparse factorization on the third factor matrix.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-13T07:00:00Z
DOI: 10.1137/21M1426444
Issue No: Vol. 43, No. 2 (2022)
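As a reminder of the structure involved, a single $(L,L,1)$ block term is the outer product of a rank-$L$ matrix $A B^{T}$ with a vector $c$, so every frontal slice of the resulting tensor has rank at most $L$. A small numpy sketch of one such term (illustrative only, not the paper's decomposition algorithm; all sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, L = 5, 6, 4, 2

# One (L, L, 1) block term: (A @ B.T) outer-multiplied with the vector c
A = rng.standard_normal((I, L))
B = rng.standard_normal((J, L))
c = rng.standard_normal(K)
T = np.einsum('il,jl,k->ijk', A, B, c)

# Each frontal slice T[:, :, k] equals c[k] * (A @ B.T), hence has rank <= L
```

A full $(L_r,L_r,1)$-decomposition is a sum of several such terms with possibly different ranks $L_r$, which is where the incomparable rank profiles discussed above arise.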

 Sketching with Kerdock's Crayons: Fast Sparsifying Transforms for
Arbitrary Linear Maps
Authors: Tim Fuchs, David Gross, Felix Krahmer, Richard Kueng, Dustin Mixon
Pages: 939-952
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 939-952, June 2022.
Given an arbitrary matrix $A\in\mathbb{R}^{m\times n}$, we consider the fundamental problem of computing $Ax$ for any $x\in\mathbb{R}^n$ such that $Ax$ is $s$-sparse. While fast algorithms exist for particular choices of $A$, such as the discrete Fourier transform, there are hardly any approaches that beat standard matrix-vector multiplication for realistic problem dimensions without such structural assumptions. In this paper, we devise a randomized approach to tackle the unstructured case. Our method relies on a representation of $A$ in terms of certain real-valued mutually unbiased bases derived from Kerdock sets. In the preprocessing phase of our algorithm, we compute this representation of $A$ in $O(mn^2\log n + n^2 \log^2 n)$ operations. Next, given any unit vector $x\in\mathbb{R}^n$ such that $Ax$ is $s$-sparse, our randomized fast transform uses this representation of $A$ to compute the entrywise $\epsilon$-hard threshold of $Ax$ with high probability in only $O(sn + \epsilon^{-2}\|A\|_{2\to\infty}^2 (m+n)\log m)$ operations. In addition to a performance guarantee, we provide numerical results that demonstrate the plausibility of real-world implementation of our algorithm.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-21T07:00:00Z
DOI: 10.1137/21M1438992
Issue No: Vol. 43, No. 2 (2022)
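The entrywise $\epsilon$-hard threshold appearing in the guarantee above simply zeroes every entry of magnitude at most $\epsilon$, keeping the large entries exactly. A one-line numpy sketch of that operation (the fast Kerdock-based transform itself is well beyond a short snippet):

```python
import numpy as np

def hard_threshold(y, eps):
    """Entrywise eps-hard threshold: keep entries with |y_i| > eps, zero the rest."""
    out = y.copy()
    out[np.abs(out) <= eps] = 0.0
    return out

y = np.array([0.05, -2.0, 0.3, 1e-3])
z = hard_threshold(y, 0.1)   # only the entries -2.0 and 0.3 survive
```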

 Computing the Riemannian Logarithm on the Stiefel Manifold: Metrics,
Methods, and Performance
Authors: Ralf Zimmermann, Knut Hüper
Pages: 953-980
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 953-980, June 2022.
We address the problem of computing Riemannian normal coordinates on the real, compact Stiefel manifold of orthonormal frames. The Riemannian normal coordinates are based on the so-called Riemannian exponential and the associated Riemannian logarithm map and enable one to transfer almost any computational procedure to the realm of the Stiefel manifold. To compute the Riemannian logarithm is to solve the (local) geodesic endpoint problem. Instead of restricting the consideration to geodesics with respect to a single selected metric, we consider a family of Riemannian metrics introduced by Hüper, Markina, and Silva Leite that includes the Euclidean and the canonical metric as prominent examples. As main contributions, we provide (1) a unified, structured, reduced formula for the Stiefel geodesics. The formula is unified in the sense that it works for the full family of metrics under consideration. It is structured in the sense that it relies exclusively on matrix exponentials of skew-symmetric matrices. It is reduced in the sense that the matrices whose exponentials must be computed are of smaller dimension. We provide (2) a unified method to tackle the geodesic endpoint problem numerically, and (3) an improvement of the existing Riemannian log algorithm under the canonical metric in terms of computational efficiency. The findings are illustrated by means of numerical examples, where the novel algorithms prove to be the most efficient methods known to date.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-24T07:00:00Z
DOI: 10.1137/21M1425426
Issue No: Vol. 43, No. 2 (2022)
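For the canonical metric specifically, the classical reduced geodesic formula of Edelman, Arias, and Smith already has the shape described above: one matrix exponential of a $2p\times 2p$ skew-symmetric block matrix. A numpy/scipy sketch of that exponential map, whose inversion is the geodesic endpoint problem (this is the textbook formula, not the paper's unified family-of-metrics version):

```python
import numpy as np
from scipy.linalg import expm, qr

def stiefel_exp_canonical(X, Delta, t=1.0):
    """Canonical-metric geodesic on St(n, p) through X with velocity Delta
    (Edelman-Arias-Smith reduced form; tangency means X.T @ Delta is skew)."""
    n, p = X.shape
    A = X.T @ Delta
    Q, R = qr((np.eye(n) - X @ X.T) @ Delta, mode='economic')
    blk = np.block([[A, -R.T], [R, np.zeros((p, p))]])   # skew-symmetric, 2p x 2p
    return np.hstack([X, Q]) @ expm(t * blk)[:, :p]

rng = np.random.default_rng(0)
n, p = 6, 2
X, _ = qr(rng.standard_normal((n, p)), mode='economic')   # a point on St(n, p)
Z = rng.standard_normal((n, p))
S = X.T @ Z
Delta = Z - X @ (S + S.T) / 2    # project Z onto the tangent space at X

Y = stiefel_exp_canonical(X, Delta)   # endpoint stays on the Stiefel manifold
```

Note that only a $2p\times 2p$ exponential is needed, never an $n\times n$ one; the paper's reduced formula extends this economy to the whole metric family.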

 On the Change of the Weyr Characteristics of Matrix Pencils After Rank-One
Perturbations
Authors: Itziar Baragaña, Alicia Roca
Pages: 981-1002
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 981-1002, June 2022.
The change of the Kronecker structure of a matrix pencil perturbed by another pencil of rank one has been characterized in terms of the homogeneous invariant factors and the chains of column and row minimal indices of the initial and the perturbed pencils. We obtain here a new characterization in terms of the homogeneous invariant factors and the conjugate partitions of the corresponding chains of column and row minimal indices of both pencils. We also define the generalized Weyr characteristic of an arbitrary matrix pencil and obtain bounds for its change when the pencil is perturbed by another pencil of rank one. The results improve known results on the problem and hold for arbitrary rank-one perturbation pencils over any algebraically closed field.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-28T07:00:00Z
DOI: 10.1137/21M1416497
Issue No: Vol. 43, No. 2 (2022)
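For a single nilpotent matrix $N$ (the pencil specializes to $\lambda I - N$), the Weyr characteristic at the eigenvalue $0$ is the sequence $w_k = \operatorname{rank} N^{k-1} - \operatorname{rank} N^k$, which is the conjugate partition of the Jordan block sizes (the Segre characteristic). A small numpy sketch of this matrix-level notion (illustrative only; the generalized Weyr characteristic for arbitrary pencils is defined in the paper itself):

```python
import numpy as np

def weyr_characteristic(N):
    """Weyr characteristic of a nilpotent matrix N:
    w_k = dim ker N^k - dim ker N^{k-1} = rank N^{k-1} - rank N^k."""
    n = N.shape[0]
    w, P, prev_rank = [], np.eye(n), n   # rank of N^0 is n
    while True:
        P = P @ N
        r = np.linalg.matrix_rank(P)
        w.append(prev_rank - r)
        if r == 0:
            return w
        prev_rank = r

# Nilpotent matrix with Jordan blocks of sizes (2, 1):
# Segre characteristic (2, 1), so the Weyr characteristic is its conjugate (2, 1)
N = np.array([[0., 1, 0],
              [0., 0, 0],
              [0., 0, 0]])
```

For a single Jordan block of size 3 the Segre characteristic is (3) and the conjugate partition, hence the Weyr characteristic, is (1, 1, 1).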

 Fast Deterministic Approximation of Symmetric Indefinite Kernel Matrices
with High-Dimensional Datasets
Authors: Difeng Cai, James Nagy, Yuanzhe Xi
Pages: 1003-1028
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 1003-1028, June 2022.
Kernel methods are used frequently in various applications of machine learning. For large-scale, high-dimensional applications, the success of kernel methods hinges on the ability to operate with a certain large, dense kernel matrix $K$. An enormous amount of literature has been devoted to the study of symmetric positive semidefinite (SPSD) kernels, for which Nyström methods compute a low-rank approximation to the kernel matrix by choosing landmark points. In this paper, we study the Nyström method for approximating both symmetric indefinite kernel matrices and SPSD ones. We first develop a theoretical framework for general symmetric kernel matrices, which provides theoretical guidance for the selection of landmark points. We then leverage discrepancy theory to propose the anchor net method for computing accurate Nyström approximations with optimal complexity. The anchor net method operates entirely on the dataset without requiring access to $K$ or its matrix-vector products. Results on various types of kernels (both indefinite and SPSD) and machine learning datasets demonstrate that the new method achieves better accuracy and stability at lower computational cost than state-of-the-art Nyström methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-06-28T07:00:00Z
DOI: 10.1137/21M1424627
Issue No: Vol. 43, No. 2 (2022)
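A plain landmark-based Nyström approximation $K \approx C W^{+} C^{T}$ is the baseline that landmark-selection schemes such as the anchor net method improve upon. A minimal numpy sketch (the landmark choice here, simply the first few points, is an assumption for illustration, not the paper's discrepancy-based selection; the Gaussian RBF kernel used is SPSD, but the same formula is applied to indefinite kernels):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the row-point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, kernel):
    """Nystrom approximation K ~= C W^+ C^T built from landmark points only."""
    C = kernel(X, landmarks)            # n x m cross-kernel block
    W = kernel(landmarks, landmarks)    # m x m landmark kernel block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
K_approx = nystrom(X, X[:5], rbf_kernel)   # rank <= 5 approximation of the 20 x 20 K
```

Note the method never forms the full $K$: only the $n\times m$ and $m\times m$ blocks are evaluated, which is the source of the cost savings (if the landmarks are all of $X$, the identity $K K^{+} K = K$ makes the approximation exact).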
