SIAM Journal on Matrix Analysis and Applications
Journal Prestige (SJR): 1.739; Citation Impact (CiteScore): 2; Number of Followers: 3. Hybrid journal (may contain Open Access articles). ISSN (Print): 0895-4798; ISSN (Online): 1095-7162. Published by the Society for Industrial and Applied Mathematics.
 Geometric Inexact Newton Method for Generalized Singular Values of
Grassmann Matrix Pair
Authors: Wei-Wei Xu, Michael K. Ng, Zheng-Jian Bai
Pages: 535–560
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 535–560, June 2022.
In this paper, we first give new model formulations for computing an arbitrary generalized singular value of a Grassmann matrix pair or a real matrix pair. In these new formulations, we need to solve matrix optimization problems with unitary or orthogonal constraints. We propose a geometric inexact Newton-conjugate gradient (Newton-CG) method for solving the resulting matrix optimization problems. Under some mild assumptions, we establish the global and quadratic convergence of the proposed method for the complex case. Some numerical examples are given to illustrate the effectiveness and high accuracy of the proposed method.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-05T07:00:00Z
DOI: 10.1137/20M1383720
Issue No: Vol. 43, No. 2 (2022)

A Boundary-Layer Preconditioner for Singularly Perturbed Convection-Diffusion
Authors: Scott P. MacLachlan, Niall Madden, Thái Anh Nhan
Pages: 561–583
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 561–583, June 2022.
Motivated by a wide range of real-world problems whose solutions exhibit boundary and interior layers, the numerical analysis of discretizations of singularly perturbed differential equations is an established subdiscipline within the study of the numerical approximation of solutions to differential equations. Consequently, much is known about how to accurately and stably discretize such equations on a priori adapted meshes in order to properly resolve the layer structure present in their continuum solutions. However, despite being a key step in the numerical simulation process, much less is known about the efficient and accurate solution of the linear systems of equations corresponding to these discretizations. In this paper, we discuss problems associated with the application of direct solvers to these discretizations, and we propose a preconditioning strategy that is tuned to the matrix structure induced by using layer-adapted meshes for convection-diffusion equations, proving a strong condition-number bound on the preconditioned system in one spatial dimension and a weaker bound in two spatial dimensions. Numerical results confirm the efficiency of the resulting preconditioners in one and two dimensions, with time-to-solution of less than one second for representative problems on 1024 × 1024 meshes and up to 40× speedup over standard sparse direct solvers.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1443297
Issue No: Vol. 43, No. 2 (2022)

 Theoretical and Computable Optimal Subspace Expansions for Matrix
Eigenvalue Problems
Authors: Zhongxiao Jia
Pages: 584–604
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 584–604, June 2022.
Consider the optimal subspace expansion problem for the matrix eigenvalue problem $Ax=\lambda x$: Which vector $w$ in the current subspace $\mathcal{V}$, after being multiplied by $A$, provides an optimal subspace expansion for approximating a desired eigenvector $x$, in the sense that $x$ has the smallest angle with the expanded subspace $\mathcal{V}_w=\mathcal{V}+\mathrm{span}\{Aw\}$, i.e., $w_{opt}=\arg\max_{w\in\mathcal{V}}\cos\angle(\mathcal{V}_w,x)$? This problem is important as many iterative methods construct nested subspaces that successively expand $\mathcal{V}$ to $\mathcal{V}_w$. An expression of $w_{opt}$ was derived by Ye [Linear Algebra Appl., 428 (2008), pp. 911–918] for general $A$, but it could not be exploited to construct a computable (nearly) optimally expanded subspace. Ye instead derived a maximization characterization of $\cos\angle(\mathcal{V}_w,x)$ for a given $w\in \mathcal{V}$ when $A$ is Hermitian. We generalize Ye's maximization characterization to the general case and find its maximizer. Our main contributions consist of explicit expressions of $w_{opt}$, $(I-P_V)Aw_{opt}$, and the optimally expanded subspace $\mathcal{V}_{w_{opt}}$ for general $A$, where $P_V$ is the orthogonal projector onto $\mathcal{V}$. These results are fully exploited to obtain computable optimally expanded subspaces within the framework of the standard, harmonic, refined, and refined harmonic Rayleigh-Ritz methods. We show how to efficiently implement the proposed subspace expansion approaches. Numerical experiments demonstrate the effectiveness of our computable optimal expansions.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/20M1331032
Issue No: Vol. 43, No. 2 (2022)

Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations

Authors: Jamie Haddock, Deanna Needell, Elizaveta Rebrova, William Swartworth
Pages: 605–637
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 605–637, June 2022.
Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $Ax = b$ that are inconsistent due to corruptions in the measurement vector $b$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1429187
Issue No: Vol. 43, No. 2 (2022)
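The quantile-based update at the heart of these methods is easy to illustrate. Below is a minimal NumPy sketch of the idea (my own rendering, not the authors' code; `quantile_rk` and its parameters are hypothetical names): at each step a row is sampled, and the Kaczmarz update is applied only when that row's residual does not exceed the $q$-quantile of all absolute residuals, so rows whose entries of $b$ are corrupted are mostly skipped.

```python
import numpy as np

def quantile_rk(A, b, q=0.7, iters=20000, seed=0):
    """Sketch of quantile-based randomized Kaczmarz: skip updates whose
    residual exceeds the q-quantile of |residuals|, so that rows with
    corrupted entries of b are (mostly) ignored."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = (A * A).sum(axis=1)
    for _ in range(iters):
        r = A @ x - b                      # full residual (for the quantile)
        thresh = np.quantile(np.abs(r), q)
        i = rng.integers(m)                # sample a row uniformly
        if abs(r[i]) <= thresh:            # accept only sub-quantile residuals
            x -= (r[i] / row_norms2[i]) * A[i]
    return x

# demo: consistent system with a few large corruptions in b
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
b[:10] += 50.0                             # corrupt 5% of the measurements
x = quantile_rk(A, b)
print(np.linalg.norm(x - x_true))          # small despite the corruptions
```

For clarity the sketch recomputes the full residual, and hence the quantile, at every step, which costs O(mn) per iteration; the paper's variants use cheaper sampled quantile estimates.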

 Mixed Precision Recursive Block Diagonalization for Bivariate Functions of
Matrices
Authors: Stefano Massei, Leonardo Robol
Pages: 638–660
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 638–660, June 2022.
Various numerical linear algebra problems can be formulated as evaluating a bivariate function of matrices. The most notable examples are the Fréchet derivative along a direction, the evaluation of (univariate) functions of Kronecker-sum-structured matrices, and the solution of Sylvester matrix equations. In this work, we propose a recursive block diagonalization algorithm for computing bivariate functions of matrices of small to medium size, for which dense linear algebra is appropriate. The algorithm combines a blocking strategy, as in the Schur-Parlett scheme, with an evaluation procedure for the diagonal blocks. We discuss two implementations of the latter. The first is a natural choice based on Taylor expansions, whereas the second is derivative-free and relies on a multiprecision perturb-and-diagonalize approach. In particular, the appropriate use of multiprecision guarantees backward stability without affecting efficiency in the generic case, which makes the second approach more robust. The whole method has cubic complexity, and it is closely related to the well-known Bartels-Stewart algorithm for Sylvester matrix equations when applied to $f(x,y)=\frac{1}{x+y}$. We validate the performance of the proposed numerical method on several problems with different conditioning properties.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-19T07:00:00Z
DOI: 10.1137/21M1407872
Issue No: Vol. 43, No. 2 (2022)
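The connection to Sylvester equations mentioned in the abstract can be made concrete: for $f(x,y)=\frac{1}{x+y}$, applying the bivariate function against $C$ amounts to solving $AX+XB=C$, which for small dense matrices can be written through the Kronecker sum, using column-major vec identities $\mathrm{vec}(AX)=(I\otimes A)\mathrm{vec}(X)$ and $\mathrm{vec}(XB)=(B^T\otimes I)\mathrm{vec}(X)$. A minimal NumPy sketch (not the paper's recursive algorithm; the Kronecker system is $n^2 \times n^2$, so this is for tiny examples only):

```python
import numpy as np

# Solve the Sylvester equation A X + X B = C via the Kronecker sum:
#   (I (x) A + B^T (x) I) vec(X) = vec(C),  with column-major vec.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifts keep eig(A)+eig(B)
B = rng.standard_normal((n, n)) + n * np.eye(n)  # away from zero
C = rng.standard_normal((n, n))

K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((n, n), order="F")

print(np.linalg.norm(A @ X + X @ B - C))         # residual, ~ roundoff level
```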

A Block Bidiagonalization Method for Fixed-Accuracy Low-Rank Matrix Approximation
Authors: Eric Hallman
Pages: 661–680
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 661–680, June 2022.
We present randUBV, a randomized algorithm for matrix sketching based on the block Lanczos bidiagonalization process. Given a matrix $\mathbb{A}$, it produces a low-rank approximation of the form $\mathbb{UBV}^T$, where $\mathbb{U}$ and $\mathbb{V}$ have orthonormal columns in exact arithmetic and $\mathbb{B}$ is block bidiagonal. In finite precision, the columns of both $\mathbb{U}$ and $\mathbb{V}$ will be close to orthonormal. Our algorithm is closely related to the randQB algorithms of Yu, Gu, and Li [SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339–1359] in that the entries of $\mathbb{B}$ are incrementally generated and the Frobenius norm approximation error may be efficiently estimated. It is therefore suitable for the fixed-accuracy problem and so is designed to terminate as soon as a user-specified error tolerance is reached. Numerical experiments suggest that the block Lanczos method is generally competitive with or superior to algorithms that use power iteration, even when $\mathbb{A}$ has significant clusters of singular values.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-27T07:00:00Z
DOI: 10.1137/21M1397866
Issue No: Vol. 43, No. 2 (2022)

Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One Eigenvalue Problems
Authors: Jesse L. Barlow, Stanley C. Eisenstat, Nevena Jakovčević Stor, Ivan Slapnicar
Pages: 681–709
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 681–709, June 2022.
We discuss the eigenproblem for the symmetric arrowhead matrix $C = \left(\begin{smallmatrix} D & z \\ z^T & \alpha \end{smallmatrix}\right)$, where $D \in \mathbb{R}^{n \times n}$ is diagonal, $z \in \mathbb{R}^n$, and $\alpha \in \mathbb{R}$, in order to examine criteria for when components of $z$ may be set to zero. We show that whenever two eigenvalues of $C$ are sufficiently close, some component of $z$ may be deflated to zero, without significantly perturbing the eigenvalues of $C$, by either substituting zero for that component or performing a Givens rotation on each side of $C$. The strategy for this deflation requires $\mathcal{O}(n^2)$ comparisons. Although it is too costly for many applications, when we use it as a benchmark, we can analyze the effectiveness of $\mathcal{O}(n)$ heuristics that are more practical approaches to deflation. We show that one such $\mathcal{O}(n)$ heuristic finds all sets of three or more nearby eigenvalues, misses sets of two nearby eigenvalues only under limited circumstances, and produces a reduced matrix whose eigenvalues are distinct in double the working precision. Using the $\mathcal{O}(n)$ heuristic, we develop a more aggressive method for finding converged eigenvalues in the symmetric Lanczos algorithm. It is shown that, except for pathological exceptions, the $\mathcal{O}(n)$ heuristic finds nearly as much deflation as the $\mathcal{O}(n^2)$ algorithm that reduces an arrowhead matrix to one that cannot be deflated further. The deflation algorithms and their analysis are extended to the symmetric diagonal-plus-rank-one eigenvalue problem and lead to a better deflation strategy for the LAPACK routine dstedc.f.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-02T07:00:00Z
DOI: 10.1137/21M139205X
Issue No: Vol. 43, No. 2 (2022)
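The simplest deflation discussed above, substituting zero for a component of $z$, can be checked numerically: the perturbation matrix has 2-norm $|z_i|$, so by Weyl's theorem no eigenvalue moves by more than $|z_i|$. A minimal NumPy illustration of this zero-substitution case (my own toy example, not the papers' nearby-eigenvalue criterion or Givens-based deflation):

```python
import numpy as np

def arrowhead(d, z, alpha):
    """Assemble the symmetric arrowhead matrix C = [[diag(d), z], [z^T, alpha]]."""
    n = len(d)
    C = np.zeros((n + 1, n + 1))
    C[:n, :n] = np.diag(d)
    C[:n, n] = z
    C[n, :n] = z
    C[n, n] = alpha
    return C

d = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.5, 1e-10, 0.3, 0.7])   # z[1] is negligible
C = arrowhead(d, z, 5.0)

z_defl = z.copy()
z_defl[1] = 0.0                        # deflate: eigenvalue d[1] = 2 splits off
C_defl = arrowhead(d, z_defl, 5.0)

# Weyl's bound: sorted spectra differ by at most ||C - C_defl||_2 = |z[1]|
gap = np.max(np.abs(np.linalg.eigvalsh(C) - np.linalg.eigvalsh(C_defl)))
print(gap)                             # bounded by |z[1]| = 1e-10 (plus roundoff)
```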

 A Block Minimum Residual Norm Subspace Solver with Partial Convergence
Management for Sequences of Linear Systems
Authors: Luc Giraud, YanFei Jing, Yanfei Xiang
Pages: 710–739
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 710–739, June 2022.
We are concerned with the iterative solution of linear systems with multiple right-hand sides available one group after another, with possibly slowly varying left-hand sides. For such sequences of linear systems, we first develop a new block minimum norm residual approach that combines two main ingredients. The first exploits ideas from GCRO-DR [Parks et al., SIAM J. Sci. Comput., 28 (2006), pp. 1651–1674], enabling us to recycle information from one solve to the next. The second is the numerical mechanism for managing the partial convergence of the right-hand sides, referred to as inexact breakdown detection in IB-BGMRES [Robbé and Sadkane, Linear Algebra Appl., 419 (2006), pp. 265–285], which monitors the rank deficiency in the residual space basis expanded blockwise. Next, for the class of block minimum norm residual approaches that rely on a block Arnoldi-like equality between the search space and the residual space (e.g., any block GMRES or block GCRO variant), we introduce new search space expansion policies based on novel criteria for detecting partial convergence. These criteria are tuned to the selected normwise backward error stopping criterion and its convergence threshold, enabling us to monitor the computational effort while ensuring the final accuracy of each individual solution. Numerical experiments are reported to illustrate the numerical and computational features of both the new block Krylov solvers and the new block search space expansion policies.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-03T07:00:00Z
DOI: 10.1137/21M1401127
Issue No: Vol. 43, No. 2 (2022)

Estimation of Structured Distances to Singularity for Matrix Pencils with Symmetry Structures: A Linear Algebra-Based Approach
Authors: Anshul Prajapati, Punit Sharma
Pages: 740–763
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 740–763, June 2022.
We study the structured distance to singularity for a given regular matrix pencil $A+sE$, where $(A,E)\in \mathbb{S} \subseteq (\mathbb{C}^{n,n})^2$. This includes Hermitian, skew-Hermitian, $*$-even, $*$-odd, $*$-palindromic, $T$-palindromic, and dissipative Hamiltonian pencils. We present a purely linear algebra-based approach to derive explicit computable formulas for the distance to the nearest structured pencil $(A-\Delta_A)+s(E-\Delta_E)$ such that $A-\Delta_A$ and $E-\Delta_E$ have a common null vector. We then obtain a family of computable lower bounds for the unstructured and structured distances to singularity. Numerical experiments suggest that in many cases there is a significant difference between the structured and unstructured distances. This approach extends to structured matrix polynomials of higher degree.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/21M1423269
Issue No: Vol. 43, No. 2 (2022)

 Linearizable Eigenvector Nonlinearities

Authors: Rob Claes, Elias Jarlebring, Karl Meerbergen, Parikshit Upadhyaya
Pages: 764–786
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 764–786, June 2022.
We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem, this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce the computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme based on a combination of the two proposed methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-10T07:00:00Z
DOI: 10.1137/21M142931X
Issue No: Vol. 43, No. 2 (2022)

Error Bounds for Lanczos-Based Matrix Function Approximation

Authors: Tyler Chen, Anne Greenbaum, Cameron Musco, Christopher Musco
Pages: 787–811
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Pages 787–811, June 2022.
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(A)b$ when $A$ is a Hermitian matrix and $b$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $A$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $b^{H} f(A) b$ and demonstrate the effectiveness of our bounds with numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-12T07:00:00Z
DOI: 10.1137/21M1427784
Issue No: Vol. 43, No. 2 (2022)
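The Lanczos-FA iterate analyzed in this paper has a compact standard form: after $k$ Lanczos steps producing an orthonormal basis $Q_k$ and tridiagonal $T_k$, one takes $f(A)b \approx \|b\|\, Q_k f(T_k) e_1$. A minimal NumPy sketch of that iterate (full reorthogonalization for clarity; function names are mine, and this is the textbook recurrence, not the paper's error-bound machinery):

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    """Lanczos-FA sketch: run k Lanczos steps, then approximate
    f(A) b by ||b|| * Q_k f(T_k) e_1, with f(T_k) via eigh(T_k)."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, V = np.linalg.eigh(T)
    fT_e1 = V @ (f(evals) * V[0])                  # f(T_k) e_1
    return np.linalg.norm(b) * (Q @ fT_e1)

# demo: f = exp on a small random Hermitian matrix
rng = np.random.default_rng(0)
n = 100
M = rng.standard_normal((n, n))
A = (M + M.T) / np.sqrt(n)
b = rng.standard_normal(n)
evals, V = np.linalg.eigh(A)
exact = V @ (np.exp(evals) * (V.T @ b))
approx = lanczos_fa(A, b, 30, np.exp)
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```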

Input-Tailored System-Theoretic Model Order Reduction for Quadratic-Bilinear Systems
Authors: Björn LiljegrenSailer, Nicole Marheineke
Pages: 1–39
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 1–39, January 2022.
In this paper we suggest a moment matching method for quadratic-bilinear dynamical systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored toward user-predefined families of inputs. Moment matching then corresponds to a one-dimensional interpolation problem rather than the multidimensional interpolation of the multivariate approaches, so fewer interpolation frequencies need to be chosen. Compared to former contributions toward nonlinear model reduction with univariate frequency representations, our approach shows profound differences: our derivation is more rigorous and general, and it reveals additional tensor-structured approximation conditions that should be incorporated. Moreover, the proposed implementation exploits the inherent low-rank tensor structure, which enhances its efficiency. In addition, our approach allows for the elegant incorporation of more general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods. As a byproduct, we also obtain a novel modification of the multivariate methods that can handle these more general input relations.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-06T08:00:00Z
DOI: 10.1137/18M1216699
Issue No: Vol. 43, No. 1 (2022)

Approximate Generalized Inverses with Iterative Refinement for $\epsilon$-Accurate Preconditioning of Singular Systems
Authors: Xiangmin Jiao, Qiao Chen
Pages: 40–67
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 40–67, January 2022.
We introduce a new class of preconditioners to enable flexible GMRES to find a least-squares solution, and potentially the pseudoinverse solution, of large-scale sparse, asymmetric, singular, and potentially inconsistent systems. We develop the preconditioners based on a new observation that generalized inverses (i.e., $A^{g}\in\{G\mid AGA=A\}$) enable the preconditioned Krylov subspaces to converge in a single step. We then compute an approximate generalized inverse (AGI) efficiently using a hybrid incomplete factorization (HIF), which combines multilevel incomplete LU with rank-revealing QR on its final Schur complement. We define criteria of $\epsilon$-accuracy and stability for AGI to guarantee the convergence of preconditioned GMRES for consistent systems. For inconsistent systems, we fortify HIF with iterative refinement to obtain HIFIR, which allows accurate computation of the null-space vectors. By combining the two techniques, we then obtain a new solver, called PIPIT, for obtaining pseudoinverse solutions of systems with low-dimensional null spaces. We demonstrate the robustness of HIF and HIFIR and show that they improve both the accuracy and efficiency of the prior state of the art by orders of magnitude for systems with up to a million unknowns.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-10T08:00:00Z
DOI: 10.1137/20M1364126
Issue No: Vol. 43, No. 1 (2022)
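The "single step" observation behind these preconditioners is elementary to verify: if $G$ is any generalized inverse ($AGA=A$) and $b=Ay$ is consistent, then $A(Gb)=AGAy=Ay=b$, so $x=Gb$ already solves the singular system. A tiny NumPy check using the Moore-Penrose pseudoinverse as one convenient generalized inverse (an illustration of the observation only, not of the HIF/HIFIR construction):

```python
import numpy as np

# Any generalized inverse G (A G A = A) solves a *consistent* singular
# system in one application: b = A y  =>  A (G b) = A G A y = A y = b.
rng = np.random.default_rng(0)
m, n, r = 8, 6, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-deficient
G = np.linalg.pinv(A)            # the pseudoinverse is one generalized inverse
assert np.allclose(A @ G @ A, A)

b = A @ rng.standard_normal(n)   # consistent right-hand side
x = G @ b                        # "one-step" solution
print(np.linalg.norm(A @ x - b)) # residual, ~ roundoff level
```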

 Twice Is Enough for Dangerous Eigenvalues

Authors: Andrew Horning, Yuji Nakatsukasa
Pages: 68–93
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 68–93, January 2022.
We analyze the stability of a class of eigensolvers that target interior eigenvalues with rational filters. We show that subspace iteration with a rational filter is robust even when an eigenvalue is near a filter's pole. These dangerous eigenvalues contribute to large roundoff errors in the first iteration but are self-correcting in later iterations. For matrices with orthogonal eigenvectors (e.g., real symmetric or complex Hermitian matrices), two iterations are enough to reduce roundoff errors to the order of the unit roundoff. In contrast, Krylov methods accelerated by rational filters with fixed poles typically fail to converge to unit roundoff accuracy when an eigenvalue is close to a pole. In the context of Arnoldi with shift-and-invert enhancement, we demonstrate a simple restart strategy that recovers full precision in the target eigenpairs.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-11T08:00:00Z
DOI: 10.1137/20M1385330
Issue No: Vol. 43, No. 1 (2022)
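The "twice is enough" phenomenon can be observed with the simplest single-pole filter: subspace iteration with $(A-sI)^{-1}$ where the pole $s$ sits almost exactly on an eigenvalue. A minimal NumPy sketch (my own toy experiment in the spirit of the paper, not its algorithm): the first solve is wildly ill-conditioned, yet after two filtered iterations with re-orthogonalization the Rayleigh-Ritz value matches the target eigenvalue to high accuracy.

```python
import numpy as np

# Subspace iteration with the rational filter (A - s I)^{-1}, pole s placed
# "dangerously" close to an eigenvalue of a real symmetric A.
rng = np.random.default_rng(0)
n, k = 200, 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
evals = np.linalg.eigvalsh(A)
target = evals[n // 2]                 # an interior eigenvalue
s = target + 1e-12                     # pole nearly on top of it

V = rng.standard_normal((n, k))
for _ in range(2):                     # "twice is enough"
    V = np.linalg.solve(A - s * np.eye(n), V)   # apply the filter
    V, _ = np.linalg.qr(V)             # re-orthogonalize the basis

# Rayleigh-Ritz on the filtered subspace
ritz = np.linalg.eigvalsh(V.T @ A @ V)
err = np.min(np.abs(ritz - target))
print(err)                             # small: the dangerous pole self-corrects
```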

 Hierarchical Orthogonal Factorization: Sparse Square Matrices

Authors: Abeynaya Gnanasekaran, Eric Darve
Pages: 94–123
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 94–123, January 2022.
In this work, we develop a new fast algorithm, spaQR (sparsified QR), for solving large, sparse linear systems. The key to our approach lies in using low-rank approximations to sparsify the separators in a nested-dissection-based Householder QR factorization. First, a modified version of nested dissection is used to identify vertex separators and reorder the matrix. Then, classical Householder QR is used to factorize the separators, going from the leaves to the top of the elimination tree. After every level of separator factorization, we sparsify all the remaining separators by using low-rank approximations. This operation reduces the size of the separators without introducing any fill-in in the matrix. However, it introduces a small approximation error, which can be controlled by the user. The resulting approximate factorization is stored as a sequence of sparse orthogonal and sparse upper-triangular factors. Hence, it can be applied efficiently to solve linear systems. We further improve the algorithm by using a block diagonal scaling. Then, we show a systematic analysis of the approximation error and the effectiveness of the algorithm in solving linear systems. Finally, we perform numerical tests on benchmark unsymmetric problems to evaluate the performance of the algorithm. The factorization time scales as $\mathcal{O}(N \log N)$ and the solve time scales as $\mathcal{O}(N)$.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-24T08:00:00Z
DOI: 10.1137/20M1373475
Issue No: Vol. 43, No. 1 (2022)

 Multidimensional Total Least Squares Problem with Linear Equality
Constraints
Authors: Qiaohua Liu, Zhigang Jia, Yimin Wei
Pages: 124–150
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 124–150, January 2022.
Many recent data analysis models are mathematically characterized by a multidimensional total least squares problem with linear equality constraints (TLSE). In this paper, an explicit solution of the multidimensional TLSE problem is first derived, along with its solvability conditions. By applying the perturbation theory of invariant subspaces, the multidimensional TLSE problem is proved to be equivalent, in the limit sense, to a multidimensional unconstrained weighted total least squares problem. Kronecker-product-based formulae are also given for the normwise, mixed, and componentwise condition numbers of the minimum Frobenius norm solution of the multidimensional TLSE problem, and computable upper bounds are provided to reduce the storage and computational cost. All these results apply to the single right-hand-side case and to the multidimensional total least squares problem, which are two special cases of the multidimensional TLSE problem. In numerical experiments, the multidimensional TLSE model is successfully applied to color image deblurring and denoising for the first time, and the numerical results also indicate the effectiveness of the condition numbers.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/21M1400420
Issue No: Vol. 43, No. 1 (2022)

Divide-and-Conquer Methods for Functions of Matrices with Banded or Hierarchical Low-Rank Structure
Authors: Alice Cortinovis, Daniel Kressner, Stefano Massei
Pages: 151–177
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 151–177, January 2022.
This work is concerned with approximating matrix functions for banded matrices, hierarchically semiseparable matrices, and related structures. We develop a new divide-and-conquer method based on (rational) Krylov subspace methods for performing low-rank updates of matrix functions. Our convergence analysis of the newly proposed method proceeds by establishing relations to best polynomial and rational approximation. When only the trace or the diagonal of the matrix function is of interest, we demonstrate, in practice and in theory, that convergence can be faster. For the special case of a banded matrix, we show that the divide-and-conquer method reduces to a much simpler algorithm, which proceeds by computing matrix functions of small submatrices. Numerical experiments confirm the effectiveness of the newly developed algorithms for computing large-scale matrix functions from a wide variety of applications.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/21M1432594
Issue No: Vol. 43, No. 1 (2022)

Fast Approximation of the $p$-Radius, Matrix Pressure, or Generalized Lyapunov Exponent for Positive and Dominated Matrices
Authors: Ian D. Morris
Pages: 178–198
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Pages 178–198, March 2022.
If $A_1,\ldots,A_N$ are real $d \times d$ matrices, then the $p$-radius, generalized Lyapunov exponent, or matrix pressure is defined to be the asymptotic exponential growth rate of the sum $\sum_{i_1,\ldots,i_n=1}^N \|A_{i_n}\cdots A_{i_1}\|^p$, where $p$ is a real parameter. Under its various names this quantity has been investigated for its applications to topics including wavelet regularity and refinement equations, fractal geometry, and the large deviations theory of random matrix products. In this article we present a new algorithm for computing the $p$-radius under the hypothesis that the matrices are all positive (or, more generally, that they satisfy a weaker condition called domination) and of very low dimension. This algorithm is based on interpreting the $p$-radius as the leading eigenvalue of a trace-class operator on a Hilbert space and estimating that eigenvalue via approximations to the Fredholm determinant of the operator. In this respect our method is closely related to the work of Z.-Q. Bai and M. Pollicott on computing the top Lyapunov exponent of a random matrix product. For pairs of positive matrices of dimension two our method yields substantial improvements over existing methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 20220210T08:00:00Z
DOI: 10.1137/19M1303964
Issue No: Vol. 43, No. 1 (2022)
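For small examples the defining growth rate can be estimated directly by enumerating words, which makes the quantity concrete. The brute-force sketch below is our own illustration (with cost exponential in the word length); it is exactly the kind of computation the paper's Fredholm-determinant approach is designed to beat, and the two example matrices are made up.

```python
import itertools
import numpy as np

def p_sum_growth(mats, p, n):
    """Naive estimate (1/n) log S_n of the growth rate of
    S_n = sum over words (i_1, ..., i_n) of ||A_{i_n} ... A_{i_1}||^p."""
    total = 0.0
    for word in itertools.product(range(len(mats)), repeat=n):
        prod = np.eye(mats[0].shape[0])
        for i in word:
            prod = mats[i] @ prod
        total += np.linalg.norm(prod, 2) ** p
    return np.log(total) / n

# Two positive 2x2 matrices (hypothetical example data).
A1 = np.array([[2.0, 1.0], [1.0, 1.0]])
A2 = np.array([[1.0, 1.0], [1.0, 2.0]])
r6 = p_sum_growth([A1, A2], p=1.0, n=6)
r7 = p_sum_growth([A1, A2], p=1.0, n=7)
# Successive estimates stabilize only slowly in n.
```

The positivity of the matrices is what guarantees, via the operator interpretation, that far faster convergence is achievable.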

 Multiply Accelerated Value Iteration for Non-Symmetric Affine Fixed Point Problems and Application to Markov Decision Processes
Authors: Marianne Akian, Stéphane Gaubert, Zheng Qu, Omar Saadi
Pages: 199–232
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 199–232, March 2022.
We analyze a modified version of the Nesterov accelerated gradient algorithm, which applies to affine fixed point problems with non-self-adjoint matrices, such as the ones appearing in the theory of Markov decision processes with discounted or mean payoff criteria. We characterize the spectra of matrices for which this algorithm does converge with an accelerated asymptotic rate. We also introduce a $d$th-order algorithm and show that it yields a multiply accelerated rate under more demanding conditions on the spectrum. We subsequently apply these methods to develop accelerated schemes for nonlinear fixed point problems arising from Markov decision processes. This is illustrated by numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-15T08:00:00Z
DOI: 10.1137/20M1367192
Issue No: Vol. 43, No. 1 (2022)
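The effect of extrapolation on an affine fixed-point iteration is easy to see on a small example. The sketch below is our own simplified illustration, not the authors' scheme: it applies a Nesterov-style momentum step to x ← Mx + b for a symmetric M with spectrum in [0, 0.9], a benign setting where acceleration is known to help (the paper's point is precisely to characterize which non-self-adjoint spectra still allow this); the momentum value 0.5 is an ad hoc choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = (Q * rng.uniform(0.0, 0.9, n)) @ Q.T        # symmetric, spectrum in [0, 0.9]
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, b)      # exact fixed point

def iterate(M, b, iters, momentum=0.0):
    """Value iteration x <- M y + b with extrapolation y = x + beta (x - x_prev)."""
    x = np.zeros_like(b)
    x_prev = x.copy()
    for _ in range(iters):
        y = x + momentum * (x - x_prev)         # extrapolation step
        x_prev, x = x, M @ y + b                # fixed-point update
    return x

err_plain = np.linalg.norm(iterate(M, b, 40) - x_star)
err_accel = np.linalg.norm(iterate(M, b, 40, momentum=0.5) - x_star)
```

With momentum the per-eigenvalue two-step recurrence has roots of modulus roughly the square root of the plain contraction factor, which is the accelerated-rate phenomenon the paper analyzes for general spectra.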

 Arbitrary Precision Algorithms for Computing the Matrix Cosine and its
Fréchet Derivative
Authors: Awad H. AlMohy, Nicholas J. Higham, Xiaobo Liu
Pages: 233–256
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 233–256, March 2022.
Existing algorithms for computing the matrix cosine are tightly coupled to a specific precision of floating-point arithmetic for optimal efficiency, so they do not conveniently extend to an arbitrary precision environment. We develop an algorithm for computing the matrix cosine that takes the unit roundoff of the working precision as input, and so works in arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovering, and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Fréchet derivative, construct an efficient evaluation scheme for computing the cosine and its Fréchet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, cosine, and their Fréchet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version of the algorithm for computing the cosine is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy in working precisions higher than double.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-17T08:00:00Z
DOI: 10.1137/21M1441043
Issue No: Vol. 43, No. 1 (2022)
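The scaling-and-recovering idea is short to sketch in fixed (double) precision. Below is a toy version of the standard approach, not the paper's algorithm: the paper derives the Taylor degree and scaling parameter from the unit roundoff of the working precision, whereas here both are hard-coded, and the test matrix is made up.

```python
import numpy as np

def cos_taylor(X, m=8):
    """Truncated Taylor series sum_{k=0}^{m} (-1)^k X^{2k} / (2k)!."""
    X2 = X @ X
    term = np.eye(X.shape[0])
    C = term.copy()
    for k in range(1, m + 1):
        term = term @ X2 / ((2 * k - 1) * (2 * k))
        C = C + (-1) ** k * term
    return C

def cos_scaled(A, s=4, m=8):
    """Scaling phase: cos(A / 2^s) by Taylor; recovering phase:
    repeated double-angle formula cos(2X) = 2 cos(X)^2 - I."""
    C = cos_taylor(A / 2.0**s, m)
    for _ in range(s):
        C = 2.0 * C @ C - np.eye(A.shape[0])
    return C

B = np.array([[0.5, 0.2], [0.2, 0.3]])   # small symmetric test matrix
approx = cos_scaled(B)
```

For a symmetric argument the result can be checked against the eigendecomposition of B; the arbitrary-precision algorithm replaces the fixed choices s = 4, m = 8 with roundoff-dependent ones.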

 A Sampling Algorithm to Compute the Set of Feasible Solutions for Non-Negative Matrix Factorization with an Arbitrary Rank
Authors: Ragnhild Laursen, Asger Hobolth
Pages: 257–273
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 257–273, March 2022.
Nonnegative matrix factorization (NMF) is a useful method to extract features from multivariate data, but an important and sometimes neglected concern is that NMF can result in nonunique solutions. Often, there exists a set of feasible solutions (SFS), which makes it more difficult to interpret the factorization. This problem is especially ignored in cancer genomics, where NMF is used to infer information about the mutational processes present in the evolution of cancer. In this paper the extent of nonuniqueness is investigated for two mutational count data sets, and a new sampling algorithm that can find the SFS is introduced. Our sampling algorithm is easy to implement and applies to an arbitrary rank of NMF. This is in contrast to the state of the art, where the NMF rank must be smaller than or equal to four. For lower ranks we show that our algorithm performs similarly to the polygon inflation algorithm that was developed in relation to chemometrics. Furthermore, we show how the size of the SFS can have a high influence on the apparent variability of a solution. Our sampling algorithm is implemented in the R package SFS (https://github.com/ragnhildlaursen/SFS).
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-22T08:00:00Z
DOI: 10.1137/20M1378971
Issue No: Vol. 43, No. 1 (2022)
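The non-uniqueness at issue is easy to exhibit: any invertible T that keeps both W T and T⁻¹ H nonnegative produces a second factorization of the same matrix, and the SFS collects all such factorizations. The numbers below are a made-up toy example, unrelated to the mutational count data in the paper.

```python
import numpy as np

W = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])
H = np.array([[1.0, 0.2, 0.4],
              [0.1, 1.0, 0.6]])
V = W @ H                           # the data matrix being factorized

# A shear transformation and its exact inverse.
T = np.array([[1.0, 0.1],
              [0.0, 1.0]])
Tinv = np.array([[1.0, -0.1],
                 [0.0, 1.0]])

# W2 H2 = W (T Tinv) H = V, and both factors remain nonnegative,
# so (W2, H2) is a distinct point in the SFS.
W2, H2 = W @ T, Tinv @ H
```

The sampling algorithm in the paper explores the set of admissible transformations T systematically, for any rank rather than only rank at most four.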

 A Recursive Eigenspace Computation for the Canonical Polyadic
Decomposition
Authors: Eric Evert, Michiel Vandecappelle, Lieven De Lathauwer
Pages: 274–300
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 274–300, March 2022.
The canonical polyadic decomposition (CPD) is a compact decomposition which expresses a tensor as a sum of its rank-1 components. A common step in the computation of a CPD is computing a generalized eigenvalue decomposition (GEVD) of the tensor. A GEVD provides an algebraic approximation of the CPD which can then be used as an initialization in optimization routines. While in the noiseless setting GEVD exactly recovers the CPD, it has recently been shown that pencil-based computations such as GEVD are not stable. In this article we present an algebraic method for approximation of a CPD which greatly improves the accuracy of GEVD. Our method is still fundamentally pencil based; however, rather than using a single pencil and computing all of its generalized eigenvectors, we use many different pencils and in each pencil compute generalized eigenspaces corresponding to sufficiently well-separated generalized eigenvalues. The resulting “generalized eigenspace decomposition” is significantly more robust to noise than the classical GEVD. Accuracy of the generalized eigenspace decomposition is examined both empirically and theoretically. In particular, we provide a deterministic perturbation theoretic bound which is predictive of error in the computed factorization.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-24T08:00:00Z
DOI: 10.1137/21M1423026
Issue No: Vol. 43, No. 1 (2022)
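The pencil idea behind GEVD can be sketched in the noiseless setting: two slices of a rank-R tensor share the factor A, so the eigenvectors of one slice times the pseudoinverse of the other recover A's columns. The example below is our own toy version of this classical baseline (whose instability under noise motivates the paper's eigenspace method); all factor matrices and slice weights are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
I, R = 4, 3
Afac = rng.standard_normal((I, R))           # factor to be recovered
Bfac = rng.standard_normal((I, R))
c1 = np.array([1.0, 2.0, 3.0])               # weights of slice 1
c2 = np.array([3.0, 2.0, 1.0])               # weights of slice 2

# Two slices of the rank-3 tensor: T_k = A diag(c_k) B^T.
T1 = Afac @ np.diag(c1) @ Bfac.T
T2 = Afac @ np.diag(c2) @ Bfac.T

# Pencil computation: T1 pinv(T2) = A diag(c1/c2) pinv(A) on its range,
# so the eigenvectors with nonzero eigenvalues are the columns of A
# and the eigenvalues are the ratios c1/c2.
M = T1 @ np.linalg.pinv(T2)
vals, vecs = np.linalg.eig(M)
```

When the ratios c1/c2 are close together, these eigenvectors become ill-conditioned; the paper's remedy is to compute eigenspaces of well-separated eigenvalue clusters across many pencils instead.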

 Sharp Estimation of Convergence Rate for Self-Consistent Field Iteration to Solve Eigenvector-Dependent Nonlinear Eigenvalue Problems
Authors: Zhaojun Bai, RenCang Li, Ding Lu
Pages: 301–327
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 301–327, March 2022.
We present a comprehensive convergence analysis for the self-consistent field (SCF) iteration used to solve a class of nonlinear eigenvalue problems with eigenvector dependency (NEPv). Using the tangent-angle matrix as an intermediate measure for the approximation error, we establish new formulas for two fundamental quantities that characterize the local convergence behavior of the plain SCF: the local contraction factor and the local asymptotic average contraction factor. In comparison with previously established results, the new convergence rate estimates provide much sharper bounds on the convergence speed. As an application, we extend the convergence analysis to a popular SCF variant, the level-shifted SCF. The effectiveness of the convergence rate estimates is demonstrated numerically for NEPv arising from solving the Kohn-Sham equation in electronic structure calculations and the Gross-Pitaevskii equation for modeling Bose-Einstein condensation.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/20M136606X
Issue No: Vol. 43, No. 1 (2022)
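The plain SCF iteration analyzed here is short to state in code. The sketch below solves a made-up NEPv with a weak diagonal nonlinearity A(V) = L + α diag(ρ(V)), ρ(V) = diag(V Vᵀ); it is loosely modeled on discretized Gross-Pitaevskii-type problems but is not one of the paper's test problems, and L, α, and the sizes are arbitrary choices.

```python
import numpy as np

n, k, alpha = 20, 2, 0.1
rng = np.random.default_rng(1)
# Fixed symmetric part with well-separated low eigenvalues.
L = np.diag(np.arange(1.0, n + 1)) + 0.2 * (np.eye(n, k=1) + np.eye(n, k=-1))

def A(V):
    """Eigenvector-dependent coefficient matrix A(V) = L + alpha diag(rho(V))."""
    return L + alpha * np.diag(np.sum(V**2, axis=1))

# Plain SCF: repeatedly take the k lowest eigenvectors of A(V).
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
for _ in range(50):
    _, Q = np.linalg.eigh(A(V))
    V_new = Q[:, :k]
    if np.linalg.norm(V_new @ V_new.T - V @ V.T) < 1e-12:
        break
    V = V_new

# Residual of the NEPv A(V) V = V (V^T A(V) V) at the computed subspace.
AV = A(V) @ V
residual = np.linalg.norm(AV - V @ (V.T @ AV))
```

The weak nonlinearity relative to the eigenvalue gap is what makes the plain SCF contract here; the paper's contraction factors quantify exactly this kind of dependence.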

 Guarantees for Existence of a Best Canonical Polyadic Approximation of a Noisy Low-Rank Tensor
Authors: Eric Evert, Lieven De Lathauwer
Pages: 328–369
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 328–369, March 2022.
The canonical polyadic decomposition (CPD) of a low-rank tensor plays a major role in data analysis and signal processing by allowing for unique recovery of underlying factors. However, it is well known that the low-rank CPD approximation problem is ill-posed. That is, a tensor may fail to have a best rank $R$ CPD approximation when $R>1$. This article gives deterministic bounds for the existence of best low-rank tensor approximations over ${\mathbb{K}}={\mathbb{R}}$ or ${\mathbb{K}}={\mathbb{C}}$. More precisely, given a tensor ${\mathcal T} \in {\mathbb{K}}^{I \times I \times I}$ of rank $R \leq I$, we compute the radius of a Frobenius norm ball centered at ${\mathcal T}$ in which best ${\mathbb{K}}$-rank $R$ approximations are guaranteed to exist. In addition we show that every ${\mathbb{K}}$-rank $R$ tensor inside of this ball has a unique canonical polyadic decomposition. This neighborhood may be interpreted as a neighborhood of “mathematical truth” in which CPD approximation and computation are well-posed. In pursuit of these bounds, we describe low-rank tensor decomposition as a “joint generalized eigenvalue” problem. Using this framework, we show that, under mild assumptions, a low-rank tensor which has rank strictly greater than border rank is defective in the sense of algebraic and geometric multiplicities for joint generalized eigenvalues. Bounds for existence of best low-rank approximations are then obtained by establishing perturbation theoretic results for the joint generalized eigenvalue problem. In this way we establish a connection between existence of best low-rank approximations and the tensor spectral norm. In addition we solve a “tensor Procrustes problem” which examines orthogonal compressions for pairs of tensors. The main results of the article are illustrated by a variety of numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-03T08:00:00Z
DOI: 10.1137/20M1381046
Issue No: Vol. 43, No. 1 (2022)

 A New Lower Bound on the Size of the Smallest Vertex Separator of a Graph

Authors: Yongyan Guo, Gang Wu
Pages: 370–376
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 370–376, March 2022.
Separator minimization is an important problem in graph partitioning. Although finding an optimum partitioning for a given graph is NP-hard, estimating the size of the smallest vertex separator is an interesting problem since it can be used to assess the quality of a vertex separator. In [A. Pothen, H. Simon, and K. Liou, SIAM J. Matrix Anal. Appl., 11 (1990), pp. 430–452], two classical lower bounds on the size of the smallest vertex separator of a graph were established. In the present work, we revisit this problem and establish a new and easily computable lower bound on the smallest vertex separator.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/20M1382118
Issue No: Vol. 43, No. 1 (2022)

 Singularly Perturbed Markov Modulated Fluid Queues

Authors: Sarah Dendievel, Guy Latouche, Yuanyuan Liu, Yingchun Tang
Pages: 377–404
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 377–404, March 2022.
We consider a Markov modulated fluid queue for which the environment is nearly completely decomposable. Under the basic assumption that both the nearly completely decomposable Markov modulated fluid model and the unperturbed fluid models are positive recurrent, we show that the stationary density of the level can be expanded as a convergent power series of the aggregated stationary densities. We go further in the analysis by assuming that one or more of the unperturbed fluid queues is not necessarily positive recurrent. We provide numerical illustrations.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/21M1395387
Issue No: Vol. 43, No. 1 (2022)

 A Symbol-Based Analysis for Multigrid Methods for Block-Circulant and Block-Toeplitz Systems
Authors: Matthias Bolten, Marco Donatelli, Paola Ferrari, Isabella Furci
Pages: 405–438
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 405–438, March 2022.
In the literature, there exist several studies on symbol-based multigrid methods for the solution of linear systems having structured coefficient matrices. In particular, the convergence analysis for such methods has been obtained in an elegant form in the case of Toeplitz matrices generated by a scalar-valued function. In the block-Toeplitz setting, that is, in the case where the matrix entries are small generic matrices instead of scalars, some algorithms have already been proposed for specific applications, and a first rigorous convergence analysis was performed in [M. Donatelli et al., Numer. Linear Algebra Appl., 28 (2021), e2356]. However, with the existing symbol-based theoretical tools, it is still not possible to prove the convergence of many multigrid methods known in the literature. This paper aims to generalize the previous results, giving more general sufficient conditions on the symbol of the grid transfer operators. In particular, we treat matrix-valued trigonometric polynomials which can be nondiagonalizable and singular at all points, and we express the new conditions in terms of the eigenvectors associated with the ill-conditioned subspace. Moreover, we extend the analysis to the V-cycle method, proving a linear convergence rate under stronger conditions, which resemble those given in the scalar case. In order to validate our theoretical findings, we present a classical block structured problem stemming from an FEM approximation of a second-order differential problem. We focus on two multigrid strategies that use the geometric and the standard bisection grid transfer operators and prove that both fall into the category of projectors satisfying the proposed conditions. In addition, using a tensor product argument, we provide a strategy to construct efficient V-cycle procedures in the block multilevel setting.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/21M1390554
Issue No: Vol. 43, No. 1 (2022)

 Exactly Solving Sparse Rational Linear Systems via Roundoff-Error-Free Cholesky Factorizations
Authors: Christopher J. Lourenco, Erick MorenoCenteno
Pages: 439–463
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 439–463, March 2022.
Exactly solving sparse symmetric positive definite (SPD) linear systems is a key problem in mathematics, engineering, and computer science. This paper derives two new sparse roundoff-error-free (REF) Cholesky factorization algorithms which exactly solve sparse SPD linear systems $A x = b$, where $A \in \mathbb{Q}^{n \times n}$ and $x, b \in \mathbb{Q}^{n \times p}$. The key properties of these factorizations are that (1) they exclusively use integer arithmetic and (2) in the bit-complexity model, they solve the linear system $A x = b$ in time proportional to the cost of the integer-arithmetic operations. Namely, the overhead related to data structures and ancillary operations (those not strictly required to perform the factorization) is subsumed by the cost of the integer-arithmetic operations that are essential/intrinsic to the factorization. Notably, to date our algorithms are the only exact algorithms for solving SPD linear systems with this asymptotically efficient complexity bound. Computationally, we show that the novel factorizations are faster than both sparse rational-arithmetic LDL and sparse exact LU factorization. Altogether, the derived sparse REF Cholesky factorizations present a framework to solve any rational SPD linear system exactly and efficiently.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1371592
Issue No: Vol. 43, No. 1 (2022)
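For contrast with the integer-preserving REF approach, an exact solver in plain rational arithmetic is easy to write, and it is the kind of rational-arithmetic LDL baseline the paper's factorizations outperform. The sketch below is our own dense toy implementation using Python's `Fraction`, not the authors' algorithm, and the 3-by-3 system is made up.

```python
from fractions import Fraction

def ldlt_solve(A, b):
    """Exactly solve an SPD rational system A x = b via a dense LDL^T
    factorization in rational arithmetic (no roundoff, but the Fractions
    can grow large, which is what REF factorizations are designed to avoid)."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    L = [[Fraction(0)] * n for _ in range(n)]
    D = [Fraction(0)] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] * L[j][k] * D[k] for k in range(j))
        L[j][j] = Fraction(1)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    # Solve L y = b, then D z = y, then L^T x = z.
    y = [Fraction(0)] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    z = [y[i] / D[i] for i in range(n)]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x

A = [[4, 2, 0], [2, 3, 1], [0, 1, 2]]
b = [6, 6, 3]
x = ldlt_solve(A, b)
```

The divisions in this factorization produce ever-larger rationals; the REF algorithms instead keep all intermediate quantities as integers of controlled bit length.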

 A Measure Concentration Effect for Matrices of High, Higher, and Even
Higher Dimension
Authors: Harry Yserentant
Pages: 464–478
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 464–478, March 2022.
Let $n > m$, and let $A$ be an $m \times n$ matrix of full rank. Then obviously the estimate $\|Ax\| \leq \|A\| \|x\|$ holds for the Euclidean norms of $x$ and $Ax$ and the spectral norm as the assigned matrix norm. We study the sets of all $x$ for which, for fixed $\delta$ ...
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1376029
Issue No: Vol. 43, No. 1 (2022)

 Semi-Infinite Linear Regression and Its Applications

Authors: Paz Fink Shustin, Haim Avron
Pages: 479–511
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 479–511, March 2022.
Finite linear least squares is one of the core problems of numerical linear algebra, with countless applications across science and engineering. Consequently, there is a rich and ongoing literature on algorithms for solving linear least squares problems. In this paper, we explore a variant in which the system's matrix has one infinite dimension (i.e., it is a quasimatrix). We call such problems semi-infinite linear regression problems. As we show, the semi-infinite case arises in several applications, such as supervised learning and function approximation, and allows for novel interpretations of existing algorithms. We explore semi-infinite linear regression rigorously and algorithmically. To that end, we give a formal framework for working with quasimatrices, and generalize several algorithms designed for the finite problem to the infinite case. Finally, we suggest the use of various sampling methods for obtaining an approximate solution.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-15T07:00:00Z
DOI: 10.1137/21M1411950
Issue No: Vol. 43, No. 1 (2022)
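The sampling route to an approximate solution can be sketched concretely: a quasimatrix whose columns are functions on [0, 1] is replaced by an ordinary matrix of function values at sample points, and the finite least squares solution approximates the continuous one. The basis, target function, and sampling scheme below are our own toy choices, not the paper's methods.

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.uniform(0.0, 1.0, 200)               # sample points in [0, 1]
# Discretized quasimatrix: columns are the functions 1, t, t^2
# evaluated at the samples (an assumed monomial basis).
Phi = np.vander(t, 3, increasing=True)
f = np.cos(t)                                # target function
# Finite least squares on the sampled problem approximates the
# semi-infinite regression min_w || sum_j w_j phi_j - f ||.
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
err = np.max(np.abs(Phi @ w - f))
```

More refined sampling schemes (e.g. weighting the samples) change how faithfully the finite problem represents the continuous one, which is the trade-off the paper studies.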

 A New Analytical Framework for the Convergence of Inexact Two-Grid Methods

Authors: Xuefeng Xu, ChenSong Zhang
Pages: 512–533
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 512–533, March 2022.
Two-grid methods with exact solution of the Galerkin coarse-grid system have been well studied by the multigrid community: an elegant identity has been established to characterize the convergence factor of exact two-grid methods. In practice, however, it is often too costly to solve the Galerkin coarse-grid system exactly, especially when its size is large. Instead, without essential loss of convergence speed, one may solve the coarse-grid system approximately. In this paper, we develop a new framework for analyzing the convergence of inexact two-grid methods: two-sided bounds for the energy norm of the error propagation matrix of inexact two-grid methods are presented. In the framework, a restricted smoother involved in the identity for exact two-grid convergence is used to measure how far the actual coarse-grid matrix deviates from the Galerkin one. As an application, we establish a new and unified convergence theory for multigrid methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-29T07:00:00Z
DOI: 10.1137/21M140448X
Issue No: Vol. 43, No. 1 (2022)
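The exact two-grid operator that the inexact analysis perturbs can be assembled explicitly for a model problem. The sketch below is our own illustration, not the paper's construction: it builds the error propagation matrix E = S (I − P A_c⁻¹ Pᵀ A) S for the 1D Poisson matrix, with damped Jacobi smoothing S, linear interpolation P, and the Galerkin coarse-grid matrix A_c = Pᵀ A P, and checks that the two-grid convergence factor is small.

```python
import numpy as np

n = 15                                   # fine grid size (odd, so nc = (n - 1) / 2)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix

# Linear interpolation from the coarse grid (stencil [1/2, 1, 1/2]).
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    i = 2 * j + 1                        # fine index of the j-th coarse point
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5

Ac = P.T @ A @ P                         # Galerkin coarse-grid matrix
S = np.eye(n) - (2.0 / 3.0) / 2.0 * A    # damped Jacobi smoother (omega = 2/3, diag(A) = 2I)
CGC = np.eye(n) - P @ np.linalg.solve(Ac, P.T @ A)   # exact coarse-grid correction
E = S @ CGC @ S                          # two-grid error propagation matrix
rate = np.max(np.abs(np.linalg.eigvals(E)))
```

Replacing the exact solve with Ac by an approximate one changes CGC; the paper's two-sided bounds quantify how far the resulting inexact E can drift from this exact operator.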
