

SIAM Journal on Matrix Analysis and Applications
Journal Prestige (SJR): 1.739
Citation Impact (citeScore): 2
Number of Followers: 3
Hybrid journal (It can contain Open Access articles)
ISSN (Print) 0895-4798 - ISSN (Online) 1095-7162
Published by Society for Industrial and Applied Mathematics
• Geometric Inexact Newton Method for Generalized Singular Values of
Grassmann Matrix Pair

Authors: Wei-Wei Xu, Michael K. Ng, Zheng-Jian Bai
Pages: 535 - 560
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 535-560, June 2022.
In this paper, we first give new model formulations for computing an arbitrary generalized singular value of a Grassmann matrix pair or a real matrix pair. In these formulations, we need to solve matrix optimization problems with unitary or orthogonal constraints. We propose a geometric inexact Newton--conjugate gradient (Newton-CG) method for solving the resulting matrix optimization problems. Under some mild assumptions, we establish the global and quadratic convergence of the proposed method for the complex case. Some numerical examples are given to illustrate the effectiveness and high accuracy of the proposed method.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-05T07:00:00Z
DOI: 10.1137/20M1383720
Issue No: Vol. 43, No. 2 (2022)

• A Boundary-Layer Preconditioner for Singularly Perturbed Convection
Diffusion

Authors: Scott P. MacLachlan, Niall Madden, Thái Anh Nhan
Pages: 561 - 583
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 561-583, June 2022.
Motivated by a wide range of real-world problems whose solutions exhibit boundary and interior layers, the numerical analysis of discretizations of singularly perturbed differential equations is an established subdiscipline within the study of the numerical approximation of solutions to differential equations. Consequently, much is known about how to accurately and stably discretize such equations on a priori adapted meshes in order to properly resolve the layer structure present in their continuum solutions. However, despite being a key step in the numerical simulation process, much less is known about the efficient and accurate solution of the linear systems of equations corresponding to these discretizations. In this paper, we discuss problems associated with the application of direct solvers to these discretizations, and we propose a preconditioning strategy that is tuned to the matrix structure induced by using layer-adapted meshes for convection-diffusion equations, proving a strong condition-number bound on the preconditioned system in one spatial dimension and a weaker bound in two spatial dimensions. Numerical results confirm the efficiency of the resulting preconditioners in one and two dimensions, with time-to-solution of less than one second for representative problems on 1024 x 1024 meshes and up to 40x speedup over standard sparse direct solvers.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1443297
Issue No: Vol. 43, No. 2 (2022)

• Theoretical and Computable Optimal Subspace Expansions for Matrix
Eigenvalue Problems

Authors: Zhongxiao Jia
Pages: 584 - 604
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 584-604, June 2022.
Consider the optimal subspace expansion problem for the matrix eigenvalue problem $Ax=\lambda x$: Which vector $w$ in the current subspace $\mathcal{V}$, after being multiplied by $A$, provides an optimal subspace expansion for approximating a desired eigenvector $x$ in the sense that $x$ has the smallest angle with the expanded subspace $\mathcal{V}_w=\mathcal{V}+\mathrm{span}\{Aw\}$, i.e., $w_{opt}=\arg\max_{w\in\mathcal{V}}\cos\angle(\mathcal{V}_w,x)$? This problem is important as many iterative methods construct nested subspaces that successively expand $\mathcal{V}$ to $\mathcal{V}_w$. An expression of $w_{opt}$ was derived by Ye [Linear Algebra Appl., 428 (2008), pp. 911--918] for general $A$, but it could not be exploited to construct a computable (nearly) optimally expanded subspace. Ye instead derived a maximization characterization of $\cos\angle(\mathcal{V}_w,x)$ for a given $w\in \mathcal{V}$ when $A$ is Hermitian. We generalize Ye's maximization characterization to the general case and find its maximizer. Our main contributions consist of explicit expressions of $w_{opt}$, $(I-P_V)Aw_{opt}$, and the optimally expanded subspace $\mathcal{V}_{w_{opt}}$ for general $A$, where $P_V$ is the orthogonal projector onto $\mathcal{V}$. These results are fully exploited to obtain computable optimally expanded subspaces within the framework of the standard, harmonic, refined, and refined harmonic Rayleigh--Ritz methods. We show how to efficiently implement the proposed subspace expansion approaches. Numerical experiments demonstrate the effectiveness of our computable optimal expansions.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/20M1331032
Issue No: Vol. 43, No. 2 (2022)

• Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations

Authors: Jamie Haddock, Deanna Needell, Elizaveta Rebrova, William Swartworth
Pages: 605 - 637
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 605-637, June 2022.
Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $Ax = b$ that are inconsistent due to corruptions in the measurement vector $b$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-13T07:00:00Z
DOI: 10.1137/21M1429187
Issue No: Vol. 43, No. 2 (2022)
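As a rough illustration of the quantile idea (a toy numpy sketch under assumed parameters, not the authors' implementation), a randomized Kaczmarz iteration can skip any row whose residual magnitude exceeds a quantile of the current residuals, so that grossly corrupted equations are effectively never used:

```python
import numpy as np

def quantile_kaczmarz(A, b, q=0.7, iters=5000, seed=0):
    """Randomized Kaczmarz that only applies a row update when that row's
    residual magnitude is at or below the q-quantile of |Ax - b|,
    treating rows above the quantile as potentially corrupted."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = np.abs(A @ x - b)
        thresh = np.quantile(r, q)
        i = rng.integers(m)
        if r[i] <= thresh:          # only trust rows below the quantile
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
b[:10] += 50.0                      # corrupt 5% of the measurements
x = quantile_kaczmarz(A, b)
```

With 5% of the entries of b corrupted by large errors, the iterate still converges toward the solution of the uncorrupted system, since the corrupted rows sit far above the 0.7-quantile of the residual magnitudes throughout the iteration.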

• Mixed Precision Recursive Block Diagonalization for Bivariate Functions of
Matrices

Authors: Stefano Massei, Leonardo Robol
Pages: 638 - 660
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 638-660, June 2022.
Various numerical linear algebra problems can be formulated as evaluating a bivariate function of matrices. The most notable examples are the Fréchet derivative along a direction, the evaluation of (univariate) functions of Kronecker-sum-structured matrices, and the solution of Sylvester matrix equations. In this work, we propose a recursive block diagonalization algorithm for computing bivariate functions of matrices of small to medium size, for which dense linear algebra is appropriate. The algorithm combines a blocking strategy, as in the Schur--Parlett scheme, and an evaluation procedure for the diagonal blocks. We discuss two implementations of the latter. The first is a natural choice based on Taylor expansions, whereas the second is derivative-free and relies on a multiprecision perturb-and-diagonalize approach. In particular, the appropriate use of multiprecision guarantees backward stability without affecting the efficiency in the generic case. This makes the second approach more robust. The whole method has cubic complexity, and it is closely related to the well-known Bartels--Stewart algorithm for Sylvester matrix equations when applied to $f(x,y)=\frac{1}{x+y}$. We validate the performance of the proposed numerical method on several problems with different conditioning properties.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-19T07:00:00Z
DOI: 10.1137/21M1407872
Issue No: Vol. 43, No. 2 (2022)
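The link to Bartels--Stewart noted above can be made concrete in a few lines (a simplified numpy sketch via diagonalization of symmetric positive definite coefficients; the paper's recursive blocking and mixed-precision machinery are omitted): evaluating $f(x,y)=\frac{1}{x+y}$ entrywise on the two spectra solves the Sylvester equation $AX + XB = C$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Symmetric positive definite A, B, so eigenvalue sums x + y never vanish
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n))

# Diagonalize both coefficients (orthogonal eigenvectors for symmetric input)
la, Va = np.linalg.eigh(A)
lb, Vb = np.linalg.eigh(B)

# Apply the bivariate function f(x, y) = 1/(x + y) on the spectra:
# X = Va * ((Va^T C Vb) ./ (la_i + lb_j)) * Vb^T
Ct = Va.T @ C @ Vb
X = Va @ (Ct / (la[:, None] + lb[None, :])) @ Vb.T

residual = np.linalg.norm(A @ X + X @ B - C)
```

The entrywise division by la[:, None] + lb[None, :] is exactly the bivariate function evaluated on all eigenvalue pairs, which is what the recursive block diagonalization exploits one block at a time.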

• A Block Bidiagonalization Method for Fixed-Accuracy Low-Rank Matrix
Approximation

Authors: Eric Hallman
Pages: 661 - 680
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 661-680, June 2022.
We present randUBV, a randomized algorithm for matrix sketching based on the block Lanczos bidiagonalization process. Given a matrix $A$, it produces a low-rank approximation of the form $UBV^T$, where $U$ and $V$ have orthonormal columns in exact arithmetic and $B$ is block bidiagonal. In finite precision, the columns of both $U$ and $V$ will be close to orthonormal. Our algorithm is closely related to the randQB algorithms of Yu, Gu, and Li [SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339--1359] in that the entries of $B$ are incrementally generated and the Frobenius norm approximation error may be efficiently estimated. It is therefore suitable for the fixed-accuracy problem and is designed to terminate as soon as a user-specified error tolerance is reached. Numerical experiments suggest that the block Lanczos method is generally competitive with or superior to algorithms that use power iteration, even when $A$ has significant clusters of singular values.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-04-27T07:00:00Z
DOI: 10.1137/21M1397866
Issue No: Vol. 43, No. 2 (2022)
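For orientation, the underlying bidiagonalization can be sketched in its simplest, block-size-one Golub--Kahan form with full reorthogonalization (an illustrative sketch only, without the randomized start, blocking, or fixed-accuracy error estimator of randUBV):

```python
import numpy as np

def lanczos_bidiag(A, k, seed=0):
    """k steps of Golub-Kahan-Lanczos bidiagonalization: A V = U B with
    orthonormal U, V and upper bidiagonal B (full reorthogonalization)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alphas = np.zeros(k); betas = np.zeros(k - 1)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    for j in range(k):
        V[:, j] = v
        u = A @ v
        if j > 0:
            u -= betas[j - 1] * U[:, j - 1]
            u -= U[:, :j] @ (U[:, :j].T @ u)        # reorthogonalize against U
        alphas[j] = np.linalg.norm(u)
        u /= alphas[j]
        U[:, j] = u
        if j < k - 1:
            v = A.T @ u - alphas[j] * v
            v -= V[:, :j + 1] @ (V[:, :j + 1].T @ v)  # reorthogonalize against V
            betas[j] = np.linalg.norm(v)
            v /= betas[j]
    B = np.diag(alphas) + np.diag(betas, 1)
    return U, B, V

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 30))
U, B, V = lanczos_bidiag(A, 10)
```

Truncating this factorization gives a UBVᵀ-type approximation; the point of the paper is how to do this blockwise with a cheap running estimate of the Frobenius-norm error.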

• Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One
Eigenvalue Problems

Authors: Jesse L. Barlow, Stanley C. Eisenstat, Nevena Jakovčević Stor, Ivan Slapničar
Pages: 681 - 709
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 681-709, June 2022.
We discuss the eigenproblem for the symmetric arrowhead matrix $C = (\begin{smallmatrix} D & {z} \\ {z}^T & \alpha \end{smallmatrix})$, where $D \in \mathbb{R}^{n \times n}$ is diagonal, ${z} \in \mathbb{R}^n$, and $\alpha \in \mathbb{R}$, in order to examine criteria for when components of ${z}$ may be set to zero. We show that whenever two eigenvalues of $C$ are sufficiently close, some component of ${z}$ may be deflated to zero, without significantly perturbing the eigenvalues of $C$, by either substituting zero for that component or performing a Givens rotation on each side of $C$. The strategy for this deflation requires $\mathcal{O}(n^2)$ comparisons. Although it is too costly for many applications, when we use it as a benchmark, we can analyze the effectiveness of $\mathcal{O}(n)$ heuristics that are more practical approaches to deflation. We show that one such $\mathcal{O}(n)$ heuristic finds all sets of three or more nearby eigenvalues, misses sets of two or more nearby eigenvalues under limited circumstances, and produces a reduced matrix whose eigenvalues are distinct in double the working precision. Using the $\mathcal{O}(n)$ heuristic, we develop a more aggressive method for finding converged eigenvalues in the symmetric Lanczos algorithm. It is shown that except for pathological exceptions, the $\mathcal{O}(n)$ heuristic finds nearly as much deflation as the $\mathcal{O}(n^2)$ algorithm that reduces an arrowhead matrix to one that cannot be deflated further. The deflation algorithms and their analysis are extended to the symmetric diagonal-plus-rank-one eigenvalue problem and lead to a better deflation strategy for the LAPACK routine dstedc.f.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-02T07:00:00Z
DOI: 10.1137/21M139205X
Issue No: Vol. 43, No. 2 (2022)
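The basic deflation fact is easy to check numerically (a toy example with made-up entries): setting a component $z_i$ to zero is a symmetric perturbation of 2-norm $|z_i|$, so by Weyl's inequality no eigenvalue moves by more than $|z_i|$.

```python
import numpy as np

def arrowhead(D, z, alpha):
    """Assemble the symmetric arrowhead matrix C = [[D, z], [z^T, alpha]]."""
    n = D.shape[0]
    C = np.zeros((n + 1, n + 1))
    C[:n, :n] = D
    C[:n, n] = z
    C[n, :n] = z
    C[n, n] = alpha
    return C

D = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
z = np.array([0.5, 1e-9, 0.3, 0.2, 0.1])   # second component is negligible
alpha = 6.0

C = arrowhead(D, z, alpha)
z_defl = z.copy(); z_defl[1] = 0.0          # deflate the tiny component
C_defl = arrowhead(D, z_defl, alpha)

ev = np.linalg.eigvalsh(C)
ev_defl = np.linalg.eigvalsh(C_defl)
# Weyl: |lambda_i(C) - lambda_i(C_defl)| <= ||C - C_defl||_2 = |z[1]|
max_shift = np.max(np.abs(ev - ev_defl))
```

The paper's contribution is the deflation *criterion* (when nearby eigenvalues allow a component to be zeroed safely) and its cost, not this elementary perturbation bound.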

• A Block Minimum Residual Norm Subspace Solver with Partial Convergence
Management for Sequences of Linear Systems

Authors: Luc Giraud, Yan-Fei Jing, Yanfei Xiang
Pages: 710 - 739
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 710-739, June 2022.
We are concerned with the iterative solution of linear systems with multiple right-hand sides available one group after another, with possibly slowly varying left-hand sides. For such sequences of linear systems, we first develop a new block minimum norm residual approach that combines two main ingredients. The first component exploits ideas from GCRO-DR [Parks et al., SIAM J. Sci. Comput., 28 (2006), pp. 1651--1674], enabling us to recycle information from one solve to the next. The second component is the numerical mechanism for managing the partial convergence of the right-hand sides, referred to as inexact breakdown detection in IB-BGMRES [Robbé and Sadkane, Linear Algebra Appl., 419 (2006), pp. 265--285], which monitors the rank deficiency in the residual space basis expanded blockwise. Next, for the class of block minimum norm residual approaches that rely on a block Arnoldi-like equality between the search space and the residual space (e.g., any block GMRES or block GCRO variant), we introduce new search space expansion policies defined on novel criteria for detecting partial convergence. These detection criteria are tuned to the targeted convergence threshold of the selected normwise backward error stopping criterion, enabling us to monitor the computational effort while ensuring the final accuracy of each individual solution. Numerical experiments are reported to illustrate the numerical and computational features of both the new block Krylov solvers and the new search space block expansion policies.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-03T07:00:00Z
DOI: 10.1137/21M1401127
Issue No: Vol. 43, No. 2 (2022)

• Estimation of Structured Distances to Singularity for Matrix Pencils with
Symmetry Structures: A Linear Algebra--Based Approach

Authors: Anshul Prajapati, Punit Sharma
Pages: 740 - 763
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 740-763, June 2022.
We study the structured distance to singularity for a given regular matrix pencil $A+sE$, where $(A,E)\in \mathbb S \subseteq (\mathbb C^{n,n})^2$. This includes Hermitian, skew-Hermitian, $*$-even, $*$-odd, $*$-palindromic, T-palindromic, and dissipative Hamiltonian pencils. We present a purely linear algebra-based approach to derive explicit computable formulas for the distance to the nearest structured pencil $(A-\Delta_A)+s(E-\Delta_E)$ such that $A-\Delta_A$ and $E-\Delta_E$ have a common null vector. We then obtain a family of computable lower bounds for the unstructured and structured distances to singularity. Numerical experiments suggest that in many cases, there is a significant difference between structured and unstructured distances. This approach extends to structured matrix polynomials with higher degrees.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-05T07:00:00Z
DOI: 10.1137/21M1423269
Issue No: Vol. 43, No. 2 (2022)

• Linearizable Eigenvector Nonlinearities

Authors: Rob Claes, Elias Jarlebring, Karl Meerbergen, Parikshit Upadhyaya
Pages: 764 - 786
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 764-786, June 2022.
We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-10T07:00:00Z
DOI: 10.1137/21M142931X
Issue No: Vol. 43, No. 2 (2022)

• Error Bounds for Lanczos-Based Matrix Function Approximation

Authors: Tyler Chen, Anne Greenbaum, Cameron Musco, Christopher Musco
Pages: 787 - 811
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 2, Page 787-811, June 2022.
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(A) b$ when $A$ is a Hermitian matrix and $b$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $A$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $b^H f(A) b$ and demonstrate the effectiveness of our bounds with numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-05-12T07:00:00Z
DOI: 10.1137/21M1427784
Issue No: Vol. 43, No. 2 (2022)
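Lanczos-FA itself has a compact standard form, sketched below with plain numpy (textbook Lanczos with full reorthogonalization, not the authors' code): after k Lanczos steps producing an orthonormal basis Q and tridiagonal T, one takes $f(A)b \approx \|b\|\, Q f(T) e_1$.

```python
import numpy as np

def lanczos_fa(A, b, f, k):
    """Approximate f(A) b by k Lanczos steps: ||b|| * Q f(T) e_1."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k - 1)
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # f(T) e_1 via the eigendecomposition of the small tridiagonal T
    theta, S = np.linalg.eigh(T)
    fT_e1 = S @ (f(theta) * S[0, :])
    return np.linalg.norm(b) * (Q @ fT_e1)

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
b = rng.standard_normal(n)
approx = lanczos_fa(A, b, np.exp, 60)
lam, V = np.linalg.eigh(A)
exact = V @ (np.exp(lam) * (V.T @ b))
err = np.linalg.norm(approx - exact)
```

The paper's bounds relate the error of this approximation to the error of Lanczos applied to shifted linear systems, which is what lets them capture spectrum-dependent (e.g., clustered-eigenvalue) behavior.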

• Input-Tailored System-Theoretic Model Order Reduction for
Quadratic-Bilinear Systems

Authors: Björn Liljegren-Sailer, Nicole Marheineke
Pages: 1 - 39
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 1-39, January 2022.
In this paper we suggest a moment matching method for quadratic-bilinear dynamical systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored toward user-predefined families of inputs. Moment matching then corresponds to a one-dimensional interpolation problem rather than the multidimensional interpolation of the multivariate approaches, so it also involves fewer interpolation frequencies to be chosen. Compared to former contributions toward nonlinear model reduction with univariate frequency representations, our approach shows profound differences: our derivation is more rigorous and general and reveals additional tensor-structured approximation conditions, which should be incorporated. Moreover, the proposed implementation exploits the inherent low-rank tensor structure, which enhances its efficiency. In addition, our approach allows for the incorporation of more general input relations in the state equations---not only affine-linear ones as in existing system-theoretic methods---in an elegant way. As a byproduct of the latter, a novel modification of the multivariate methods also emerges, which is able to handle these more general input relations.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-06T08:00:00Z
DOI: 10.1137/18M1216699
Issue No: Vol. 43, No. 1 (2022)

• Approximate Generalized Inverses with Iterative Refinement for
$\epsilon$-Accurate Preconditioning of Singular Systems

Authors: Xiangmin Jiao, Qiao Chen
Pages: 40 - 67
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 40-67, January 2022.
We introduce a new class of preconditioners to enable flexible GMRES to find a least-squares solution, and potentially the pseudoinverse solution, of large-scale sparse, asymmetric, singular, and potentially inconsistent systems. We develop the preconditioners based on a new observation that generalized inverses (i.e., ${A}^{g}\in\{{G}\mid{A}{G}{A}={A}\}$) enable the preconditioned Krylov subspaces to converge in a single step. We then compute an approximate generalized inverse (AGI) efficiently using a hybrid incomplete factorization (HIF), which combines multilevel incomplete LU with rank-revealing QR on its final Schur complement. We define the criteria of $\epsilon$-accuracy and stability of AGI to guarantee the convergence of preconditioned GMRES for consistent systems. For inconsistent systems, we fortify HIF with iterative refinement to obtain HIFIR, which allows accurate computations of the null-space vectors. By combining the two techniques, we then obtain a new solver, called PIPIT, for obtaining the pseudoinverse solutions for systems with low-dimensional null spaces. We demonstrate the robustness of HIF and HIFIR and show that they improve both accuracy and efficiency of the prior state of the art by orders of magnitude for systems with up to a million unknowns.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-10T08:00:00Z
DOI: 10.1137/20M1364126
Issue No: Vol. 43, No. 1 (2022)
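The observation driving the preconditioner (any generalized inverse makes the preconditioned Krylov method converge in a single step) can be verified directly with the Moore--Penrose inverse standing in for the paper's approximate generalized inverse (a toy numpy check, not HIF/HIFIR):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 30, 25                      # rank-deficient square system
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
G = np.linalg.pinv(A)                     # Moore-Penrose inverse: A G A = A

# G is a generalized inverse of A:
assert np.linalg.norm(A @ G @ A - A) < 1e-10 * np.linalg.norm(A)

# For a consistent right-hand side b = A x, the residual of the single
# "iterate" x1 = G b is already zero: A (G b) = (A G A) x = A x = b.
x_exact = rng.standard_normal(n)
b = A @ x_exact
x1 = G @ b
residual = np.linalg.norm(A @ x1 - b)
```

In practice computing an exact generalized inverse is as hard as solving the system; the paper's point is that an $\epsilon$-accurate approximation of one, built via hybrid incomplete factorization, still yields rapid flexible-GMRES convergence.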

• Twice Is Enough for Dangerous Eigenvalues

Authors: Andrew Horning, Yuji Nakatsukasa
Pages: 68 - 93
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 68-93, January 2022.
We analyze the stability of a class of eigensolvers that target interior eigenvalues with rational filters. We show that subspace iteration with a rational filter is robust even when an eigenvalue is near a filter's pole. These dangerous eigenvalues contribute to large round-off errors in the first iteration but are self-correcting in later iterations. For matrices with orthogonal eigenvectors (e.g., real-symmetric or complex Hermitian), two iterations are enough to reduce round-off errors to the order of the unit round-off. In contrast, Krylov methods accelerated by rational filters with fixed poles typically fail to converge to unit round-off accuracy when an eigenvalue is close to a pole. In the context of Arnoldi with shift-and-invert enhancement, we demonstrate a simple restart strategy that recovers full precision in the target eigenpairs.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-11T08:00:00Z
DOI: 10.1137/20M1385330
Issue No: Vol. 43, No. 1 (2022)
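The self-correcting behavior is easy to observe with plain shift-and-invert subspace iteration, the simplest rational filter (a toy experiment with assumed parameters, not the paper's general rational-filter setting): place the pole within 1e-12 of a target eigenvalue and track the Ritz error over a few iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
lam = np.sort(rng.uniform(-1, 1, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(lam) @ Q.T                # symmetric, with known spectrum

target = lam[100]
sigma = target + 1e-12                    # "dangerous": pole nearly on an eigenvalue
k = 4                                     # subspace dimension

X = rng.standard_normal((n, k))
errs = []
for _ in range(3):
    X = np.linalg.solve(A - sigma * np.eye(n), X)   # apply the filter
    X, _ = np.linalg.qr(X)                          # reorthonormalize the basis
    ritz = np.linalg.eigvalsh(X.T @ A @ X)          # Rayleigh-Ritz values
    errs.append(np.min(np.abs(ritz - target)))
```

Despite each solve being nearly singular, the Ritz error for the target eigenvalue settles at the round-off level within the first couple of iterations, matching the paper's "twice is enough" message for matrices with orthogonal eigenvectors.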

• Hierarchical Orthogonal Factorization: Sparse Square Matrices

Authors: Abeynaya Gnanasekaran, Eric Darve
Pages: 94 - 123
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 94-123, January 2022.
In this work, we develop a new fast algorithm, spaQR---sparsified QR---for solving large, sparse linear systems. The key to our approach lies in using low-rank approximations to sparsify the separators in a nested dissection based Householder QR factorization. First, a modified version of nested dissection is used to identify vertex separators and reorder the matrix. Then, classical Householder QR is used to factorize the separators, going from the leaves to the top of the elimination tree. After every level of separator factorization, we sparsify all the remaining separators by using low-rank approximations. This operation reduces the size of the separators without introducing any fill-in in the matrix. However, it introduces a small approximation error which can be controlled by the user. The resulting approximate factorization is stored as a sequence of sparse orthogonal and sparse upper-triangular factors. Hence, it can be applied efficiently to solve linear systems. We further improve the algorithm by using a block diagonal scaling. Then, we show a systematic analysis of the approximation error and effectiveness of the algorithm in solving linear systems. Finally, we perform numerical tests on benchmark unsymmetric problems to evaluate the performance of the algorithm. The factorization time scales as $\mathcal{O}(N \log N)$ and the solve time scales as $\mathcal{O}(N)$.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-24T08:00:00Z
DOI: 10.1137/20M1373475
Issue No: Vol. 43, No. 1 (2022)

• Multidimensional Total Least Squares Problem with Linear Equality
Constraints

Authors: Qiaohua Liu, Zhigang Jia, Yimin Wei
Pages: 124 - 150
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 124-150, January 2022.
Many recent data analysis models are mathematically characterized by a multidimensional total least squares problem with linear equality constraints (TLSE). In this paper, an explicit solution is first derived for the multidimensional TLSE problem, as well as the solvability conditions. By applying the perturbation theory of invariant subspaces, the multidimensional TLSE problem is proved equivalent, in the limit sense, to a multidimensional unconstrained weighted total least squares problem. Kronecker product-based formulae are also given for the normwise, mixed, and componentwise condition numbers of the minimum Frobenius norm solution of the multidimensional TLSE problem, and computable upper bounds are provided to reduce the storage and computational cost. All these results apply to the single right-hand-side case and the multidimensional total least squares problem, which are two special cases of the multidimensional TLSE problem. In numerical experiments, the multidimensional TLSE model is successfully applied to color image deblurring and denoising for the first time, and the numerical results also indicate the effectiveness of the condition numbers.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/21M1400420
Issue No: Vol. 43, No. 1 (2022)
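As a point of reference, the unconstrained single right-hand-side special case mentioned above is classical total least squares, solvable with one SVD of the augmented matrix $[A\ b]$ (the standard textbook construction, not the TLSE formulas derived in the paper):

```python
import numpy as np

def tls(A, b):
    """Classical TLS solution of A x ~ b, read off from the right singular
    vector of [A b] associated with the smallest singular value."""
    m, n = A.shape
    M = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                 # right singular vector for sigma_min
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
x_true = rng.standard_normal(4)
b = A @ x_true                 # consistent data: TLS recovers x_true
x = tls(A, b)
```

The TLSE problem adds exact equality constraints on part of the data, which is why its solution and condition numbers require the more involved analysis of the paper rather than this one-SVD formula.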

• Divide-and-Conquer Methods for Functions of Matrices with Banded or
Hierarchical Low-Rank Structure

Authors: Alice Cortinovis, Daniel Kressner, Stefano Massei
Pages: 151 - 177
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 151-177, January 2022.
This work is concerned with approximating matrix functions for banded matrices, hierarchically semiseparable matrices, and related structures. We develop a new divide-and-conquer method based on (rational) Krylov subspace methods for performing low-rank updates of matrix functions. Our convergence analysis of the newly proposed method proceeds by establishing relations to best polynomial and rational approximation. When only the trace or the diagonal of the matrix function is of interest, we demonstrate---in practice and in theory---that convergence can be faster. For the special case of a banded matrix, we show that the divide-and-conquer method reduces to a much simpler algorithm, which proceeds by computing matrix functions of small submatrices. Numerical experiments confirm the effectiveness of the newly developed algorithms for computing large-scale matrix functions from a wide variety of applications.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-01-27T08:00:00Z
DOI: 10.1137/21M1432594
Issue No: Vol. 43, No. 1 (2022)

• Fast Approximation of the $p$-Radius, Matrix Pressure, or Generalized
Lyapunov Exponent for Positive and Dominated Matrices

• Free pre-print version: Loading...

Authors: Ian D. Morris
Pages: 178 - 198
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 178-198, March 2022.
If $A_1,\ldots,A_N$ are real $d \times d$ matrices, then the $p$-radius, generalized Lyapunov exponent, or matrix pressure is defined to be the asymptotic exponential growth rate of the sum $\sum_{i_1,\ldots,i_n=1}^N \ A_{i_n}\cdots A_{i_1}\ ^p$, where $p$ is a real parameter. Under its various names this quantity has been investigated for its applications to topics including wavelet regularity and refinement equations, fractal geometry, and the large deviations theory of random matrix products. In this article we present a new algorithm for computing the $p$-radius under the hypothesis that the matrices are all positive (or more generally under the hypothesis that they satisfy a weaker condition called domination) and of very low dimension. This algorithm is based on interpreting the $p$-radius as the leading eigenvalue of a trace-class operator on a Hilbert space and estimating that eigenvalue via approximations to the Fredholm determinant of the operator. In this respect our method is closely related to the work of Z.-Q. Bai and M. Pollicott on computing the top Lyapunov exponent of a random matrix product. For pairs of positive matrices of dimension two our method yields substantial improvements over existing methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-10T08:00:00Z
DOI: 10.1137/19M1303964
Issue No: Vol. 43, No. 1 (2022)

• Multiply Accelerated Value Iteration for NonSymmetric Affine Fixed Point
Problems and Application to Markov Decision Processes

• Free pre-print version: Loading...

Authors: Marianne Akian, Stéphane Gaubert, Zheng Qu, Omar Saadi
Pages: 199 - 232
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 199-232, March 2022.
We analyze a modified version of the Nesterov accelerated gradient algorithm, which applies to affine fixed point problems with non-self-adjoint matrices, such as the ones appearing in the theory of Markov decision processes with discounted or mean payoff criteria. We characterize the spectra of matrices for which this algorithm does converge with an accelerated asymptotic rate. We also introduce a $d$th-order algorithm and show that it yields a multiply accelerated rate under more demanding conditions on the spectrum. We subsequently apply these methods to develop accelerated schemes for nonlinear fixed point problems arising from Markov decision processes. This is illustrated by numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-15T08:00:00Z
DOI: 10.1137/20M1367192
Issue No: Vol. 43, No. 1 (2022)

• Arbitrary Precision Algorithms for Computing the Matrix Cosine and its
Fréchet Derivative

• Free pre-print version: Loading...

Authors: Awad H. Al-Mohy, Nicholas J. Higham, Xiaobo Liu
Pages: 233 - 256
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 233-256, March 2022.
Existing algorithms for computing the matrix cosine are tightly coupled to a specific precision of floating-point arithmetic for optimal efficiency so they do not conveniently extend to an arbitrary precision environment. We develop an algorithm for computing the matrix cosine that takes the unit roundoff of the working precision as input, and so works in an arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovering and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Fréchet derivative, construct an efficient evaluation scheme for computing the cosine and its Fréchet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, cosine, and their Fréchet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version of the algorithm for computing the cosine is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy in working precisions higher than double.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-17T08:00:00Z
DOI: 10.1137/21M1441043
Issue No: Vol. 43, No. 1 (2022)

• A Sampling Algorithm to Compute the Set of Feasible Solutions for
NonNegative Matrix Factorization with an Arbitrary Rank

• Free pre-print version: Loading...

Authors: Ragnhild Laursen, Asger Hobolth
Pages: 257 - 273
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 257-273, March 2022.
Nonnegative matrix factorization (NMF) is a useful method to extract features from multivariate data, but an important and sometimes neglected concern is that NMF can result in nonunique solutions. Often, there exist a set of feasible solutions (SFS), which makes it more difficult to interpret the factorization. This problem is especially ignored in cancer genomics, where NMF is used to infer information about the mutational processes present in the evolution of cancer. In this paper the extent of nonuniqueness is investigated for two mutational counts data, and a new sampling algorithm that can find the SFS is introduced. Our sampling algorithm is easy to implement and applies to an arbitrary rank of NMF. This is in contrast to state of the art, where the NMF rank must be smaller than or equal to four. For lower ranks we show that our algorithm performs similar to the polygon inflation algorithm that is developed in relation to chemometrics. Furthermore, we show how the size of the SFS can have a high influence on the appearing variability of a solution. Our sampling algorithm is implemented in the R package SFS (https://github.com/ragnhildlaursen/SFS).
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-22T08:00:00Z
DOI: 10.1137/20M1378971
Issue No: Vol. 43, No. 1 (2022)

• A Recursive Eigenspace Computation for the Canonical Polyadic
Decomposition

• Free pre-print version: Loading...

Authors: Eric Evert, Michiel Vandecappelle, Lieven De Lathauwer
Pages: 274 - 300
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 274-300, March 2022.
The canonical polyadic decomposition (CPD) is a compact decomposition which expresses a tensor as a sum of its rank-1 components. A common step in the computation of a CPD is computing a generalized eigenvalue decomposition (GEVD) of the tensor. A GEVD provides an algebraic approximation of the CPD which can then be used as an initialization in optimization routines. While in the noiseless setting GEVD exactly recovers the CPD, it has recently been shown that pencil-based computations such as GEVD are not stable. In this article we present an algebraic method for approximation of a CPD which greatly improves the accuracy of GEVD. Our method is still fundamentally pencil based; however, rather than using a single pencil and computing all of its generalized eigenvectors, we use many different pencils and in each pencil compute generalized eigenspaces corresponding to sufficiently well-separated generalized eigenvalues. The resulting “generalized eigenspace decomposition" is significantly more robust to noise than the classical GEVD. Accuracy of the generalized eigenspace decomposition is examined both empirically and theoretically. In particular, we provide a deterministic perturbation theoretic bound which is predictive of error in the computed factorization.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-24T08:00:00Z
DOI: 10.1137/21M1423026
Issue No: Vol. 43, No. 1 (2022)

• Sharp Estimation of Convergence Rate for Self-Consistent Field Iteration
to Solve Eigenvector-Dependent Nonlinear Eigenvalue Problems

• Free pre-print version: Loading...

Authors: Zhaojun Bai, Ren-Cang Li, Ding Lu
Pages: 301 - 327
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 301-327, March 2022.
We present a comprehensive convergence analysis for the self-consistent field (SCF) iteration to solve a class of nonlinear eigenvalue problems with eigenvector dependency (NEPvs). Using the tangent-angle matrix as an intermediate measure for approximation error, we establish new formulas for two fundamental quantities that characterize the local convergence behavior of the plain SCF: the local contraction factor and the local asymptotic average contraction factor. In comparison with previously established results, new convergence rate estimates provide much sharper bounds on the convergence speed. As an application, we extend the convergence analysis to a popular SCF variant---the level-shifted SCF. The effectiveness of the convergence rate estimates is demonstrated numerically for NEPvs arising from solving the Kohn--Sham equation in electronic structure calculation and the Gross--Pitaevskii equation for modeling of the Bose--Einstein condensation.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/20M136606X
Issue No: Vol. 43, No. 1 (2022)

• Guarantees for Existence of a Best Canonical Polyadic Approximation of a
Noisy Low-Rank Tensor

• Free pre-print version: Loading...

Authors: Eric Evert, Lieven De Lathauwer
Pages: 328 - 369
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 328-369, March 2022.
The canonical polyadic decomposition (CPD) of a low-rank tensor plays a major role in data analysis and signal processing by allowing for unique recovery of underlying factors. However, it is well known that the low-rank CPD approximation problem is ill-posed. That is, a tensor may fail to have a best rank $R$ CPD approximation when $R>1$. This article gives deterministic bounds for the existence of best low-rank tensor approximations over ${\mathbb{K}}={\mathbb{R}}$ or ${\mathbb{K}}={\mathbb{C}}$. More precisely, given a tensor ${\mathcal T} \in {\mathbb{K}}^{I \times I \times I}$ of rank $R \leq I$, we compute the radius of a Frobenius norm ball centered at ${\mathcal T}$ in which best ${\mathbb{K}}$-rank $R$ approximations are guaranteed to exist. In addition we show that every ${\mathbb{K}}$-rank $R$ tensor inside of this ball has a unique canonical polyadic decomposition. This neighborhood may be interpreted as a neighborhood of “mathematical truth" in which CPD approximation and computation are well-posed. In pursuit of these bounds, we describe low-rank tensor decomposition as a “joint generalized eigenvalue" problem. Using this framework, we show that, under mild assumptions, a low-rank tensor which has rank strictly greater than border rank is defective in the sense of algebraic and geometric multiplicities for joint generalized eigenvalues. Bounds for existence of best low-rank approximations are then obtained by establishing perturbation theoretic results for the joint generalized eigenvalue problem. In this way we establish a connection between existence of best low-rank approximations and the tensor spectral norm. In addition we solve a “tensor Procrustes problem" which examines orthogonal compressions for pairs of tensors. The main results of the article are illustrated by a variety of numerical experiments.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-03T08:00:00Z
DOI: 10.1137/20M1381046
Issue No: Vol. 43, No. 1 (2022)

• A New Lower Bound on the Size of the Smallest Vertex Separator of a Graph

• Free pre-print version: Loading...

Authors: Yongyan Guo, Gang Wu
Pages: 370 - 376
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 370-376, March 2022.
Separator minimization is an important problem in graph partitioning. Although finding an optimum partitioning for a given graph is NP-hard, estimating the size of the smallest vertex separator is an interesting problem since it can be used to assess the quality of a vertex separator. In [A. Pothen, H. Simon, and K. Liou, SIAM J. Matrix Anal. Appl., 11 (1990), pp. 430--452], two classical lower bounds on the size of the smallest vertex separator of a graph were established. In the present work, we revisit this problem and establish a new and easily computable lower bound on the smallest vertex separator.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/20M1382118
Issue No: Vol. 43, No. 1 (2022)

• Singularly Perturbed Markov Modulated Fluid Queues

• Free pre-print version: Loading...

Authors: Sarah Dendievel, Guy Latouche, Yuanyuan Liu, Yingchun Tang
Pages: 377 - 404
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 377-404, March 2022.
We consider a Markov modulated fluid queue for which the environment is nearly completely decomposable. Under the basic assumption that both the nearly completely decomposable Markov modulated fluid model and the unperturbed fluid models are positive recurrent, we show that the stationary density of the level can be expanded as convergent power series of the aggregated stationary densities. We go further in the analysis by assuming that one or more of the unperturbed fluid queues is not necessarily positive recurrent. We provide numerical illustration.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/21M1395387
Issue No: Vol. 43, No. 1 (2022)

• A Symbol-Based Analysis for Multigrid Methods for Block-Circulant and
Block-Toeplitz Systems

• Free pre-print version: Loading...

Authors: Matthias Bolten, Marco Donatelli, Paola Ferrari, Isabella Furci
Pages: 405 - 438
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 405-438, March 2022.
In the literature, there exist several studies on symbol-based multigrid methods for the solution of linear systems having structured coefficient matrices. In particular, the convergence analysis for such methods has been obtained in an elegant form in the case of Toeplitz matrices generated by a scalar-valued function. In the block-Toeplitz setting, that is, in the case where the matrix entries are small generic matrices instead of scalars, some algorithms have already been proposed regarding specific applications, and a first rigorous convergence analysis has been performed in [M. Donatelli et al., Numer. Linear Algebra Appl., 28 (2021), e2356]. However, with the existent symbol-based theoretical tools, it is still not possible to prove the convergence of many multigrid methods known in the literature. This paper aims to generalize the previous results, giving more general sufficient conditions on the symbol of the grid transfer operators. In particular, we treat matrix-valued trigonometric polynomials which can be nondiagonalizable and singular at all points, and we express the new conditions in terms of the eigenvectors associated with the ill-conditioned subspace. Moreover, we extend the analysis to the V-cycle method, proving a linear convergence rate under stronger conditions, which resemble those given in the scalar case. In order to validate our theoretical findings, we present a classical block structured problem stemming from an FEM approximation of a second order differential problem. We focus on two multigrid strategies that use the geometric and the standard bisection grid transfer operators and prove that both fall into the category of projectors satisfying the proposed conditions. In addition, using a tensor product argument, we provide a strategy to construct efficient V-cycle procedures in the block multilevel setting.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/21M1390554
Issue No: Vol. 43, No. 1 (2022)

• Exactly Solving Sparse Rational Linear Systems via Roundoff-Error-Free
Cholesky Factorizations

• Free pre-print version: Loading...

Authors: Christopher J. Lourenco, Erick Moreno-Centeno
Pages: 439 - 463
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 439-463, March 2022.
Exactly solving sparse symmetric positive definite (SPD) linear systems is a key problem in mathematics, engineering, and computer science. This paper derives two new sparse roundoff-error-free (REF) Cholesky factorization algorithms which exactly solve sparse SPD linear systems $A {x} = {b}$, where $A \in \mathbb{Q}^{n x n}$ and ${x}, {b} \in {Q}^{n x p}$. The key properties of these factorizations are that (1) they exclusively use integer-arithmetic and (2) in the bit-complexity model, they solve the linear system $A {x} = {b}$ in time proportional to the cost of the integer-arithmetic operations. Namely, the overhead related to data structures and ancillary operations (those not strictly required to perform the factorization) is subsumed by the cost of the integer-arithmetic operations that are essential/intrinsic to the factorization. Notably, to date our algorithms are the only exact algorithm for solving SPD linear systems with this asymptotically efficient complexity bound. Computationally, we show that the novel factorizations are faster than both sparse rational-arithmetic LDL and sparse exact LU factorization. Altogether, the derived sparse REF Cholesky factorizations present a framework to solve any rational SPD linear system exactly and efficiently.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1371592
Issue No: Vol. 43, No. 1 (2022)

• A Measure Concentration Effect for Matrices of High, Higher, and Even
Higher Dimension

• Free pre-print version: Loading...

Authors: Harry Yserentant
Pages: 464 - 478
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 464-478, March 2022.
Let n>m, and let $A$ be an (m x n)-matrix of full rank. Then obviously the estimate $\ Ax\ \leq\ A\ \ x\$ holds for the euclidean norm of $x$ and $Ax$ and the spectral norm as the assigned matrix norm. We study the sets of all $x$ for which, for fixed \$\delta
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-14T07:00:00Z
DOI: 10.1137/20M1376029
Issue No: Vol. 43, No. 1 (2022)

• Semi-Infinite Linear Regression and Its Applications

• Free pre-print version: Loading...

Authors: Paz Fink Shustin, Haim Avron
Pages: 479 - 511
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 479-511, March 2022.
Finite linear least squares is one of the core problems of numerical linear algebra, with countless applications across science and engineering. Consequently, there is a rich and ongoing literature on algorithms for solving linear least squares problems. In this paper, we explore a variant in which the system's matrix has one infinite dimension (i.e., it is a quasimatrix). We call such problems semi-infinite linear regression problems. As we show, the semi-infinite case arises in several applications, such as supervised learning and function approximation, and allows for novel interpretations of existing algorithms. We explore semi-infinite linear regression rigorously and algorithmically. To that end, we give a formal framework for working with quasimatrices, and generalize several algorithms designed for the finite problem to the infinite case. Finally, we suggest the use of various sampling methods for obtaining an approximate solution.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-15T07:00:00Z
DOI: 10.1137/21M1411950
Issue No: Vol. 43, No. 1 (2022)

• A New Analytical Framework for the Convergence of Inexact Two-Grid Methods

• Free pre-print version: Loading...

Authors: Xuefeng Xu, Chen-Song Zhang
Pages: 512 - 533
Abstract: SIAM Journal on Matrix Analysis and Applications, Volume 43, Issue 1, Page 512-533, March 2022.
Two-grid methods with exact solution of the Galerkin coarse-grid system have been well studied by the multigrid community: an elegant identity has been established to characterize the convergence factor of exact two-grid methods. In practice, however, it is often too costly to solve the Galerkin coarse-grid system exactly, especially when its size is large. Instead, without essential loss of convergence speed, one may solve the coarse-grid system approximately. In this paper, we develop a new framework for analyzing the convergence of inexact two-grid methods: two-sided bounds for the energy norm of the error propagation matrix of inexact two-grid methods are presented. In the framework, a restricted smoother involved in the identity for exact two-grid convergence is used to measure how far the actual coarse-grid matrix deviates from the Galerkin one. As an application, we establish a new and unified convergence theory for multigrid methods.
Citation: SIAM Journal on Matrix Analysis and Applications
PubDate: 2022-03-29T07:00:00Z
DOI: 10.1137/21M140448X
Issue No: Vol. 43, No. 1 (2022)

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762

Your IP address: 44.192.25.113

Home (Search)
API
About JournalTOCs
News (blog, publications)

JournalTOCs © 2009-