Society for Industrial and Applied Mathematics: SIAM Journal on Matrix Analysis and Applications: Table of Contents
Table of Contents for SIAM Journal on Matrix Analysis and Applications. List of articles from the latest and ahead-of-print issues.
https://epubs.siam.org/loi/sjmael?af=R
Society for Industrial and Applied Mathematics
en-US
SIAM Journal on Matrix Analysis and Applications
https://epubs.siam.org/na101/home/literatum/publisher/siam/journals/covergifs/sjmael/cover.jpg
https://epubs.siam.org/loi/sjmael?af=R

New Convergence Analysis of GMRES with Weighted Norms, Preconditioning, and Deflation, Leading to a New Deflation Space
https://epubs.siam.org/doi/abs/10.1137/23M1622398?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/4">Volume 45, Issue 4</a>, Pages 1721–1745, December 2024. <br/> Abstract. New convergence bounds are presented for weighted, preconditioned, and deflated GMRES for the solution of large, sparse, non-Hermitian linear systems. These bounds are given for the case when the Hermitian part of the coefficient matrix is positive definite, the preconditioner is Hermitian positive definite, and the weight is equal to the preconditioner. The new bounds are a novel contribution in and of themselves. In addition, they are sufficiently explicit to indicate how to choose the preconditioner and the deflation space to accelerate convergence. One such choice of deflation space is presented, and numerical experiments illustrate the effectiveness of this space.
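As a point of reference for the ingredients in this abstract, here is a minimal left-preconditioned GMRES sketch in Python/NumPy. It is a plain Arnoldi/least-squares loop, not the paper's method: the weighted norms and deflation space are not reproduced, and the Jacobi preconditioner in the test below is an arbitrary stand-in.

```python
import numpy as np

def pgmres(A, b, M_inv, m=50):
    """Left-preconditioned GMRES without restarts: approximately solves
    A x = b by Arnoldi on M^{-1} A, where M_inv(v) applies M^{-1} to v.
    Illustrative sketch only (no weighting, no deflation, x0 = 0)."""
    n = b.size
    r0 = M_inv(b)                         # initial preconditioned residual
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for k in range(m):
        w = M_inv(A @ Q[:, k])            # expand the Krylov basis
        for j in range(k + 1):            # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-12:           # happy breakdown: exact solve
            m = k + 1
            break
        Q[:, k + 1] = w / H[k + 1, k]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y
```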
New Convergence Analysis of GMRES with Weighted Norms, Preconditioning, and Deflation, Leading to a New Deflation Space
10.1137/23M1622398
SIAM Journal on Matrix Analysis and Applications
2024-10-01T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Nicole Spillane
Daniel B. Szyld
New Convergence Analysis of GMRES with Weighted Norms, Preconditioning, and Deflation, Leading to a New Deflation Space
45
4
1721
1745
2024-12-31T08:00:00Z
2024-12-31T08:00:00Z
10.1137/23M1622398
https://epubs.siam.org/doi/abs/10.1137/23M1622398?af=R
© 2024 Society for Industrial and Applied Mathematics

One-Dimensional Tensor Network Recovery
https://epubs.siam.org/doi/abs/10.1137/23M159888X?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1217–1244, September 2024. <br/> Abstract. We study the recovery of the underlying graphs or permutations for tensors in the tensor ring or tensor train format. Our proposed algorithms compare the matricization ranks after downsampling, whose complexity is [math] for [math]th-order tensors. We prove that our algorithms can almost surely recover the correct graph or permutation when tensor entries can be observed without noise. We further establish the robustness of our algorithms against observational noise. The theoretical results are validated by numerical experiments.
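To make the notion of "matricization ranks" concrete, the following sketch computes the ranks of the sequential unfoldings of a dense array. It is a simplified stand-in for the paper's algorithms, which compare such ranks after downsampling rather than on the full tensor.

```python
import numpy as np

def matricization_ranks(T):
    """Ranks of the sequential unfoldings of a d-way array T: for each k,
    reshape T into a (n_1*...*n_k) x (n_{k+1}*...*n_d) matrix and take its
    rank.  These are the ranks a tensor train / tensor ring structure
    constrains (illustrative; no downsampling here)."""
    dims = T.shape
    return [int(np.linalg.matrix_rank(T.reshape(int(np.prod(dims[:k])), -1)))
            for k in range(1, len(dims))]
```

For a third-order tensor that is a sum of two rank-1 terms, both unfoldings generically have rank 2.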
One-Dimensional Tensor Network Recovery
10.1137/23M159888X
SIAM Journal on Matrix Analysis and Applications
2024-07-01T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Ziang Chen
Jianfeng Lu
Anru Zhang
One-Dimensional Tensor Network Recovery
45
3
1217
1244
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M159888X
https://epubs.siam.org/doi/abs/10.1137/23M159888X?af=R
© 2024 Society for Industrial and Applied Mathematics

On Compatible Transfer Operators in Nonsymmetric Algebraic Multigrid
https://epubs.siam.org/doi/abs/10.1137/23M1586069?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1245–1258, September 2024. <br/> Abstract. The standard goal for an effective algebraic multigrid (AMG) algorithm is to develop relaxation and coarse-grid correction schemes that attenuate complementary error modes. In the nonsymmetric setting, coarse-grid correction [math] will almost certainly be nonorthogonal (and divergent) in any known standard inner product, meaning [math]. This introduces a new consideration: one wants coarse-grid correction to be as close to orthogonal as possible, in an appropriate norm. In addition, due to nonorthogonality, [math] may actually amplify certain error modes that are in the range of interpolation. Relaxation must then not only be complementary to interpolation, but also rapidly eliminate any error amplified by the nonorthogonal correction, or the algorithm may diverge. This paper develops analytic formulae for constructing "compatible" transfer operators in nonsymmetric AMG such that [math] in some standard matrix-induced norm. Discussion is provided on different options for the norm in the nonsymmetric setting, the relation between "ideal" transfer operators in different norms, and insight into the convergence of nonsymmetric reduction-based AMG.
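The coarse-grid correction operator discussed in this abstract can be written down directly for dense matrices. The sketch below forms the standard two-grid error propagator; it is a generic illustration (the transfer operators here are arbitrary, not the "compatible" ones the paper constructs), but it exhibits the key algebraic facts: the correction is an idempotent projector, and for a restriction different from the transpose of interpolation it is generally oblique.

```python
import numpy as np

def coarse_grid_correction(A, P, R):
    """Two-grid coarse correction error propagator
        E = I - P (R A P)^{-1} R A
    for interpolation P and restriction R (dense sketch).  E is a
    projector (E @ E == E) annihilating the range of P; for R != P^T it
    is generally nonorthogonal, the situation the paper analyzes."""
    n = A.shape[0]
    Ac = R @ A @ P                    # coarse-grid (Galerkin/Petrov) operator
    return np.eye(n) - P @ np.linalg.solve(Ac, R @ A)
```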
On Compatible Transfer Operators in Nonsymmetric Algebraic Multigrid
10.1137/23M1586069
SIAM Journal on Matrix Analysis and Applications
2024-07-01T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Ben S. Southworth
Thomas A. Manteuffel
On Compatible Transfer Operators in Nonsymmetric Algebraic Multigrid
45
3
1245
1258
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1586069
https://epubs.siam.org/doi/abs/10.1137/23M1586069?af=R
© 2024 Society for Industrial and Applied Mathematics

On Adaptive Stochastic Heavy Ball Momentum for Solving Linear Systems
https://epubs.siam.org/doi/abs/10.1137/23M1575883?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1259–1286, September 2024. <br/> Abstract. The stochastic heavy ball momentum (SHBM) method has gained considerable popularity as a scalable approach for solving large-scale optimization problems. However, one limitation of this method is its reliance on prior knowledge of certain problem parameters, such as the singular values of a matrix. In this paper, we propose an adaptive variant of the SHBM method for solving stochastic problems that are reformulated from linear systems using user-defined distributions. Our adaptive SHBM (ASHBM) method utilizes iterative information to update the parameters, addressing an open problem in the literature regarding the adaptive learning of momentum parameters. We prove that our method converges linearly in expectation, with a better convergence bound compared to the basic method. Notably, we demonstrate that the deterministic version of our ASHBM algorithm can be reformulated as a variant of the conjugate gradient (CG) method, inheriting many of its appealing properties, such as finite-time convergence. Consequently, the ASHBM method can be further generalized to develop a brand-new framework of the stochastic CG method for solving linear systems. Our theoretical results are supported by numerical experiments.
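The limitation the abstract refers to, that good momentum parameters require spectral knowledge of the matrix, is visible already in the classical deterministic heavy ball iteration. The sketch below is that classical (non-adaptive) method applied to a least squares reformulation of a linear system; the test supplies Polyak's optimal parameters computed from the extreme singular values, exactly the a priori information the paper's ASHBM variant learns adaptively instead.

```python
import numpy as np

def heavy_ball_lsq(A, b, alpha, beta, iters=2000):
    """Classical Polyak heavy ball iteration on min ||A x - b||^2:
        x_{k+1} = x_k - alpha * A^T (A x_k - b) + beta * (x_k - x_{k-1}).
    Non-adaptive illustration only: alpha and beta must be supplied,
    typically from the extreme singular values of A."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                       # gradient of the quadratic
        x, x_prev = x - alpha * g + beta * (x - x_prev), x
    return x
```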
On Adaptive Stochastic Heavy Ball Momentum for Solving Linear Systems
10.1137/23M1575883
SIAM Journal on Matrix Analysis and Applications
2024-07-09T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Yun Zeng
Deren Han
Yansheng Su
Jiaxin Xie
On Adaptive Stochastic Heavy Ball Momentum for Solving Linear Systems
45
3
1259
1286
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1575883
https://epubs.siam.org/doi/abs/10.1137/23M1575883?af=R
© 2024 Society for Industrial and Applied Mathematics

On the Two-Parameter Matrix Pencil Problem
https://epubs.siam.org/doi/abs/10.1137/23M1545963?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1287–1309, September 2024. <br/> Abstract. The multiparameter matrix pencil problem (MPP) is a generalization of the one-parameter MPP: Given a set of [math], [math] complex matrices [math] with [math], it is required to find all complex scalars [math], not all zero, such that the matrix pencil [math] loses column rank, and the corresponding nonzero complex vector [math] such that [math]. We call the [math]-tuple [math] an eigenvalue and the corresponding vector [math] an eigenvector. This problem is related to the well-known multiparameter eigenvalue problem, except that there is only one pencil and, crucially, the matrices are not necessarily square. This paper uses our preliminary investigation in F. F. Alsubaie [[math] Optimal Model Reduction for Linear Dynamic Systems and the Solution of Multiparameter Matrix Pencil Problems, Ph.D. thesis, Imperial College London, 2019], which presents a theoretical study of the multiparameter MPP and its applications in the [math] optimal model reduction problem, to give a full solution to the two-parameter MPP. First, an inflation process is implemented to show that the two-parameter MPP is equivalent to a set of three [math] simultaneous one-parameter MPPs. These problems are given in terms of Kronecker commutator operators (involving the original matrices) that exhibit several symmetries. These symmetries are analyzed and then used to deflate the dimensions of the one-parameter MPPs to [math], thus simplifying their numerical solution. In the case in which [math], it is shown that the two-parameter MPP has at least one solution and generically [math] solutions, and furthermore that, under a rank assumption, the Kronecker determinant operators satisfy a commutativity property. This is then used to show that the two-parameter MPP is equivalent to a set of three simultaneous eigenvalue problems of dimension [math]. A general solution algorithm is presented, and numerical examples are given to outline its procedure.
On the Two-Parameter Matrix Pencil Problem
10.1137/23M1545963
SIAM Journal on Matrix Analysis and Applications
2024-07-11T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Satin K. Gungah
Fawwaz F. Alsubaie
Imad M. Jaimoukha
On the Two-Parameter Matrix Pencil Problem
45
3
1287
1309
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1545963
https://epubs.siam.org/doi/abs/10.1137/23M1545963?af=R
© 2024 Society for Industrial and Applied Mathematics

Decomposition of a Tensor into Multilinear Rank-[math] Terms
https://epubs.siam.org/doi/abs/10.1137/23M1557246?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1310–1334, September 2024. <br/> Abstract. We present new generic and deterministic uniqueness results for block term decompositions (BTDs). These uniqueness conditions hold under mild assumptions and apply to more general settings than previously known results. We also present an algebraic algorithm for the computation of BTDs. Our algorithm requires no knowledge of the block sizes appearing in the BTD: these block sizes are recovered by the algorithm. Through numerical simulations, we illustrate that, in contrast to competing optimization-based methods, even in noisy settings our algebraic algorithm can successfully recover an underlying BTD without knowledge of the block sizes, provided the signal-to-noise ratio is sufficiently high. We observe that the algorithm can significantly improve one's ability to successfully recover a BTD when it is used as an algebraic initialization for leading optimization routines. Moreover, only a few optimization iterations are required to successfully converge to the BTD from the algebraic solution.
Decomposition of a Tensor into Multilinear Rank-[math] Terms
10.1137/23M1557246
SIAM Journal on Matrix Analysis and Applications
2024-07-15T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Ignat Domanov
Nico Vervliet
Eric Evert
Lieven De Lathauwer
Decomposition of a Tensor into Multilinear Rank-[math] Terms
45
3
1310
1334
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1557246
https://epubs.siam.org/doi/abs/10.1137/23M1557246?af=R
© 2024 Society for Industrial and Applied Mathematics

Eigenstructure Perturbations for a Class of Hamiltonian Matrices and Solutions of Related Riccati Inequalities
https://epubs.siam.org/doi/abs/10.1137/23M1619563?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1335–1360, September 2024. <br/> Abstract. The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time-invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
Eigenstructure Perturbations for a Class of Hamiltonian Matrices and Solutions of Related Riccati Inequalities
10.1137/23M1619563
SIAM Journal on Matrix Analysis and Applications
2024-07-18T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Volker Mehrmann
Hongguo Xu
Eigenstructure Perturbations for a Class of Hamiltonian Matrices and Solutions of Related Riccati Inequalities
45
3
1335
1360
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1619563
https://epubs.siam.org/doi/abs/10.1137/23M1619563?af=R
© 2024 Society for Industrial and Applied Mathematics

Single-Pass Nyström Approximation in Mixed Precision
https://epubs.siam.org/doi/abs/10.1137/22M154079X?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1361–1391, September 2024. <br/> Abstract. Low-rank matrix approximations appear in a number of scientific computing applications. We consider the Nyström method for approximating a positive semidefinite matrix [math]. In the case that [math] is very large or its entries can only be accessed once, a single-pass version may be necessary. In this work, we perform a complete rounding error analysis of the single-pass Nyström method in two precisions, where the computation of the expensive matrix product with [math] is assumed to be performed in the lower of the two precisions. Our analysis gives insight into how the sketching matrix and shift should be chosen to ensure stability, implementation aspects which have been commented on in the literature but not yet rigorously justified. We further develop a heuristic for picking the lower precision, which confirms the general intuition that the lower the desired rank of the approximation, the lower the precision that can be used without detriment. We also demonstrate that our mixed precision Nyström method can be used to inexpensively construct limited-memory preconditioners for the conjugate gradient method, and we derive a bound on the condition number of the resulting preconditioned coefficient matrix. We present numerical experiments on a set of matrices with various spectral decays and demonstrate the utility of our mixed precision approach.
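The single-pass, shifted Nyström scheme the abstract analyzes can be sketched in a few lines of uniform-precision NumPy. This is only the baseline algorithm (in the shifted form popularized by Tropp and coauthors), not the paper's mixed precision variant; the shift argument below plays the stabilizing role whose choice the paper studies.

```python
import numpy as np

def nystrom_psd(A, r, shift, seed=0):
    """Single-pass randomized Nystrom approximation of a PSD matrix A,
    returning U, lam with A ~ U @ diag(lam) @ U.T.  Uniform-precision
    sketch only; `shift` is the small stabilizing shift."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Omega, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal sketch
    Y = A @ Omega + shift * Omega          # single pass over A, then shift
    B = Omega.T @ Y                        # = Omega^T A Omega + shift * I
    L = np.linalg.cholesky((B + B.T) / 2)  # symmetrize before factoring
    F = np.linalg.solve(L, Y.T).T          # F = Y L^{-T}, so F F^T = Y B^{-1} Y^T
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    lam = np.maximum(s ** 2 - shift, 0.0)  # undo the shift
    return U, lam
```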
Single-Pass Nyström Approximation in Mixed Precision
10.1137/22M154079X
SIAM Journal on Matrix Analysis and Applications
2024-07-19T07:00:00Z
© 2024 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license
Erin Carson
Ieva Daužickaitė
Single-Pass Nyström Approximation in Mixed Precision
45
3
1361
1391
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/22M154079X
https://epubs.siam.org/doi/abs/10.1137/22M154079X?af=R
© 2024 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license

Spectral Transformation for the Dense Symmetric Semidefinite Generalized Eigenvalue Problem
https://epubs.siam.org/doi/abs/10.1137/24M162916X?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1392–1413, September 2024. <br/> Abstract. The spectral transformation Lanczos method for the sparse symmetric definite generalized eigenvalue problem for matrices [math] and [math] is an iterative method that addresses the case of a semidefinite or ill-conditioned [math] using a shifted and inverted formulation of the problem. This paper proposes the same approach for dense problems and shows that, with a shift chosen in accordance with certain constraints, the algorithm can conditionally ensure that every computed shifted and inverted eigenvalue is close to the exact shifted and inverted eigenvalue of a pair of matrices close to [math] and [math]. Under the same assumptions on the shift, the analysis of the algorithm for the shifted and inverted problem leads to useful error bounds for the original problem, including a bound that shows how a single shift of moderate size in a scaled sense can be chosen so that every computed generalized eigenvalue corresponds to a generalized eigenvalue of a pair of matrices close to [math] and [math]. The computed generalized eigenvectors give a relative residual that depends on the distance between the corresponding generalized eigenvalue and the shift. If the shift is of moderate size, then relative residuals are small for generalized eigenvalues that are not much larger than the shift. Larger shifts give small relative residuals for generalized eigenvalues that are not much larger or smaller than the shift.
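The shift-and-invert spectral transformation at the heart of this abstract is easy to state for dense pencils. The sketch below is a generic illustration using an unsymmetric eigensolver; it ignores the symmetric (semi)definite structure and the careful shift selection that the paper is actually about.

```python
import numpy as np

def shift_invert_eigs(A, B, sigma):
    """Dense shift-and-invert spectral transformation for the pencil
    A x = lambda B x: the eigenvalues mu of (A - sigma B)^{-1} B are
    mapped back via lambda = sigma + 1/mu (eigenvectors are shared).
    Generic sketch; assumes A - sigma*B and B are nonsingular."""
    mu, V = np.linalg.eig(np.linalg.solve(A - sigma * B, B))
    return sigma + 1.0 / mu, V
```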
Spectral Transformation for the Dense Symmetric Semidefinite Generalized Eigenvalue Problem
10.1137/24M162916X
SIAM Journal on Matrix Analysis and Applications
2024-07-22T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Michael Stewart
Spectral Transformation for the Dense Symmetric Semidefinite Generalized Eigenvalue Problem
45
3
1392
1413
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/24M162916X
https://epubs.siam.org/doi/abs/10.1137/24M162916X?af=R
© 2024 Society for Industrial and Applied Mathematics

On Semidefinite Programming Characterizations of the Numerical Radius and Its Dual Norm
https://epubs.siam.org/doi/abs/10.1137/23M160356X?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1414–1428, September 2024. <br/> Abstract. We state and give self-contained proofs of semidefinite programming characterizations of the numerical radius and its dual norm for matrices. We show that the numerical radius and its dual norm can be computed to within [math] precision in time polynomial in the data and [math], using either the ellipsoid method or the short-step primal interior point method. We apply our results to give a simple formula for the spectral and nuclear norms of a [math] real tensor in terms of the numerical radius and its dual norm.
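A cheap numerical cross-check for the quantity in this abstract uses the classical characterization w(A) = max over theta of the largest eigenvalue of the Hermitian part of e^{i theta} A. The sketch below grid-samples that maximum; the paper's contribution is exact SDP characterizations, which this simple estimate does not reproduce.

```python
import numpy as np

def numerical_radius(A, grid=720):
    """Grid-sampled estimate of the numerical radius via
    w(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A^*) / 2).
    Illustrative only: accuracy is limited by the theta grid."""
    w = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False):
        H = (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2.0
        w = max(w, np.linalg.eigvalsh(H)[-1].item())  # largest eigenvalue
    return w
```

For the 2-by-2 Jordan block the numerical radius is exactly 1/2, and for a normal matrix it equals the spectral radius.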
On Semidefinite Programming Characterizations of the Numerical Radius and Its Dual Norm
10.1137/23M160356X
SIAM Journal on Matrix Analysis and Applications
2024-07-23T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Shmuel Friedland
Chi-Kwong Li
On Semidefinite Programming Characterizations of the Numerical Radius and Its Dual Norm
45
3
1414
1428
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M160356X
https://epubs.siam.org/doi/abs/10.1137/23M160356X?af=R
© 2024 Society for Industrial and Applied Mathematics

Block-Diagonalization of Quaternion Circulant Matrices with Applications
https://epubs.siam.org/doi/abs/10.1137/23M1552115?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Pages 1429–1454, September 2024. <br/> Abstract. It is well known that a complex circulant matrix can be diagonalized by a discrete Fourier matrix with imaginary unit [math]. The main aim of this paper is to demonstrate that a quaternion circulant matrix cannot be diagonalized by a discrete quaternion Fourier matrix with three imaginary units [math], [math], and [math]. Instead, a quaternion circulant matrix can be block-diagonalized, into 1-by-1 and 2-by-2 blocks, by a permuted discrete quaternion Fourier transform matrix. With such a block-diagonalized form, the inverse of a quaternion circulant matrix can be determined efficiently, similarly to the inverse of a complex circulant matrix. We make use of this block-diagonalized form to study the quaternion tensor singular value decomposition of quaternion tensors, whose entries are quaternion numbers. Applications, including computing the inverse of a quaternion circulant matrix and solving quaternion Toeplitz systems arising in linear prediction of quaternion signals, are employed to validate the efficiency of our proposed block-diagonalization results.
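The classical fact the abstract contrasts with — that the discrete Fourier vectors diagonalize every complex circulant matrix, with eigenvalues given by the DFT of its first column — can be checked directly. A pure-Python sketch (function names ours):

```python
import cmath

def circulant(c):
    """Circulant matrix with first column c: C[i][j] = c[(i - j) mod n]."""
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def dft_eigencheck(c):
    """Max residual of C f_k = lambda_k f_k over all DFT vectors f_k, where
    f_k[i] = w^(i*k), w = exp(2*pi*i/n), lambda_k = sum_m c[m] * w^(-m*k)."""
    n = len(c)
    C = circulant(c)
    w = cmath.exp(2j * cmath.pi / n)
    max_residual = 0.0
    for k in range(n):
        f = [w ** (i * k) for i in range(n)]
        lam = sum(c[m] * w ** (-m * k) for m in range(n))
        for i in range(n):
            Cf_i = sum(C[i][j] * f[j] for j in range(n))
            max_residual = max(max_residual, abs(Cf_i - lam * f[i]))
    return max_residual
```

The residual is zero up to rounding for any complex first column; the paper's point is that no analogous single quaternion Fourier matrix achieves this in the quaternion case.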
Block-Diagonalization of Quaternion Circulant Matrices with Applications
10.1137/23M1552115
SIAM Journal on Matrix Analysis and Applications
2024-08-01T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Junjun Pan
Michael K. Ng
Block-Diagonalization of Quaternion Circulant Matrices with Applications
45
3
1429
1454
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1552115
https://epubs.siam.org/doi/abs/10.1137/23M1552115?af=R
© 2024 Society for Industrial and Applied Mathematics

Variational Characterization and Rayleigh Quotient Iteration of 2D Eigenvalue Problem with Applications
https://epubs.siam.org/doi/abs/10.1137/22M1472589?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1455–1486, September 2024. <br/> Abstract. A two-dimensional eigenvalue problem (2DEVP) of a Hermitian matrix pair [math] is introduced in this paper. The 2DEVP can be regarded as a linear algebra formulation of the well-known eigenvalue optimization problem of the parameter matrix [math]. We first present fundamental properties of the 2DEVP, such as the existence and variational characterizations of 2D-eigenvalues, and then devise a Rayleigh quotient iteration (RQI)-like algorithm, 2DRQI for short, for computing a 2D-eigentriplet of the 2DEVP. The efficacy of the 2DRQI is demonstrated by large-scale eigenvalue optimization problems arising from the min-max of Rayleigh quotients and the distance to instability of a stable matrix.
Variational Characterization and Rayleigh Quotient Iteration of 2D Eigenvalue Problem with Applications
10.1137/22M1472589
SIAM Journal on Matrix Analysis and Applications
2024-08-06T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Tianyi Lu
Yangfeng Su
Zhaojun Bai
Variational Characterization and Rayleigh Quotient Iteration of 2D Eigenvalue Problem with Applications
45
3
1455
1486
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/22M1472589
https://epubs.siam.org/doi/abs/10.1137/22M1472589?af=R
© 2024 Society for Industrial and Applied Mathematics

Reorthogonalized Block Classical Gram–Schmidt Using Two Cholesky-Based TSQR Algorithms
https://epubs.siam.org/doi/abs/10.1137/23M1605387?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1487–1517, September 2024. <br/> Abstract. In [Numer. Math., 123 (2013), pp. 395–423], Barlow and Smoktunowicz propose the reorthogonalized block classical Gram–Schmidt algorithm BCGS2. New conditions for the backward stability of BCGS2 that allow the use of a more flexible version of that algorithm are given. Backward stability for BCGS2 means that, in floating point arithmetic with machine precision [math], for a full column rank [math], the algorithm produces [math] and upper triangular [math] such that [math] and [math]. However, each major step of BCGS2 requires the QR factorization of two intermediate [math] matrices [math] and [math]. In many applications of interest, [math]; thus these factorizations are called “tall, skinny” QR (TSQR) operations. Each such factorization was assumed to produce [math] such that [math] and [math]. For this suboperation, the first of these two conditions limits the choice of QR factorization algorithms to those, such as Householder and Givens QR, that may not produce the [math] as efficiently as some algorithms with weaker orthogonality guarantees. For the second of these QR factorizations, it is shown that the Cholesky decomposition of [math] followed by the [math] can be substituted without a significant change in the conditions for backward stability. With slightly stronger restrictions, the first QR decomposition can be done by algorithms such as the mixed precision CholQR algorithm described by Yamazaki, Tomov, and Dongarra [SIAM J. Sci. Comput., 37 (2015), pp. C307–C330]. In a GPU/CPU environment, Yamazaki, Tomov, and Dongarra showed this algorithm to be a very efficient method of producing the TSQR. Given that a common application of Gram–Schmidt algorithms is in the implementation of Krylov subspace methods, such as block GMRES, these results make the BCGS2 algorithm more broadly applicable.
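The Cholesky-based TSQR idea referenced here is classically stated as: form the Gram matrix G = AᵀA, take its Cholesky factor R (so RᵀR = G), and recover Q = A R⁻¹. A minimal sketch for a tall matrix with two columns, assuming exact full column rank (function name ours; real CholQR implementations guard against loss of orthogonality, which is the abstract's subject):

```python
import math

def cholqr(A):
    """CholQR for a tall matrix A (list of rows, 2 columns):
    R = chol(A^T A) with R^T R = A^T A upper triangular, Q = A R^{-1}."""
    # Gram matrix G = A^T A, written out entrywise for 2 columns
    g11 = sum(row[0] * row[0] for row in A)
    g12 = sum(row[0] * row[1] for row in A)
    g22 = sum(row[1] * row[1] for row in A)
    # Cholesky factor of the 2x2 Gram matrix
    r11 = math.sqrt(g11)
    r12 = g12 / r11
    r22 = math.sqrt(g22 - r12 * r12)
    # Q = A R^{-1}, applying the inverse of the 2x2 upper triangular R
    Q = [[row[0] / r11, (row[1] - (r12 / r11) * row[0]) / r22] for row in A]
    return Q, [[r11, r12], [0.0, r22]]
```

In exact arithmetic QᵀQ = I; in floating point the orthogonality error grows with the squared condition number of A, which is why the paper pairs CholQR with reorthogonalization.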
Reorthogonalized Block Classical Gram–Schmidt Using Two Cholesky-Based TSQR Algorithms
10.1137/23M1605387
SIAM Journal on Matrix Analysis and Applications
2024-08-09T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Jesse L. Barlow
Reorthogonalized Block Classical Gram–Schmidt Using Two Cholesky-Based TSQR Algorithms
45
3
1487
1517
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1605387
https://epubs.siam.org/doi/abs/10.1137/23M1605387?af=R
© 2024 Society for Industrial and Applied Mathematics

Small Singular Values Can Increase in Lower Precision
https://epubs.siam.org/doi/abs/10.1137/23M1557209?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1518–1540, September 2024. <br/> Abstract. We perturb a real matrix [math] of full column rank and derive lower bounds for the smallest singular values of the perturbed matrix in terms of normwise absolute perturbations. Our bounds, which extend existing lower-order expressions, demonstrate the potential increase in the smallest singular values and represent a qualitative model for the increase in the small singular values after a matrix has been downcast to a lower arithmetic precision. Numerical experiments confirm the qualitative validity of this model and its ability to predict singular value changes in the presence of decreased arithmetic precision.
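The downcasting experiment described in the abstract can be mimicked in miniature: round each entry of a matrix to binary32 and compare the smallest singular value before and after. A self-contained sketch (function names ours; whether the value increases or decreases depends on the particular matrix, so no direction is asserted here):

```python
import math
import struct

def to_float32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def smin_2x2(A):
    """Smallest singular value of a real 2x2 matrix, via the closed-form
    eigenvalues of the 2x2 Gram matrix A^T A."""
    (a, b), (c, d) = A
    g11, g12, g22 = a * a + c * c, a * b + c * d, b * b + d * d
    t, det = g11 + g22, g11 * g22 - g12 * g12
    lam_min = (t - math.sqrt(max(t * t - 4.0 * det, 0.0))) / 2.0
    return math.sqrt(max(lam_min, 0.0))

# Downcast a nearly rank-deficient matrix entrywise and compare.
A = [[1.0, 0.1], [0.1, 0.01000001]]
A32 = [[to_float32(x) for x in row] for row in A]
smin_before, smin_after = smin_2x2(A), smin_2x2(A32)
```

Here the rounding to binary32 acts as the normwise perturbation in the paper's model, with magnitude on the order of the binary32 unit roundoff times the matrix norm.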
Small Singular Values Can Increase in Lower Precision
10.1137/23M1557209
SIAM Journal on Matrix Analysis and Applications
2024-08-12T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Christos Boutsikas
Petros Drineas
Ilse C. F. Ipsen
Small Singular Values Can Increase in Lower Precision
45
3
1518
1540
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1557209
https://epubs.siam.org/doi/abs/10.1137/23M1557209?af=R
© 2024 Society for Industrial and Applied Mathematics

Random Walks, Conductance, and Resistance for the Connection Graph Laplacian
https://epubs.siam.org/doi/abs/10.1137/23M1595400?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1541–1572, September 2024. <br/> Abstract. We investigate the concept of effective resistance in connection graphs, extending its traditional application beyond undirected graphs. We propose a robust definition of effective resistance in connection graphs by focusing on the duality of Dirichlet-type and Poisson-type problems on connection graphs. Additionally, we delve into random walks, taking into account both node transitions and vector rotations. This approach introduces novel concepts of effective conductance and resistance matrices for connection graphs, capturing mean rotation matrices corresponding to random walk transitions. Thereby, it provides new theoretical insights for network analysis and optimization.
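The scalar baseline that the paper generalizes — classical effective resistance on an undirected graph — is computed from the graph Laplacian by grounding one endpoint and solving the reduced linear system. A pure-Python sketch of that baseline only (not of the connection-graph extension; function name ours):

```python
def effective_resistance(L, i, j):
    """Classical effective resistance between nodes i and j of a graph with
    Laplacian L: ground node j, solve the reduced system L' x = e_i, and
    return x[i]. Uses Gaussian elimination with partial pivoting."""
    n = len(L)
    idx = [k for k in range(n) if k != j]              # drop grounded node j
    M = [[float(L[r][c]) for c in idx] for r in idx]
    b = [1.0 if r == i else 0.0 for r in idx]          # unit current at i
    m = len(idx)
    for col in range(m):                               # forward elimination
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):                     # back substitution
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x[idx.index(i)]
```

For the unit-weight path graph 1–2–3 the two edges act as resistors in series, so the effective resistance between the endpoints is 2; the paper replaces the scalar Laplacian entries with rotation blocks.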
Random Walks, Conductance, and Resistance for the Connection Graph Laplacian
10.1137/23M1595400
SIAM Journal on Matrix Analysis and Applications
2024-08-19T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Alexander Cloninger
Gal Mishne
Andreas Oslandsbotn
Sawyer J. Robertson
Zhengchao Wan
Yusu Wang
Random Walks, Conductance, and Resistance for the Connection Graph Laplacian
45
3
1541
1572
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1595400
https://epubs.siam.org/doi/abs/10.1137/23M1595400?af=R
© 2024 Society for Industrial and Applied Mathematics

A Geometric Approach to Approximating the Limit Set of Eigenvalues for Banded Toeplitz Matrices
https://epubs.siam.org/doi/abs/10.1137/23M1587804?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1573–1598, September 2024. <br/> Abstract. This article is about finding the limit set of eigenvalues for banded Toeplitz matrices. Our main result is a new approach to approximate the limit set [math], where [math] is the symbol of the banded Toeplitz matrix. The new approach is geometric and based on the formula [math], where [math] is a scaling factor, i.e., [math], and [math] denotes the spectrum. We show that the full intersection can be approximated by the intersection for a finite number of [math]’s and that the intersection of polygon approximations for [math] yields an approximating polygon for [math] that converges to [math] in the Hausdorff metric. Further, we show that one can slightly expand the polygon approximations for [math] to ensure that they contain [math]. Taking the intersection then yields an approximating superset of [math] which converges to [math] in the Hausdorff metric and is guaranteed to contain [math]. Combining the established algebraic (root-finding) method with our approximating superset, we are able to give an explicit bound on the Hausdorff distance to the true limit set. We implement the algorithm in Python and test it. It performs on par with, and in some cases better than, existing algorithms. We argue, but do not prove, that the average time complexity of the algorithm is [math], where [math] is the number of [math]’s and [math] is the number of vertices for the polygons approximating [math]. Further, we argue that the distance from [math] to both the approximating polygon and the approximating superset decreases as [math] for most of [math], where [math] is the number of elementary operations required by the algorithm.
A Geometric Approach to Approximating the Limit Set of Eigenvalues for Banded Toeplitz Matrices
10.1137/23M1587804
SIAM Journal on Matrix Analysis and Applications
2024-08-22T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Teodor Bucht
Jacob S. Christiansen
A Geometric Approach to Approximating the Limit Set of Eigenvalues for Banded Toeplitz Matrices
45
3
1573
1598
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1587804
https://epubs.siam.org/doi/abs/10.1137/23M1587804?af=R
© 2024 Society for Industrial and Applied Mathematics

Growth Factors of Orthogonal Matrices and Local Behavior of Gaussian Elimination with Partial and Complete Pivoting
https://epubs.siam.org/doi/abs/10.1137/23M1597733?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1599–1620, September 2024. <br/> Abstract. Gaussian elimination (GE) is the most widely used dense linear solver. Error analysis of GE with selected pivoting strategies on well-conditioned systems can focus on studying the behavior of growth factors. Although exponential growth is possible with GE with partial pivoting (GEPP), growth tends to stay much smaller in practice. Support for this behavior was provided recently by Huang and Tikhomirov’s average-case analysis of GEPP, which showed GEPP growth factors for Gaussian matrices stay at most polynomial with very high probability. GE with complete pivoting (GECP) has also seen a lot of recent interest, with improvements to both lower and upper bounds on worst-case GECP growth provided by Bisain, Edelman, and Urschel in 2023. We are interested in studying how GEPP and GECP behave on the same linear systems, as well as studying large growth on particular subclasses of matrices, including orthogonal matrices. Moreover, as a means to better address the question of why large growth is rarely encountered, we further study matrices with a large difference in growth between using GEPP and GECP, and we explore how the smaller growth strategy dominates behavior in a small neighborhood of the initial matrix.
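The growth factor discussed here measures the largest entry appearing during elimination relative to the largest entry of the original matrix. A compact sketch of GEPP with growth tracking, exercised on Wilkinson's classical worst-case matrix, for which GEPP growth is exactly 2^(n-1) (function names ours):

```python
def gepp_growth(A):
    """Gaussian elimination with partial pivoting on a square matrix A;
    returns the growth factor max_k max_ij |A^(k)_ij| / max_ij |A_ij|."""
    n = len(A)
    M = [row[:] for row in A]
    max0 = max(abs(x) for row in M for x in row)
    g = max0
    for k in range(n - 1):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivot
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
        g = max(g, max(abs(M[r][c]) for r in range(n) for c in range(n)))
    return g / max0

def wilkinson(n):
    """Classic worst case for GEPP: 1 on the diagonal and last column,
    -1 strictly below the diagonal; GEPP growth factor is 2^(n-1)."""
    return [[1.0 if i == j or j == n - 1 else (-1.0 if i > j else 0.0)
             for j in range(n)] for i in range(n)]
```

On random matrices the same routine typically reports growth far below this exponential bound, which is the average-case phenomenon the abstract refers to.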
Growth Factors of Orthogonal Matrices and Local Behavior of Gaussian Elimination with Partial and Complete Pivoting
10.1137/23M1597733
SIAM Journal on Matrix Analysis and Applications
2024-08-22T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
John Peca-Medlin
Growth Factors of Orthogonal Matrices and Local Behavior of Gaussian Elimination with Partial and Complete Pivoting
45
3
1599
1620
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1597733
https://epubs.siam.org/doi/abs/10.1137/23M1597733?af=R
© 2024 Society for Industrial and Applied Mathematics

Kronecker Product of Tensors and Hypergraphs: Structure and Dynamics
https://epubs.siam.org/doi/abs/10.1137/23M1592547?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1621–1642, September 2024. <br/> Abstract. Hypergraphs and tensors extend classic graph and matrix theories to account for multiway relationships, which are ubiquitous in engineering, biological, and social systems. While the Kronecker product is a potent tool for analyzing the coupling of systems in a graph or matrix context, its utility in studying multiway interactions, such as those represented by tensors and hypergraphs, remains elusive. In this article, we present a comprehensive exploration of algebraic, structural, and spectral properties of the tensor Kronecker product. We express Tucker and tensor train decompositions and various tensor eigenvalues in terms of the tensor Kronecker product. Additionally, we utilize the tensor Kronecker product to form Kronecker hypergraphs, which are tensor-based hypergraph products, and investigate the structure and stability of polynomial dynamics on Kronecker hypergraphs. Finally, we provide numerical examples to demonstrate the utility of the tensor Kronecker product in computing Z-eigenvalues, performing various tensor decompositions, and determining the stability of polynomial systems.
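The matrix analogue of the spectral properties the abstract generalizes is the identity (A ⊗ B)(x ⊗ y) = (Ax) ⊗ (By), so eigenvalues of a Kronecker product are products of eigenvalues of the factors. A pure-Python check of that matrix-level identity (not of the tensor Kronecker product itself; function names ours):

```python
def kron_mat(A, B):
    """Kronecker product of matrices given as lists of rows:
    (A ⊗ B)[i*p + k][j*q + l] = A[i][j] * B[k][l]."""
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def kron_vec(x, y):
    """Kronecker product of two vectors."""
    return [xi * yj for xi in x for yj in y]

def matvec(M, v):
    """Dense matrix-vector product."""
    return [sum(m * w for m, w in zip(row, v)) for row in M]
```

Checking matvec(kron_mat(A, B), kron_vec(x, y)) against kron_vec(matvec(A, x), matvec(B, y)) confirms the identity numerically; the paper develops the corresponding statements for Z-eigenvalues and tensor decompositions.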
Kronecker Product of Tensors and Hypergraphs: Structure and Dynamics
10.1137/23M1592547
SIAM Journal on Matrix Analysis and Applications
2024-09-03T07:00:00Z
© 2024 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license
Joshua Pickard
Can Chen
Cooper Stansbury
Amit Surana
Anthony M. Bloch
Indika Rajapakse
Kronecker Product of Tensors and Hypergraphs: Structure and Dynamics
45
3
1621
1642
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1592547
https://epubs.siam.org/doi/abs/10.1137/23M1592547?af=R
© 2024 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license

Multichannel Frequency Estimation with Constant Amplitude via Convex Structured Low-Rank Approximation
https://epubs.siam.org/doi/abs/10.1137/23M1587737?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1643–1668, September 2024. <br/> Abstract. We study the problem of estimating the frequencies of several complex sinusoids with constant amplitude (CA) (also called constant modulus) from multichannel signals of their superposition. To exploit the CA property for frequency estimation in the framework of atomic norm minimization (ANM), we introduce multiple positive semidefinite block matrices composed of Hankel and Toeplitz submatrices and formulate the ANM problem as a convex structured low-rank approximation (SLRA) problem. The proposed SLRA is a semidefinite program and has substantial differences from existing such formulations that do not use the CA property. The proposed approach is termed SLRA-based ANM for CA frequency estimation (SACA). We provide theoretical guarantees and extensive simulations that validate the advantages of SACA.
Multichannel Frequency Estimation with Constant Amplitude via Convex Structured Low-Rank Approximation
10.1137/23M1587737
SIAM Journal on Matrix Analysis and Applications
2024-09-03T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Xunmeng Wu
Zai Yang
Zongben Xu
Multichannel Frequency Estimation with Constant Amplitude via Convex Structured Low-Rank Approximation
45
3
1643
1668
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1587737
https://epubs.siam.org/doi/abs/10.1137/23M1587737?af=R
© 2024 Society for Industrial and Applied Mathematics

Low-Rank Plus Diagonal Approximations for Riccati-Like Matrix Differential Equations
https://epubs.siam.org/doi/abs/10.1137/23M1587610?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1669–1688, September 2024. <br/> Abstract. We consider the problem of computing tractable approximations of time-dependent [math] large positive semidefinite (PSD) matrices defined as solutions of a matrix differential equation. We propose to use “low-rank plus diagonal” PSD matrices as approximations that can be stored with memory cost linear in the high dimension [math]. To constrain the solution of the differential equation to remain in that subset, we project the derivative at all times onto the tangent space to the subset, following the methodology of dynamical low-rank approximation. We derive a closed-form formula for the projection and show that after some manipulations, it can be computed with numerical cost linear in [math], allowing for tractable implementation. Contrary to previous approaches based on pure low-rank approximations, the addition of the diagonal term allows our approximations to be invertible matrices that can moreover be inverted with cost linear in [math]. We apply the technique to Riccati-like equations, then to two particular problems: first, a low-rank approximation to our recent Wasserstein gradient flow for Gaussian approximation of posterior distributions in approximate Bayesian inference and, second, a novel low-rank approximation of the Kalman filter for high-dimensional systems. Numerical simulations illustrate the results.
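The claim that a diagonal-plus-low-rank matrix can be inverted at cost linear in the dimension is classically realized through the Sherman–Morrison/Woodbury identity. A rank-one sketch of that standard mechanism (not the paper's projection formula; function name ours):

```python
def lowrank_plus_diag_solve(d, u, v):
    """Solve (diag(d) + u u^T) x = v in O(n) via Sherman-Morrison:
    x = D^{-1}v - D^{-1}u * (u^T D^{-1} v) / (1 + u^T D^{-1} u)."""
    Dinv_v = [vi / di for vi, di in zip(v, d)]   # D^{-1} v
    Dinv_u = [ui / di for ui, di in zip(u, d)]   # D^{-1} u
    alpha = (sum(ui * w for ui, w in zip(u, Dinv_v))
             / (1.0 + sum(ui * w for ui, w in zip(u, Dinv_u))))
    return [w - alpha * z for w, z in zip(Dinv_v, Dinv_u)]
```

For rank r the same idea (the Woodbury identity) costs O(n r²) plus an r×r solve, which is what makes the "low-rank plus diagonal" family attractive for the Kalman-filter application mentioned in the abstract.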
Low-Rank Plus Diagonal Approximations for Riccati-Like Matrix Differential Equations
10.1137/23M1587610
SIAM Journal on Matrix Analysis and Applications
2024-09-06T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Silvère Bonnabel
Marc Lambert
Francis Bach
Low-Rank Plus Diagonal Approximations for Riccati-Like Matrix Differential Equations
45
3
1669
1688
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1587610
https://epubs.siam.org/doi/abs/10.1137/23M1587610?af=R
© 2024 Society for Industrial and Applied Mathematics

On Substochastic Inverse Eigenvalue Problems with the Corresponding Eigenvector Constraints
https://epubs.siam.org/doi/abs/10.1137/23M1547305?af=R
SIAM Journal on Matrix Analysis and Applications, <a href="https://epubs.siam.org/toc/sjmael/45/3">Volume 45, Issue 3</a>, Page 1689–1719, September 2024. <br/> Abstract. We consider the inverse eigenvalue problem of constructing a substochastic matrix from given spectrum parameters with the corresponding eigenvector constraints. This substochastic inverse eigenvalue problem (SstIEP) with the specific eigenvector constraints is formulated as a nonconvex optimization problem (NcOP). The solvability of the SstIEP with the specific eigenvector constraints is equivalent to the attainability of a zero optimal value for the formulated NcOP. When the optimal objective value is zero, the corresponding optimal solution to the formulated NcOP is exactly the substochastic matrix that we wish to construct. We develop an alternating minimization algorithm to solve the formulated NcOP, and its convergence is established by developing a novel method to obtain the boundedness of the optimal solution. Some numerical experiments are conducted to demonstrate the efficiency of the proposed method.
On Substochastic Inverse Eigenvalue Problems with the Corresponding Eigenvector Constraints
10.1137/23M1547305
SIAM Journal on Matrix Analysis and Applications
2024-09-09T07:00:00Z
© 2024 Society for Industrial and Applied Mathematics
Yujie Liu
Dacheng Yao
Hanqin Zhang
On Substochastic Inverse Eigenvalue Problems with the Corresponding Eigenvector Constraints
45
3
1689
1719
2024-09-30T07:00:00Z
2024-09-30T07:00:00Z
10.1137/23M1547305
https://epubs.siam.org/doi/abs/10.1137/23M1547305?af=R
© 2024 Society for Industrial and Applied Mathematics