Krylov subspace methods are among the most widely used classes of algorithms for solving $Ax = b$ when $A$ is a very large sparse matrix.
These methods work by constructing the Krylov subspace
$$\mathcal{K}_k(A,b) := \operatorname{span}\{b, Ab, \ldots, A^{k-1}b\}.$$
This page focuses on the Lanczos method for matrix function approximation (Lanczos-FA) used to approximate $A^{-1}b$.
The Lanczos-FA iterate is defined as
$$\mathsf{lan}_k(f) := \|b\|_2\, Q f(T) e_1,$$
where $Q = [q_1, \ldots, q_k]$ is a matrix whose columns form an orthonormal basis for $\mathcal{K}_k(A,b)$ such that $\operatorname{span}\{q_1, \ldots, q_j\} = \mathcal{K}_j(A,b)$ for all $j \leq k$, and $T := Q^\mathsf{T} A Q$. For the linear system $Ax = b$ we take $f(x) = 1/x$, so that $\mathsf{lan}_k(1/x) = \|b\|_2\, Q T^{-1} e_1$.
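As a concrete illustration, here is a minimal NumPy sketch of the Lanczos process and the resulting Lanczos-FA iterate. The function names are our own, and we use full reorthogonalization for simplicity (in exact arithmetic only the previous two vectors are needed); the sketch assumes no breakdown occurs.

```python
import numpy as np

def lanczos(A, b, k):
    """k steps of Lanczos: returns Q, whose columns are an orthonormal
    basis of K_k(A, b), and the tridiagonal T = Q^T A Q. Uses full
    reorthogonalization for robustness; assumes no breakdown."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # orthogonalize against all q_i
        beta[j] = np.linalg.norm(w)
        q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

def lanczos_fa_inv(A, b, k):
    """Lanczos-FA approximation to A^{-1} b: ||b||_2 * Q * T^{-1} * e_1."""
    Q, T = lanczos(A, b, k)
    e1 = np.zeros(k)
    e1[0] = 1.0
    return np.linalg.norm(b) * (Q @ np.linalg.solve(T, e1))
```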
Positive definite systems
When $A$ is positive definite, the Lanczos-FA iterate is equivalent to the iterate produced by the well-known conjugate gradient (CG) algorithm.
The simplest proof of this amounts to showing that the Lanczos-FA iterate satisfies the same optimality property as CG: it minimizes the $A$-norm of the error over the Krylov subspace.
Proof.
An arbitrary vector in $\mathcal{K}_k(A,b)$ can be written as $Qc$ for some $c \in \mathbb{R}^k$.
Thus,
$$\min_{x \in \mathcal{K}_k(A,b)} \|A^{-1}b - x\|_A = \min_{c \in \mathbb{R}^k} \|A^{-1}b - Qc\|_A = \min_{c \in \mathbb{R}^k} \|A^{1/2}(A^{-1}b - Qc)\|_2 = \min_{c \in \mathbb{R}^k} \|A^{-1/2}b - A^{1/2}Qc\|_2.$$
The last expression is a linear least squares problem in $c$; writing out the solution to its normal equations, we find that the minimum is attained at
$$c = \big((A^{1/2}Q)^\mathsf{T}(A^{1/2}Q)\big)^{-1}(A^{1/2}Q)^\mathsf{T}(A^{-1/2}b) = (Q^\mathsf{T} A Q)^{-1} Q^\mathsf{T} b = \|b\|_2\, T^{-1} e_1,$$
where the final equality uses that $Q^\mathsf{T} b = \|b\|_2 e_1$, since $q_1 = b / \|b\|_2$.
Thus, the minimizer is
$$x = Qc = \|b\|_2\, Q T^{-1} e_1 = \mathsf{lan}_k(1/x).$$
This proves the claim. $\square$
This optimality property allows us to derive a number of a priori bounds for the convergence of Lanczos-FA (and, equivalently, CG).
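The equivalence is also easy to check numerically. The following sketch, which reuses `lanczos_fa_inv` from above together with a textbook CG implementation (the test problem is our own), shows the two iterates agree to roughly machine precision on a random positive definite system:

```python
import numpy as np

def cg(A, b, k):
    """Textbook conjugate gradient; returns the iterate after k steps."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(k):
        Ap = A @ p
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n, k = 200, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # random symmetric positive definite matrix
b = rng.standard_normal(n)

# The CG and Lanczos-FA iterates agree (up to rounding errors).
print(np.linalg.norm(cg(A, b, k) - lanczos_fa_inv(A, b, k)))
```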
Indefinite systems
If $A$ is not positive definite, $T$ may have an eigenvalue at or near zero, and the error of the Lanczos-FA approximation to $A^{-1}b$ can be arbitrarily large.
The MINRES iterates are defined as
$$\hat{y}_k := \operatorname*{argmin}_{y \in \mathcal{K}_k(A,b)} \|b - Ay\|_2 = \operatorname*{argmin}_{y \in \mathcal{K}_k(A,b)} \|A^{-1}b - y\|_{A^2},$$
where the second expression follows since $\|b - Ay\|_2 = \|A(A^{-1}b - y)\|_2 = \|A^{-1}b - y\|_{A^2}$.
Define the residual vectors
$$r_k := b - A\, \mathsf{lan}_k(1/x), \qquad r_k^{\mathsf{M}} := b - A\hat{y}_k,$$
and note that the MINRES residual norms are non-increasing due to the optimality of the MINRES iterates.
In [1], it is shown that the CG residual norms are near the MINRES residual norms at iterations where MINRES makes good progress.
More precisely, the algorithms are related by
$$\|r_k\|_2 = \frac{\|r_k^{\mathsf{M}}\|_2}{\sqrt{1 - \big(\|r_k^{\mathsf{M}}\|_2 / \|r_{k-1}^{\mathsf{M}}\|_2\big)^2}}.$$
This relation says that when MINRES makes progress ($\|r_k^{\mathsf{M}}\|_2 \ll \|r_{k-1}^{\mathsf{M}}\|_2$) the CG residual norm is very similar, but when MINRES stagnates ($\|r_k^{\mathsf{M}}\|_2 \approx \|r_{k-1}^{\mathsf{M}}\|_2$) the CG residual norm spikes.
However, it does not guarantee that the overall convergence of CG is similar to that of MINRES.
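Both the relation and the spiking behavior are easy to observe numerically. The following sketch, which reuses `lanczos` from above, computes the Galerkin (Lanczos-FA/CG) and MINRES residual norms on a symmetric indefinite system and checks the identity; the construction of the test matrix is our own:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 100, 20
# Symmetric indefinite test matrix: eigenvalues on both sides of zero.
evals = np.concatenate([rng.uniform(-2, -0.01, n // 2),
                        rng.uniform(0.01, 2, n - n // 2)])
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = V @ np.diag(evals) @ V.T
b = rng.standard_normal(n)
nb = np.linalg.norm(b)

Q, T = lanczos(A, b, K)   # from the earlier sketch
rCG = [nb]                # Galerkin (Lanczos-FA / CG) residual norms
rM = [nb]                 # MINRES residual norms
for k in range(1, K + 1):
    e1 = np.zeros(k)
    e1[0] = 1.0
    xk = nb * (Q[:, :k] @ np.linalg.solve(T[:k, :k], e1))  # Galerkin iterate
    rCG.append(np.linalg.norm(b - A @ xk))
    c, *_ = np.linalg.lstsq(A @ Q[:, :k], b, rcond=None)   # MINRES iterate
    rM.append(np.linalg.norm(b - A @ (Q[:, :k] @ c)))

# Verify the identity; when MINRES stagnates, rho ~ 1 and the right-hand
# side blows up, matching the spikes in the Galerkin residual norms.
for k in range(1, K + 1):
    rho = rM[k] / rM[k - 1]
    print(k, rCG[k], rM[k] / np.sqrt(1 - rho**2))
```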
The following theorem from [2] does so. It is stated in terms of the full orthogonalization method (FOM) and GMRES, which reduce to the Galerkin (Lanczos-FA/CG) iterate and MINRES, respectively, when $A$ is symmetric; $r_j^{\mathsf{F}}$ and $r_k^{\mathsf{G}}$ denote the corresponding residuals. To the best of our knowledge, the result is new.
Theorem.
For every $k \geq 1$,
$$\min_{0 \leq j \leq k} \|r_j^{\mathsf{F}}\|_2 \leq \sqrt{k+1} \cdot \|r_k^{\mathsf{G}}\|_2.$$
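Here is a minimal numerical check of the bound in the general (nonsymmetric) setting, with the FOM and GMRES residual norms computed from an Arnoldi decomposition; the helper name and test problem are our own, and the sketch assumes no breakdown:

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: Q has orthonormal columns spanning K_{k+1}(A, b),
    and H is (k+1) x k upper Hessenberg with A Q[:, :k] = Q H."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(2)
n, K = 100, 15
A = rng.standard_normal((n, n))   # generic nonsymmetric test matrix
b = rng.standard_normal(n)
beta = np.linalg.norm(b)

Q, H = arnoldi(A, b, K)
rF, rG = [beta], [beta]
for k in range(1, K + 1):
    e1 = np.zeros(k + 1)
    e1[0] = beta
    # FOM iterate: Galerkin condition with the square part of H.
    y = np.linalg.solve(H[:k, :k], e1[:k])
    rF.append(np.linalg.norm(b - A @ (Q[:, :k] @ y)))
    # GMRES iterate: least squares with the rectangular Hessenberg.
    y, *_ = np.linalg.lstsq(H[: k + 1, :k], e1, rcond=None)
    rG.append(np.linalg.norm(b - A @ (Q[:, :k] @ y)))

# The best FOM residual so far is within sqrt(k+1) of the GMRES residual.
for k in range(1, K + 1):
    print(k, min(rF[: k + 1]) <= np.sqrt(k + 1) * rG[k])
```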
It also turns out that the bound is sharp.
Theorem.
For every $k \geq 1$ and $\varepsilon > 0$, there exists a matrix $A$ and vector $b$ for which
$$\min_{j \leq k} \|r_j^{\mathsf{F}}\|_2 \geq (\sqrt{k+1} - \varepsilon) \cdot \|r_k^{\mathsf{G}}\|_2.$$
References
1. Cullum, J.; Greenbaum, A. Relations Between Galerkin and Norm-Minimizing Iterative Methods for Solving Linear Systems. SIAM Journal on Matrix Analysis and Applications 1996, 17, 223–247, doi:10.1137/S0895479893246765.
2. Chen, T.; Meurant, G. Near-Optimal Convergence of the Full Orthogonalization Method, 2024.