The above definition is sometimes called a reduced or thin SVD.
By extending $U$ to an orthonormal basis for $\mathbb{R}^n$ and appending zeros below $\Sigma$, we obtain a full SVD.
Truncating the SVD after its $k$ largest singular values yields the best rank-$k$ approximation to the matrix, in both the spectral and Frobenius norms (the Eckart–Young theorem).
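To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the text; the matrix sizes are arbitrary): it truncates a thin SVD after $k$ terms and checks that the spectral-norm error equals $\sigma_{k+1}$, as Eckart–Young predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))                 # illustrative sizes

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD

k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # rank-k truncation

# Eckart-Young: the spectral-norm error of the best rank-k
# approximation equals the (k+1)-st singular value.
print(np.linalg.norm(A - A_k, 2), s[k])           # the two values agree
```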
A problem/task is said to be well-conditioned if small perturbations to the problem do not change the solution by much.
If a small change to the problem can cause a large change in the solution, then the problem is said to be ill-conditioned.
Ill-conditioned problems are difficult (or impossible) to solve accurately on computers, since even the small errors incurred when representing the problem on a computer can drastically change the solution.
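A small numerical illustration (my own construction, not from the text): the Hilbert matrix is a classic ill-conditioned example, and a perturbation of size $10^{-10}$ in the right-hand side of $Ax = b$ already produces a wildly different computed solution.

```python
import numpy as np

n = 12
A = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n))  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true

rng = np.random.default_rng(0)
x = np.linalg.solve(A, b)
x_pert = np.linalg.solve(A, b + 1e-10 * rng.standard_normal(n))

print(np.linalg.norm(x - x_true))       # already inaccurate
print(np.linalg.norm(x_pert - x_true))  # tiny perturbation, large change
```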
In numerical linear algebra, the condition number of a matrix $A$ is defined as
$$\mathrm{cond}(A) := \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},$$
where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the largest and smallest singular values of $A$, respectively.
The conditioning of many core linear algebra tasks, such as solving the linear system $Ax = b$, depends on the condition number.
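For example (an illustrative check, not from the text), the condition number can be computed directly from the singular values and agrees with NumPy's built-in `np.linalg.cond`:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
cond = s[0] / s[-1]                      # sigma_max / sigma_min

print(cond, np.linalg.cond(A, 2))        # agree up to rounding
```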
Krylov subspace methods are an important and powerful class of algorithms in numerical linear algebra.
Such algorithms make use of the information in the so-called Krylov subspace $\mathcal{K}_k(A, x) := \operatorname{span}\{x, Ax, A^2 x, \ldots, A^{k-1} x\}$.
Computationally, it is useful to obtain an orthonormal basis for relevant subspaces. In the case of Krylov subspaces, this can be done by the Arnoldi algorithm.
When applied to a matrix $A \in \mathbb{R}^{n \times n}$ for $k$ iterations, the Arnoldi algorithm produces an orthonormal basis $Q \in \mathbb{R}^{n \times k}$ for the Krylov subspace $\mathcal{K}_k(A, x)$ and an upper-Hessenberg matrix $H \in \mathbb{R}^{k \times k}$ such that $H = Q^T A Q$.
In the special case that $A$ is symmetric, we observe that $Q^T A Q$ is also symmetric.
This means $H$ is both upper-Hessenberg and symmetric, and thus tridiagonal.
This allows substantial simplifications to the Arnoldi algorithm, which is then called the Lanczos algorithm.
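The following is a minimal Arnoldi sketch in NumPy (my own implementation, not code from the text, with no breakdown handling). It builds $Q$ and $H$ via modified Gram–Schmidt and then checks the properties just discussed: $Q$ has orthonormal columns, $H = Q^T A Q$, and for symmetric $A$ the matrix $H$ is symmetric, hence tridiagonal, which is exactly the structure the Lanczos algorithm exploits.

```python
import numpy as np

def arnoldi(A, x, k):
    """Build an orthonormal basis Q of K_k(A, x) and the upper-Hessenberg
    matrix H = Q^T A Q using modified Gram-Schmidt."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    H = np.zeros((k, k))
    Q[:, 0] = x / np.linalg.norm(x)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):           # orthogonalize against q_0..q_j
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(2)
B = rng.standard_normal((100, 100))
A = B + B.T                               # symmetric test matrix
Q, H = arnoldi(A, rng.standard_normal(100), k=8)

print(np.allclose(Q.T @ Q, np.eye(8)))    # orthonormal basis
print(np.allclose(H, Q.T @ A @ Q))        # H = Q^T A Q
print(np.allclose(H, H.T))                # symmetric Hessenberg => tridiagonal
```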