This is a companion piece to https://doi.org/10.1063/5.0166678

Code to replicate the figures in the corresponding paper can be found on GitHub.

See also the spectral_density Python package.

Consider a $d\times d$ Hermitian matrix $\mathbf{H}$ with eigenvalues $\{\lambda_i\}$ and eigenvectors $\{\mathbf{u}_i\}$. Given a vector $\mathbf{v}$, we may be interested in the local density of states (LDOS) $\hat{\rho}(x) = \sum_{i=1}^{d} |\langle \mathbf{v} | \mathbf{u}_i \rangle|^2 \delta(x-\lambda_i).$

We can’t efficiently compute $\hat{\rho}(x)$ exactly, as doing so is equivalent to determining all of the eigenvalues of $\mathbf{H}$. However, we can compute the moments $\int x^n \hat{\rho}(x) \mathrm{d}{x} = \langle \mathbf{v}|\mathbf{H}^n|\mathbf{v}\rangle$ for $n=0,1, \ldots, s$.

The aim of the Kernel Polynomial Method (KPM) is to produce a “density” $\rho_{\text{KPM}}(x)$ approximating $\hat{\rho}(x)$.
Towards this end, let $\sigma(x)$ be a fixed *reference density* and expand $\hat{\rho}(x)/\sigma(x)$ as a formal polynomial series
$\frac{\hat{\rho}(x)}{\sigma(x)}
= \sum_{n=0}^{\infty} \mu_n p_n(x),$
where $\{p_n\}$ are the orthonormal polynomials with respect to $\sigma(x)$.
Using the orthonormality of the $\{p_n\}$ with respect to $\sigma(x)$, we can compute $\mu_n$ by
$\mu_n
= \int \sigma(x) \frac{\hat{\rho}(x)}{\sigma(x)} p_n(x) \mathrm{d}x
= \int \hat{\rho}(x) p_n(x) \mathrm{d}x
= \langle \mathbf{v} | p_n(\mathbf{H}) | \mathbf{v} \rangle.$
Thus, we see the $\{\mu_n\}$ are the so-called (modified) moments of $\hat{\rho}(x)$ with respect to the orthogonal polynomials of $\sigma(x)$ and can be computed without explicit knowledge of $\hat{\rho}(x)$ using the expression above.

Truncating the series after the first $s$ moments gives an approximation $\rho_{\text{KPM}}(x)$ to $\hat{\rho}(x)$ defined by $\rho_{\text{KPM}}(x) = \sigma(x) \sum_{n=0}^{s} \mu_n p_n(x).$ As the degree of the approximation $s\to\infty$, $\rho_{\text{KPM}}(x) \to \sigma(x) \sum_{n=0}^{\infty} \mu_n p_n(x) = \hat{\rho}(x).$

Typically, one chooses the reference density $\sigma(x) \propto 1/\sqrt{1-x^2}$ to be the orthogonality measure for the Chebyshev polynomials. In this case, the modified moments $\langle \mathbf{v} | T_n (\mathbf{H})| \mathbf{v}\rangle$ can be computed by applying the Chebyshev three-term recurrence to $\mathbf{H}$ and $\mathbf{v}$. For such an approach to work, it is critical that $\mathbf{H}$ is scaled so that its spectrum is contained in $[-1,1]$.
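A minimal sketch of this Chebyshev variant (with a random matrix as a stand-in, crudely pre-scaled into $[-1,1]$). Since $p_0 = 1$ and $p_n = \sqrt{2}\,T_n$ for $\sigma(x) = 1/(\pi\sqrt{1-x^2})$, the expansion can be written in the familiar form with ordinary Chebyshev moments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Hermitian matrix, scaled so its spectrum lies in [-1, 1].
d = 200
A = rng.standard_normal((d, d))
H = (A + A.T) / 2
H /= 1.05 * np.linalg.norm(H, 2)  # crude a-priori spectral scaling

v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# Chebyshev moments mu_n = <v|T_n(H)|v> via the three-term recurrence
# T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x); only matvecs with H are needed.
s = 80
w_prev, w = v.copy(), H @ v
mu = [v @ w_prev, v @ w]
for n in range(2, s + 1):
    w_prev, w = w, 2 * (H @ w) - w_prev
    mu.append(v @ w)
mu = np.array(mu)

# KPM density on a grid:
# rho_KPM(x) = (mu_0 + 2 sum_{n>=1} mu_n T_n(x)) / (pi sqrt(1 - x^2)).
x = np.linspace(-0.99, 0.99, 500)
T = np.cos(np.outer(np.arange(s + 1), np.arccos(x)))  # T_n(x) = cos(n arccos x)
rho = (mu[0] + 2 * mu[1:] @ T[1:]) / (np.pi * np.sqrt(1 - x**2))
```

In practice one would typically also apply damping coefficients (e.g. Jackson's kernel) to suppress Gibbs oscillations; that step is omitted here.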

The Lanczos method for matrix function approximation (Lanczos-FA) can be used to approximate $f(\mathbf{A})\mathbf{b}$ (read more about this task here). For polynomials, the method is mathematically exact, so we can use it to compute $\langle \mathbf{v} | p_n(\mathbf{H}) | \mathbf{v} \rangle$. In the present paper, the key observation is that the expensive part of this approach (running Lanczos) is completely independent of the choice of reference density $\sigma(x)$. This means we can obtain many different KPM approximations essentially for free. This avoids the need to determine how to scale $\mathbf{H}$ a priori and also allows us to use more “exotic” reference densities.
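A sketch of this idea, assuming plain Lanczos without reorthogonalization (the matrix and vector here are random stand-ins): a single Lanczos run produces a tridiagonal matrix $\mathbf{T}$ whose spectral factorization gives a Gaussian quadrature rule for $\hat{\rho}$, from which moments against any reference density can then be formed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical test problem.
d = 200
A = rng.standard_normal((d, d))
H = (A + A.T) / 2
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

def lanczos(H, v, k):
    """Plain Lanczos (no reorthogonalization); returns the k x k tridiagonal T."""
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q_prev, q = np.zeros_like(v), v.copy()
    for j in range(k):
        w = H @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

k = 40
T = lanczos(H, v, k)

# In exact arithmetic, <v|p(H)|v> = e1^T p(T) e1 for any polynomial p with
# deg p <= 2k - 1, so one Lanczos run serves every reference density sigma.
theta, S = np.linalg.eigh(T)      # quadrature nodes (Ritz values) ...
weights = S[0] ** 2               # ... and weights (squared first components)
mu3 = np.sum(weights * theta**3)  # e.g. the power moment <v|H^3|v>
```

Note that the quadrature nodes and weights depend only on $\mathbf{H}$ and $\mathbf{v}$; switching reference densities only changes which polynomials are evaluated at the nodes.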

There is a lot of concern that the Lanczos method is unstable. Another contribution of this paper is to spread awareness of how the instabilities of Lanczos actually impact methods like Lanczos-FA. For example, [1] shows that the method accurately computes (bounded) polynomial moments even in finite-precision arithmetic!

1. Knizhnerman, L.A. The Simple Lanczos Procedure: Estimates of the Error of the Gauss Quadrature Formula and Their Applications. *Comput. Math. Math. Phys.* **1996**, *36*, 1481–1492.