Computational approaches to today’s most pressing and world-changing questions rely on subroutines for fundamental linear algebraic tasks. There is a large body of work on the design and analysis of algorithms for an increasingly prevalent subset of such tasks: those involving matrix functions of Hermitian (or real symmetric) matrices. Here, $\mathbf{A}$ will be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_1 \le \cdots \le \lambda_n$ and (orthonormal) eigenvectors $\mathbf{u}_1, \ldots, \mathbf{u}_n$; i.e., $\mathbf{A} = \sum_{i=1}^{n} \lambda_i \mathbf{u}_i \mathbf{u}_i^*$. A matrix function transforms the eigenvalues of a Hermitian (or symmetric) matrix according to some scalar function, while leaving the eigenvectors untouched.
The matrix function $f(\mathbf{A})$, induced by $f$ and $\mathbf{A}$, is defined as
$$f(\mathbf{A}) := \sum_{i=1}^{n} f(\lambda_i)\, \mathbf{u}_i \mathbf{u}_i^*.$$
Perhaps the most well-known example of a matrix function is the matrix inverse $\mathbf{A}^{-1}$, which corresponds to the scalar function $f(x) = 1/x$. Other common matrix functions include the matrix sign, logarithm, exponential, square root, and inverse square root, each of which has many applications throughout the mathematical sciences.
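As a concrete illustration of the definition above, the following minimal sketch (in NumPy, using a small random symmetric test matrix that is not taken from the text, and a hypothetical helper named `matfunc`) forms $f(\mathbf{A})$ from an eigendecomposition and checks it against reference routines for the exponential and the inverse.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical small symmetric (hence Hermitian) test matrix, for illustration only.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

# Eigendecomposition: A = U diag(lam) U^*
lam, U = np.linalg.eigh(A)

def matfunc(f, lam, U):
    """Apply the scalar function f to the eigenvalues, leaving eigenvectors untouched."""
    return U @ np.diag(f(lam)) @ U.conj().T

# Sanity checks against standard implementations.
print(np.allclose(matfunc(np.exp, lam, U), expm(A)))                     # matrix exponential
print(np.allclose(matfunc(lambda x: 1.0 / x, lam, U), np.linalg.inv(A)))  # matrix inverse
```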
A common task involving matrix functions is computing the product $f(\mathbf{A})\mathbf{b}$ of a matrix function with a fixed vector $\mathbf{b}$; for instance, the matrix inverse applied to a vector corresponds to the solution of a linear system of equations. Beyond the multitude of applications of linear systems, matrix functions applied to vectors are used for computing the overlap operator in quantum chromodynamics, solving differential equations in applied mathematics, Gaussian process sampling in statistics, principal component projection and regression in data science, and a range of other applications.
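The eigendecomposition gives a simple (if expensive) way to apply $f(\mathbf{A})$ to a vector without forming $f(\mathbf{A})$ as a dense matrix. A minimal sketch under the same assumptions as above (random symmetric test matrix, hypothetical helper `matfunc_vec`):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
b = rng.standard_normal(200)

lam, U = np.linalg.eigh(A)

def matfunc_vec(f, lam, U, b):
    """Compute f(A) b = U f(Lambda) (U^* b) without building f(A) explicitly."""
    return U @ (f(lam) * (U.conj().T @ b))

# Example: the matrix inverse applied to b is the solution of A x = b.
x = matfunc_vec(lambda t: 1.0 / t, lam, U, b)
print(np.allclose(A @ x, b))
```

For large matrices one would avoid the full eigendecomposition entirely, but this identity is what faster approximation schemes aim to reproduce.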
Another related and especially interesting task involving matrix functions is estimating the spectral sum (the trace of a matrix function),
$$\operatorname{tr}(f(\mathbf{A})) = \sum_{i=1}^{n} f(\lambda_i).$$
Applications of spectral sums include characterizing the degree of protein folding in biology, studying the thermodynamics of spin systems in quantum physics and chemistry, benchmarking quantum devices in quantum information theory, maximum likelihood estimation in statistics, designing better public transit in urban planning, and finding triangle counts and other structure in network science.
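To make the spectral-sum identity concrete, the sketch below (assuming a random positive-definite test matrix so that the logarithm is defined) computes $\operatorname{tr}(\log(\mathbf{A})) = \sum_i \log(\lambda_i)$ and checks it against the log-determinant, which it equals.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((100, 100))
A = B @ B.T + np.eye(100)   # symmetric positive definite, so log(lambda_i) is defined

lam = np.linalg.eigvalsh(A)

# Spectral sum: tr(f(A)) = sum_i f(lambda_i); for f = log this is the log-determinant.
spectral_sum = np.sum(np.log(lam))
sign, logdet = np.linalg.slogdet(A)
print(np.isclose(spectral_sum, logdet))
```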
The trace of a matrix function is intimately related to the spectral measure of $\mathbf{A}$, which encodes the eigenvalues of $\mathbf{A}$.
The cumulative empirical spectral measure (CESM) $\Phi$, induced by $\mathbf{A}$, is defined by
$$\Phi(x) := \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}[\lambda_i \le x].$$
Here $\mathbb{1}[\text{true}] := 1$ and $\mathbb{1}[\text{false}] := 0$.
Not only is $\Phi(x)$ itself a spectral sum for each $x$, but
$$\operatorname{tr}(f(\mathbf{A})) = n \int f \,\mathrm{d}\Phi.$$
In this sense, approximating the CESM is equivalent to approximating spectral sums. However, approximations to $\Phi$ are also useful in that they provide a global picture of the spectrum of $\mathbf{A}$. Such coarse-grained approximations are used in electronic structure computations and other tasks in physics1, probing the behavior of neural networks in machine learning, load balancing modern parallel eigensolvers in numerical linear algebra, and computing the product of matrix functions with vectors.
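A small sketch of these two facts, using a random symmetric test matrix and a hypothetical helper `cesm`: $\Phi(x)$ is computed by counting eigenvalues, and a spectral sum is recovered as $n$ times the average of $f$ over the eigenvalues, i.e., $n$ times the integral of $f$ against $\mathrm{d}\Phi$.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2
n = A.shape[0]

lam = np.linalg.eigvalsh(A)

def cesm(x):
    """CESM: Phi(x) = (1/n) * #{ eigenvalues <= x }."""
    return np.mean(lam <= x)

# Phi(x) is itself a spectral sum (with f = 1[. <= x]), up to the 1/n factor ...
x0 = 0.5
print(np.isclose(cesm(x0), np.sum(lam <= x0) / n))

# ... and tr(f(A)) = n * integral of f against dPhi = n * (1/n) * sum_i f(lambda_i).
f = np.exp
print(np.isclose(np.sum(f(lam)), n * np.mean(f(lam))))
```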
In physics, the “density” $\mathrm{d}\Phi$ is often called the density of states (DOS).↩︎