Numerical computation of the equilibrium-reduced density matrix for strongly coupled open quantum systems

Tyler Chen

This is a companion piece to

Code to replicate the figures in the corresponding paper can be found on GitHub.


A quantum system is described by a Hamiltonian matrix $\mathbf{H}$. When the system is at thermal equilibrium at temperature $1/\beta$, the state of the system is described by the matrix
$$ \bm{\rho} = \exp(-\beta \mathbf{H}) / Z(\beta), \qquad Z(\beta) = \operatorname{tr}(\exp(-\beta \mathbf{H})). $$
The function $Z(\beta)$ is called the partition function, and gives us access to all kinds of thermodynamic quantities such as the energy, specific heat, magnetization, etc.
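For a system small enough to handle densely, the definition above can be computed directly. The sketch below uses a toy $2 \times 2$ Hamiltonian (the matrix itself is just an illustrative example, not from the paper) and `scipy.linalg.expm` for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Toy 2x2 Hamiltonian (illustrative example); beta is the inverse temperature.
beta = 1.0
H = np.array([[1.0, 0.2],
              [0.2, -1.0]])

expH = expm(-beta * H)   # matrix exponential exp(-beta H)
Z = np.trace(expH)       # partition function Z(beta)
rho = expH / Z           # equilibrium (Gibbs) state

# rho is positive semidefinite with unit trace.
print(np.trace(rho))
```

Dividing by $Z(\beta)$ is exactly what normalizes $\bm{\rho}$ to have unit trace.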

Often we are interested in the state of some subsystem of the total system. This is represented by the reduced density matrix
$$ \bm{\rho}^*(\beta) = \operatorname{tr}_{\mathrm{b}}(\exp(-\beta \mathbf{H})) / Z(\beta), $$
where $\operatorname{tr}_{\mathrm{b}}(\cdot)$ is the partial trace described below.

Exponentially big linear algebra

Computing the trace (and partial trace) of a given matrix is trivial: for the trace you just sum the diagonal entries, and the partial trace is similarly easy. However, we are starting with $\mathbf{H}$, not $\exp(-\beta \mathbf{H})$, and this presents some issues.

For a system with $N$ sites, $\mathbf{H}$ has dimension $2^N \times 2^N$. This means that even for reasonably small $N$, storing $\mathbf{H}$ densely is impossible. For instance, if $N = 20$, then $2^N$ is about one million, and storing a $2^N \times 2^N$ matrix of double precision floating point numbers (64 bits per number) would require over 8 terabytes of memory! Increasing $N$ by one quadruples the memory cost, so even if we have 8 terabytes of memory, we'll quickly run out for $N$ even just a bit larger.
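The arithmetic behind that claim is quick to check:

```python
# Memory needed to store a dense 2^N x 2^N matrix of float64 (8 bytes each).
N = 20
dim = 2 ** N                     # about one million
bytes_needed = dim * dim * 8     # 8 bytes per entry
print(bytes_needed / 1e12, "TB")  # about 8.8 TB
```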

Fortunately, $\mathbf{H}$ typically has a very sparse representation, often with just $O(N)$ nonzero entries per row. While this means we can store $\mathbf{H}$ itself, $\exp(-\beta \mathbf{H})$ is typically dense and therefore intractable to store. Moreover, even if we could store $\exp(-\beta \mathbf{H})$, computing it would be a massive task!

Stochastic trace estimation

There are lots of algorithms for estimating $\mathbf{v}^\mathsf{T} f(\mathbf{H}) \mathbf{v}$ using only matrix-vector products with $\mathbf{H}$. For instance, the Lanczos method for matrix function approximation (Lanczos-FA) is a common choice.
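A minimal sketch of Lanczos-FA for the quadratic form $\mathbf{v}^\mathsf{T} f(\mathbf{H}) \mathbf{v}$ follows; it uses full reorthogonalization for simplicity and assumes no breakdown, and the function and parameter names are my own:

```python
import numpy as np

def lanczos_fa_quadform(matvec, v, f, k):
    """Estimate v^T f(H) v with k Lanczos steps, accessing H only via matvec.
    Uses full reorthogonalization; assumes no breakdown (beta_j != 0)."""
    n = v.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = matvec(Q[:, j])
        alpha[j] = Q[:, j] @ w
        # reorthogonalize against all previous Lanczos vectors
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # tridiagonal Jacobi matrix T and its eigendecomposition
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    # v^T f(H) v  ~  ||v||^2 * e_1^T f(T) e_1
    return (v @ v) * np.sum(S[0, :] ** 2 * f(theta))
```

For $f(t) = \exp(-\beta t)$ this converges rapidly in $k$, since the exponential is well approximated by low-degree polynomials on the spectrum of $\mathbf{H}$.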

There are lots of nice overviews on stochastic trace estimation, and this blog post by Ethan Epperly is especially good. For our purposes, we only need to know that if $\mathbf{v}$ is a random vector for which $\mathbb{E}[\mathbf{v} \mathbf{v}^\mathsf{T}] = \mathbf{I}$ (for instance, with entries that are independent, mean zero, and unit variance), then
$$ \mathbb{E}\big[\mathbf{v}^\mathsf{T} \mathbf{A} \mathbf{v}\big] = \operatorname{tr}(\mathbf{A}). $$
A simple way to generate such a $\mathbf{v}$ is to take each entry to be an independent standard normal random variable. We will call estimators of the form $\mathbf{v}^\mathsf{T} \mathbf{A} \mathbf{v}$ quadratic trace estimators.
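A quick numerical check of this identity (the classic Hutchinson estimator, averaged over many Gaussian probes; the test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1000
A = rng.standard_normal((n, n))
A = A @ A.T  # symmetric test matrix

# Average of m quadratic forms v^T A v with standard normal probes v.
V = rng.standard_normal((n, m))           # each column is one probe
est = np.mean(np.einsum("ij,ij->j", V, A @ V))
print(est, np.trace(A))  # the two agree to within sampling error
```

The variance of each quadratic form scales with $\|\mathbf{A}\|_F^2$, so averaging $m$ probes shrinks the error like $1/\sqrt{m}$.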

Partial trace estimation

Suppose we can partition a matrix $\mathbf{A}$ as
$$ \mathbf{A} = \begin{bmatrix} \mathbf{A}_{1,1} & \mathbf{A}_{1,2} & \cdots & \mathbf{A}_{1,d_{\mathrm{a}}} \\ \mathbf{A}_{2,1} & \mathbf{A}_{2,2} & \cdots & \mathbf{A}_{2,d_{\mathrm{a}}} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}_{d_{\mathrm{a}},1} & \mathbf{A}_{d_{\mathrm{a}},2} & \cdots & \mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}} \end{bmatrix}. $$
Then the partial trace of $\mathbf{A}$ is defined as
$$ \operatorname{tr}_{\mathrm{b}}(\mathbf{A}) := \begin{bmatrix} \operatorname{tr}(\mathbf{A}_{1,1}) & \operatorname{tr}(\mathbf{A}_{1,2}) & \cdots & \operatorname{tr}(\mathbf{A}_{1,d_{\mathrm{a}}}) \\ \operatorname{tr}(\mathbf{A}_{2,1}) & \operatorname{tr}(\mathbf{A}_{2,2}) & \cdots & \operatorname{tr}(\mathbf{A}_{2,d_{\mathrm{a}}}) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},1}) & \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},2}) & \cdots & \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}}) \end{bmatrix}. $$
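For a matrix small enough to form explicitly, this definition is one `reshape` and one `einsum` away (the helper name is my own):

```python
import numpy as np

def partial_trace_b(A, da, db):
    """tr_b(A) for A on a (da*db)-dimensional space: reshape into
    da x db x da x db and trace out the db-dimensional factor."""
    return np.einsum("ikjk->ij", A.reshape(da, db, da, db))

da, db = 3, 4
rng = np.random.default_rng(1)
A = rng.standard_normal((da * db, da * db))
P = partial_trace_b(A, da, db)

# Entry (i, j) of tr_b(A) is the trace of the (i, j)-th db x db block.
print(np.allclose(P[0, 1], np.trace(A[0:db, db:2*db])))  # True
```

Note that $\operatorname{tr}(\operatorname{tr}_{\mathrm{b}}(\mathbf{A})) = \operatorname{tr}(\mathbf{A})$, since summing the diagonal block traces sums the whole diagonal.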

Note that each entry of the partial trace is obtained by taking the trace of a certain matrix. This suggests we can use quadratic trace estimators to approximate each of the entries of the partial trace! Indeed,
$$ (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v})^\mathsf{T} \mathbf{A}\, (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}) = \begin{bmatrix} \mathbf{v}^\mathsf{T} \mathbf{A}_{1,1} \mathbf{v} & \mathbf{v}^\mathsf{T} \mathbf{A}_{1,2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T} \mathbf{A}_{1,d_{\mathrm{a}}} \mathbf{v} \\ \mathbf{v}^\mathsf{T} \mathbf{A}_{2,1} \mathbf{v} & \mathbf{v}^\mathsf{T} \mathbf{A}_{2,2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T} \mathbf{A}_{2,d_{\mathrm{a}}} \mathbf{v} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{v}^\mathsf{T} \mathbf{A}_{d_{\mathrm{a}},1} \mathbf{v} & \mathbf{v}^\mathsf{T} \mathbf{A}_{d_{\mathrm{a}},2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T} \mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}} \mathbf{v} \end{bmatrix} $$
provides an unbiased estimator for $\operatorname{tr}_{\mathrm{b}}(\mathbf{A})$.
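The block identity above can be verified directly for a single probe (the dimensions are arbitrary choices for the check):

```python
import numpy as np

da, db = 3, 4
rng = np.random.default_rng(2)
A = rng.standard_normal((da * db, da * db))
v = rng.standard_normal(db)

# I_da (x) v has shape (da*db, da): column j carries v in block j.
Iv = np.kron(np.eye(da), v.reshape(db, 1))
est = Iv.T @ A @ Iv  # da x da matrix of quadratic forms v^T A_{i,j} v

# Compare entry (1, 2) against the quadratic form on block A_{1,2}.
print(np.allclose(est[1, 2], v @ A[db:2*db, 2*db:3*db] @ v))  # True
```

In practice one never forms $\mathbf{I}_{d_{\mathrm{a}}} \otimes \mathbf{v}$ densely; its action amounts to applying $\mathbf{A}$ to $d_{\mathrm{a}}$ structured vectors.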

Given independent and identically distributed copies $\mathbf{v}_1, \ldots, \mathbf{v}_m$ of $\mathbf{v}$, we arrive at the estimator
$$ \widehat{\mathsf{tr}}_{\mathrm{b}}^{m}(\mathbf{A}) := \frac{1}{m} \sum_{i=1}^{m} (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}_i)^\mathsf{T} \mathbf{A}\, (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}_i). $$
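Putting the pieces together, a sketch of the averaged estimator (function name mine; the dense Kronecker probe is for clarity, not efficiency):

```python
import numpy as np

def partial_trace_estimate(A, da, db, m, rng):
    """Monte Carlo estimate of tr_b(A): average m Kronecker-probe
    quadratic forms (I_da (x) v_i)^T A (I_da (x) v_i)."""
    est = np.zeros((da, da))
    for _ in range(m):
        v = rng.standard_normal(db)
        Iv = np.kron(np.eye(da), v.reshape(db, 1))
        est += Iv.T @ A @ Iv
    return est / m

da, db = 2, 64
rng = np.random.default_rng(3)
A = rng.standard_normal((da * db, da * db))
exact = np.einsum("ikjk->ij", A.reshape(da, db, da, db))
approx = partial_trace_estimate(A, da, db, 2000, rng)
print(np.max(np.abs(approx - exact)))  # shrinks like 1/sqrt(m)
```

For $\bm{\rho}^*(\beta)$ one takes $\mathbf{A} = \exp(-\beta \mathbf{H})$, with each quadratic form computed matrix-free (e.g. via Lanczos-FA) rather than from an explicit matrix.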