# Numerical computation of the equilibrium-reduced density matrix for strongly coupled open quantum systems

This is a companion piece to https://doi.org/10.1063/5.0099761

Code to replicate the figures in the corresponding paper can be found on GitHub.

## Introduction

A quantum system is described by a Hamiltonian matrix $\mathbf{H}$. When the system is at thermal equilibrium at temperature $1/\beta$, the state of the system is described by the matrix $\bm{\rho} = \exp(-\beta \mathbf{H}) / Z(\beta) ,\qquad Z(\beta) = \operatorname{tr}(\exp(-\beta \mathbf{H})).$ The function $Z(\beta)$ is called the partition function, and gives us access to all kinds of thermodynamic quantities such as the energy, specific heat, and magnetization.
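For small systems this can all be computed directly. Here is a minimal sketch (the 2-site $\sigma_z \otimes \sigma_z$ Hamiltonian is a hypothetical toy example, not from the paper), assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-site Hamiltonian (illustrative example): a sigma_z x sigma_z coupling
beta = 1.0
sz = np.diag([1.0, -1.0])
H = np.kron(sz, sz)

rho_unnorm = expm(-beta * H)   # exp(-beta H), computed densely
Z = np.trace(rho_unnorm)       # partition function Z(beta)
rho = rho_unnorm / Z           # equilibrium density matrix; tr(rho) = 1
```

This dense approach is exactly what becomes infeasible at scale, as discussed below.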

Often we are interested in the state of some subsystem of the total system. This is represented by $\bm{\rho}^*(\beta) = \operatorname{tr}_{\mathrm{b}}(\exp(-\beta \mathbf{H})) / Z(\beta),$ where $\operatorname{tr}_{\mathrm{b}}(\cdot)$ is the partial trace described below.

### Exponentially big linear algebra

Computing the trace (and partial trace) of a given matrix is trivial. For the trace you just sum the diagonal entries, and for the partial trace you do something similarly easy. However, we are starting with $\mathbf{H}$, not $\exp(-\beta \mathbf{H})$, and this presents some issues.

For a system with $N$ sites, the size of $\mathbf{H}$ is $2^N \times 2^N$. This means that even for reasonably small $N$, storing $\mathbf{H}$ densely is impossible. For instance, if $N=20$, then $2^N$ is about one million, and storing a $2^N\times 2^N$ matrix of double precision floating point numbers (64 bits per number) would require over 8 terabytes of memory! Increasing $N$ by one quadruples the memory cost, so even if we had 8 terabytes of memory, we'd quickly run out for $N$ even just a bit larger.
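The arithmetic behind this back-of-the-envelope estimate:

```python
# Memory needed to store a dense 2^N x 2^N matrix of doubles
N = 20
dim = 2 ** N                  # about one million
memory_bytes = dim * dim * 8  # 8 bytes per double-precision entry
print(memory_bytes / 1e12)    # about 8.8 terabytes
```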

Fortunately, $\mathbf{H}$ typically has a very sparse representation, often with just $O(N)$ nonzero entries per row. While this means we can store $\mathbf{H}$ itself, $\exp(-\beta \mathbf{H})$ is typically dense and therefore intractable to store. Moreover, even if we could store $\exp(-\beta \mathbf{H})$, computing it in the first place would be a major undertaking!
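As a concrete illustration of such sparsity, here is a sketch of a transverse-field Ising chain built with `scipy.sparse` (this particular model and the helper `tfim_hamiltonian` are illustrative choices, not taken from the paper):

```python
import scipy.sparse as sp

def tfim_hamiltonian(N, J=1.0, h=1.0):
    """Sparse transverse-field Ising Hamiltonian on an open chain of N sites:
    H = -J sum_i Z_i Z_{i+1} - h sum_i X_i."""
    I = sp.identity(2, format="csr")
    X = sp.csr_matrix([[0.0, 1.0], [1.0, 0.0]])
    Z = sp.csr_matrix([[1.0, 0.0], [0.0, -1.0]])

    def site_op(op, i):
        # Kronecker product placing `op` on site i and identities elsewhere
        ops = [I] * N
        ops[i] = op
        out = ops[0]
        for o in ops[1:]:
            out = sp.kron(out, o, format="csr")
        return out

    H = sp.csr_matrix((2**N, 2**N))
    for i in range(N - 1):
        H -= J * (site_op(Z, i) @ site_op(Z, i + 1))
    for i in range(N):
        H -= h * site_op(X, i)
    return H
```

Each row of this $2^N \times 2^N$ matrix has only $O(N)$ nonzeros, so matrix-vector products remain cheap even when the dense matrix would be far too large to form.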

### Stochastic trace estimation

There are lots of algorithms for estimating $\mathbf{v}^\mathsf{T}f(\mathbf{H})\mathbf{v}$ using only matrix-vector products with $\mathbf{H}$. For instance, the Lanczos method for matrix function approximation (Lanczos-FA) is a common choice.
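A rough sketch of how Lanczos-FA estimates $\mathbf{v}^\mathsf{T}f(\mathbf{H})\mathbf{v}$, assuming a symmetric $\mathbf{H}$ accessed only through matrix-vector products (the function name and interface here are illustrative, not from the paper; no reorthogonalization is performed):

```python
import numpy as np

def lanczos_quad(matvec, v, f, k):
    """Estimate v^T f(H) v using k steps of the Lanczos method."""
    nrm = np.linalg.norm(v)
    q = v / nrm
    q_prev = np.zeros_like(v)
    b_prev = 0.0
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    for j in range(k):
        # Three-term Lanczos recurrence
        w = matvec(q) - b_prev * q_prev
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < k - 1:
            b_prev = np.linalg.norm(w)
            beta[j] = b_prev
            q_prev, q = q, w / b_prev
    # Apply f to the small k x k tridiagonal matrix T and read off e1^T f(T) e1
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    return nrm**2 * float((evecs[0] ** 2) @ f(evals))
```

The key point is that only $k$ products with $\mathbf{H}$ are needed, and $f$ is evaluated only on a tiny $k\times k$ tridiagonal matrix.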

There are lots of nice overviews on stochastic trace estimation, and this blog post by Ethan Epperly is especially good. For our purposes, we only need to know that if $\mathbf{v}$ is a random vector for which $\mathbb{E}[\mathbf{v} \mathbf{v}^\mathsf{T}] = \mathbf{I}$ (e.g. the entries are uncorrelated with mean zero and unit variance), then $\mathbb{E}\big[\mathbf{v}^\mathsf{T}\mathbf{A} \mathbf{v}\big] = \operatorname{tr}(\mathbf{A}).$ A simple way to generate such a $\mathbf{v}$ is to take each entry to be an independent standard normal random variable. We call an estimator of the form $\mathbf{v}^\mathsf{T}\mathbf{A}\mathbf{v}$ a quadratic trace estimator.
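A minimal sketch of averaging such quadratic trace estimates with Gaussian probe vectors (the helper name `hutchinson_trace` is our own label for this standard construction):

```python
import numpy as np

def hutchinson_trace(A, m, rng):
    """Average m quadratic trace estimates v^T A v with Gaussian probes."""
    n = A.shape[0]
    total = 0.0
    for _ in range(m):
        v = rng.standard_normal(n)  # satisfies E[v v^T] = I
        total += v @ A @ v
    return total / m
```

In practice one replaces the dense product `A @ v` with whatever matrix-free routine computes $f(\mathbf{H})\mathbf{v}$, such as Lanczos-FA above.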

## Partial trace estimation

Suppose we can partition a matrix $\mathbf{A}$ as $\mathbf{A} = \begin{bmatrix} \mathbf{A}_{1,1} & \mathbf{A}_{1,2} & \cdots & \mathbf{A}_{1,d_{\mathrm{a}}} \\ \mathbf{A}_{2,1} & \mathbf{A}_{2,2} & \cdots & \mathbf{A}_{2,d_{\mathrm{a}}} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}_{d_{\mathrm{a}},1} & \mathbf{A}_{d_{\mathrm{a}},2} & \cdots & \mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}} \end{bmatrix}.$ Then the partial trace of $\mathbf{A}$ is defined as $\operatorname{tr}_{\mathrm{b}}(\mathbf{A}) := \begin{bmatrix} \operatorname{tr}(\mathbf{A}_{1,1}) & \operatorname{tr}(\mathbf{A}_{1,2}) & \cdots & \operatorname{tr}(\mathbf{A}_{1,d_{\mathrm{a}}}) \\ \operatorname{tr}(\mathbf{A}_{2,1}) & \operatorname{tr}(\mathbf{A}_{2,2}) & \cdots & \operatorname{tr}(\mathbf{A}_{2,d_{\mathrm{a}}}) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},1}) & \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},2}) & \cdots & \operatorname{tr}(\mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}}) \end{bmatrix}.$
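When all diagonal blocks have equal size, the partial trace can be computed by reshaping $\mathbf{A}$ into a 4-index array and summing over the paired block index. A sketch (the function name `partial_trace_b` is our own):

```python
import numpy as np

def partial_trace_b(A, da, db):
    """tr_b(A) for A acting on a (da*db)-dimensional space, tracing out b."""
    A4 = A.reshape(da, db, da, db)      # A4[i, j, k, l] = (block i,k)[j, l]
    return np.einsum("ijkj->ik", A4)    # trace over the block index j
```

A quick sanity check: for $\mathbf{A} = \mathbf{B} \otimes \mathbf{C}$, the partial trace is $\operatorname{tr}_{\mathrm{b}}(\mathbf{A}) = \operatorname{tr}(\mathbf{C})\, \mathbf{B}$.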

Note that each entry of the partial trace is obtained by taking the trace of a certain matrix. This suggests we can use quadratic trace estimators to approximate each entry of the partial trace! Indeed, $(\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v})^\mathsf{T} \mathbf{A} (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}) = \begin{bmatrix} \mathbf{v}^\mathsf{T}\mathbf{A}_{1,1} \mathbf{v} & \mathbf{v}^\mathsf{T}\mathbf{A}_{1,2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T}\mathbf{A}_{1,d_{\mathrm{a}}} \mathbf{v} \\ \mathbf{v}^\mathsf{T}\mathbf{A}_{2,1} \mathbf{v} & \mathbf{v}^\mathsf{T}\mathbf{A}_{2,2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T}\mathbf{A}_{2,d_{\mathrm{a}}} \mathbf{v} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{v}^\mathsf{T}\mathbf{A}_{d_{\mathrm{a}},1} \mathbf{v} & \mathbf{v}^\mathsf{T}\mathbf{A}_{d_{\mathrm{a}},2} \mathbf{v} & \cdots & \mathbf{v}^\mathsf{T}\mathbf{A}_{d_{\mathrm{a}},d_{\mathrm{a}}} \mathbf{v} \end{bmatrix}$ provides an unbiased estimator for $\operatorname{tr}_{\mathrm{b}}(\mathbf{A})$.

Given independent and identically distributed copies $\mathbf{v}_1, \ldots, \mathbf{v}_m$ of $\mathbf{v}$, we arrive at an estimator $\widehat{\mathsf{tr}}_{\mathrm{b}}^{m}(\mathbf{A}) := \frac{1}{m} \sum_{i=1}^{m} (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}_i)^\mathsf{T}\mathbf{A} (\mathbf{I}_{d_{\mathrm{a}}} \! \otimes \mathbf{v}_i).$
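This estimator can be sketched as follows, assuming $\mathbf{A}$ is accessed only through matrix-vector products (the function name `partial_trace_estimator` is our own; in the application above, `matvec` would apply $\exp(-\beta\mathbf{H})$ via a matrix-free method like Lanczos-FA):

```python
import numpy as np

def partial_trace_estimator(matvec, da, db, m, rng):
    """Monte Carlo estimate of tr_b(A) using m probes of the form I_da kron v."""
    est = np.zeros((da, da))
    for _ in range(m):
        v = rng.standard_normal(db)
        # Columns of I_da kron v: column k is e_k kron v, shape (da*db, da)
        V = np.kron(np.eye(da), v.reshape(db, 1))
        AV = np.column_stack([matvec(V[:, i]) for i in range(da)])
        est += V.T @ AV   # entry (i, k) is v^T A_{i,k} v
    return est / m
```

Each sample costs $d_{\mathrm{a}}$ matrix-vector products, and averaging over $m$ independent probes reduces the variance by a factor of $m$.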