
Spectrum adaptive spectral densities

Project description

spectral_density: a python package for computing spectral density approximations

spectral_density is a package for producing approximations to densities of states and spectral sums. The aim is to provide a simple tool for approximating spectral densities. The design paradigm for spectral_density is that computation and approximation should be decoupled. In particular, approximations are obtained in two steps:

  1. computation: repeatedly run the Lanczos algorithm on the matrix of interest with random starting vectors
  2. approximation: use the output of the previous step to obtain spectral density approximations

This package focuses only on the second step. Users are therefore free to use any Lanczos implementation for the first step (reorthogonalization is not necessary!), which allows the use of implementations optimized for individual computational environments. A minimal sketch of step 1 is given below.
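
The following sketch illustrates step 1 using plain NumPy; the function name `lanczos` and the random test matrix are illustrative assumptions and are not part of this package's API. Any Lanczos implementation that produces the recurrence coefficients would serve the same purpose.

```python
import numpy as np

def lanczos(H, v, k):
    """Minimal Lanczos sketch (no reorthogonalization): after k steps, return
    the diagonal (alpha) and off-diagonal (beta) recurrence coefficients."""
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q_prev = np.zeros_like(v, dtype=float)
    q = v / np.linalg.norm(v)
    for j in range(k):
        w = H @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] if j > 0 else 0.0) * q_prev
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return alpha, beta

# step 1: run Lanczos on the matrix of interest with a random starting vector
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2                       # a random symmetric test matrix
v = rng.standard_normal(200)
alpha, beta = lanczos(H, v, k=30)       # feed these coefficients into step 2
```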

Compared to past implementations of the kernel polynomial method (KPM), which merge the two steps, this approach allows increased flexibility. We do not require any hyperparameters (such as the largest and smallest eigenvalues) to be specified ahead of time, and we can produce KPM approximations corresponding to many different reference densities.

Citing spectral_density

Please consider referencing:

  • This package: spectral_density
  • A spectrum adaptive kernel polynomial method, Tyler Chen, J. Chem. Phys. 159, 114101 (2023) doi: 10.1063/5.0166678

Mathematical Background

Spectral Densities

Given a symmetric matrix $\mathbf{H}$ with eigenvalue decomposition $\mathbf{H} = \sum\limits_{n=1}^{d} \lambda_n \mathbf{u}_n \mathbf{u}_n^\textsf{T}$, the spectral density, also called the density of states (DOS), is the function

$$\rho(x) = \sum_{n=1}^d \frac{1}{d} \delta(x-\lambda_n),$$

where $\delta(x)$ is a unit-mass Dirac delta distribution centered at zero. Note that exact knowledge of the DOS $\rho(x)$ is equivalent to knowing all of the eigenvalues of $\mathbf{H}$, which are often prohibitively expensive to compute.
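
For a small matrix where a full eigendecomposition is affordable, the DOS can be pictured directly from the eigenvalues. The snippet below is a purely illustrative NumPy example (not part of this package) that bins the eigenvalues of a random symmetric matrix with weights $1/d$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
H = (A + A.T) / 2                       # a random symmetric test matrix
eigvals = np.linalg.eigvalsh(H)

# rho(x) places mass 1/d on each eigenvalue, so a histogram with weights 1/d
# gives a binned picture of the DOS
d = len(eigvals)
hist, edges = np.histogram(eigvals, bins=50, weights=np.full(d, 1 / d))
```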

Given a unit vector $\mathbf{v}$, the local density of states (LDOS) is defined as

$$ \hat{\rho}(x) = \sum_{n=1}^d (\mathbf{v}^{\mathsf{T}}\mathbf{u}_n)^2 \delta(x-\lambda_n). $$

The LDOS is also prohibitively expensive to compute exactly, but we can compute its moments:

$$ \int x^n \hat{\rho}(x) \,\mathrm{d}x = \mathbf{v}^{\mathsf{T}} \mathbf{H}^n \mathbf{v}. $$

In particular, using $k$ matrix-vector products, we can compute the moments through degree $2k$.
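
The sketch below (the function name `ldos_moments` is hypothetical and not part of this package) shows one way to do this with NumPy: store the Krylov vectors $\mathbf{H}^j \mathbf{v}$ for $j \le k$ and pair them, so that $k$ matrix-vector products yield all moments through degree $2k$.

```python
import numpy as np

def ldos_moments(H, v, k):
    """Moments m_n = v^T H^n v for n = 0, ..., 2k using only k matrix-vector
    products: store w_j = H^j v and use m_{i+j} = w_i^T w_j (H is symmetric).
    For the LDOS, v is assumed to be a unit vector."""
    W = [v]
    for _ in range(k):
        W.append(H @ W[-1])             # k matrix-vector products in total
    m = np.zeros(2 * k + 1)
    for n in range(2 * k + 1):
        i = min(n, k)
        m[n] = W[i] @ W[n - i]
    return m
```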

Stochastic Estimation

If $\mathbf{v}$ is a random vector drawn from a distribution so that $\mathbb{E}[\mathbf{v}\mathbf{v}^{\mathsf{T}}] = d^{-1} \mathbf{I}$, then it is not hard to verify that

$$ \mathbb{E}[\hat{\rho}(x)] = \rho(x). $$

The variance can be reduced by averaging $m$ independent copies of $\hat{\rho}(x)$.
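
As a concrete illustration (using only NumPy, not this package's API), the sketch below estimates the first moment of the DOS by averaging $\mathbf{v}^{\mathsf{T}} \mathbf{H} \mathbf{v}$ over unit vectors drawn uniformly from the sphere, which satisfy $\mathbb{E}[\mathbf{v}\mathbf{v}^{\mathsf{T}}] = d^{-1}\mathbf{I}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 500
A = rng.standard_normal((d, d))
H = (A + A.T) / 2

# unit vectors drawn uniformly from the sphere satisfy E[v v^T] = I/d,
# so v^T H v is an unbiased estimate of the first moment of the DOS
m = 100
estimate = 0.0
for _ in range(m):
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    estimate += (v @ H @ v) / m

exact = np.trace(H) / d                 # first moment of the DOS, for comparison
```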

SLQ approximation

Stochastic Lanczos Quadrature (SLQ) is a widely used method for approximating spectral densities. Given a unit vector $\mathbf{v}$, the algorithm works by computing a $k$-point Gauss quadrature approximation $\rho_{\text{SLQ}}(x)$ to $\hat{\rho}(x)$. These approximations are then averaged over independent copies of $\mathbf{v}$.

The $k$-point SLQ approximation is a discrete distribution with the form

$$ \rho_{\text{SLQ}}(x) = \sum_{n=1}^{k} \omega_n^2 \delta(x-\theta_n). $$

The weights ${\omega_n^2}$ and the support ${\theta_n}$ can be obtained from the recurrence coefficients generated by the Lanczos algorithm. The support points ${\theta_n}$ are commonly known as the Ritz values, and are sometimes viewed as approximations to the eigenvalues of $\mathbf{H}$.
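
A minimal sketch of this computation, assuming the Lanczos coefficients `alpha` (diagonal) and `beta` (off-diagonal) are already available and that SciPy is installed; the function name `slq_nodes_weights` is hypothetical and not part of this package's API.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def slq_nodes_weights(alpha, beta):
    """Nodes and weights of the k-point Gauss quadrature rule for the LDOS:
    the nodes (Ritz values) are the eigenvalues of the tridiagonal matrix built
    from the Lanczos coefficients, and the weights are the squared first
    components of the corresponding eigenvectors."""
    theta, S = eigh_tridiagonal(alpha, beta)
    omega_sq = S[0, :] ** 2
    return theta, omega_sq
```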

Stability

In finite precision arithmetic, the Lanczos algorithm can behave very differently than in exact arithmetic. One noticeable effect is the appearance of multiple "ghost" Ritz values approximating a single eigenvalue. If we think of the Ritz values ${\theta_n}$ as approximations to the eigenvalues ${\lambda_n}$, this is problematic. However, it is better to view the approximation $\rho_{\text{SLQ}}(x)$ as a whole, which still converges to $\hat{\rho}(x)$ even in finite precision arithmetic.

KPM approximation

The Kernel Polynomial Method (KPM) is another widely used method for approximating spectral densities.

The aim of KPM is to produce a "density" $\rho_{\text{KPM}}(x)$ approximating $\hat{\rho}(x)$. To this end, let $\sigma(x)$ be a fixed reference density and expand $\hat{\rho}(x)/\sigma(x)$ as a formal polynomial series

$$ \frac{\hat{\rho}(x)}{\sigma(x)} = \sum_{n=0}^{\infty} \mu_n p_n(x), $$

where ${p_n}$ are the orthonormal polynomials with respect to $\sigma(x)$. Using the orthonormality of the ${p_n}$ with respect to $\sigma(x)$, we can compute $\mu_n$ by

$$ \mu_n = \int \sigma(x) \frac{\hat{\rho}(x)}{\sigma(x)} p_n(x) \,\mathrm{d}x = \int \hat{\rho}(x) p_n(x) \,\mathrm{d}x = \mathbf{v}^{\mathsf{T}} p_n(\mathbf{H}) \mathbf{v}. $$

Thus, we see the ${\mu_n}$ are the so-called (modified) moments of $\hat{\rho}(x)$ with respect to the orthogonal polynomials of $\sigma(x)$ and can be computed without explicit knowledge of $\hat{\rho}(x)$ using the expression above.

Computing the moments through degree $s$ naturally gives an approximation $\rho_{\text{KPM}}(x)$ to $\hat{\rho}(x)$ defined by

$$ \rho_{\text{KPM}}(x) = \sigma(x) \sum_{n=0}^{s} \mu_n p_n(x). $$

As the degree of the approximation $s \to \infty$,

$$ \rho_{\text{KPM}}(x) \to \sigma(x) \sum_{n=0}^{\infty} \mu_n p_n(x) = \hat{\rho}(x). $$

Typically, one chooses the reference density $\sigma(x) \propto 1/\sqrt{1-x^2}$, the orthogonality measure for the Chebyshev polynomials. In this case, the modified moments $\mathbf{v}^{\mathsf{T}} \mathbf{T}_n(\mathbf{H}) \mathbf{v}$ can be computed by applying the Chebyshev polynomials of $\mathbf{H}$ to $\mathbf{v}$ using their three-term recurrence. For this approach to work, it is critical that $\mathbf{H}$ be scaled so that its spectrum is contained in $[-1,1]$.
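
The sketch below illustrates this classical Chebyshev-based KPM, assuming $\mathbf{H}$ has already been scaled so that its spectrum lies in $[-1,1]$; the function names `chebyshev_moments` and `kpm_density` are hypothetical and the code uses only NumPy, not this package's API. The reconstruction uses the standard Chebyshev expansion $\rho_{\text{KPM}}(x) = \big(\mu_0 + 2\sum_{n\ge 1}\mu_n T_n(x)\big)/(\pi\sqrt{1-x^2})$, with no damping kernel applied.

```python
import numpy as np

def chebyshev_moments(H, v, s):
    """Modified Chebyshev moments mu_n = v^T T_n(H) v for n = 0, ..., s via the
    three-term recurrence T_{n+1}(H)v = 2 H T_n(H)v - T_{n-1}(H)v.
    Assumes H is symmetric with spectrum in [-1, 1] and v is a unit vector."""
    mu = np.zeros(s + 1)
    w_prev, w = v, H @ v
    mu[0], mu[1] = v @ w_prev, v @ w
    for n in range(2, s + 1):
        w_prev, w = w, 2 * (H @ w) - w_prev
        mu[n] = v @ w
    return mu

def kpm_density(mu, x):
    """Evaluate the truncated KPM density (no damping kernel) at points x
    strictly inside (-1, 1), using T_n(x) = cos(n arccos x)."""
    x = np.asarray(x, dtype=float)
    series = np.full_like(x, mu[0])
    for n in range(1, len(mu)):
        series += 2 * mu[n] * np.cos(n * np.arccos(x))
    return series / (np.pi * np.sqrt(1 - x ** 2))
```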

A spectrum adaptive KPM[^1]

The main computational cost of the KPM is computing the modified moments ${\mu_n}$. In typical implementations of the KPM, one must first specify an interval $[a,b]$ containing the spectrum, and then compute the moments. However, it is actually possible to use the information generated by Lanczos to efficiently compute these moments.[^1] This observation provides increased flexibility; the reference density, including its support, can be chosen after the expensive part of the computation has been run. In particular, it opens the possibility of using more exotic reference densities.
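
To see why this is possible, recall that a $k$-node Gauss quadrature rule integrates polynomials of degree up to $2k-1$ exactly against $\hat{\rho}(x)$, so the modified moments $\mu_n$ for $n \le 2k-1$ can be recovered from the SLQ nodes and weights alone. The sketch below illustrates this idea for Chebyshev moments on a user-chosen interval $[a,b]$; it is a simplified illustration using NumPy and SciPy, not this package's implementation, and the function name is hypothetical.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def chebyshev_moments_from_lanczos(alpha, beta, s, a, b):
    """Recover modified Chebyshev moments of the LDOS from Lanczos output alone.
    The k-node Gauss rule (theta, omega^2) integrates polynomials of degree
    <= 2k - 1 exactly against the LDOS, so mu_n for n <= 2k - 1 can be computed
    without further matrix-vector products. The interval [a, b] (and hence the
    reference density) is chosen *after* Lanczos has been run."""
    theta, S = eigh_tridiagonal(alpha, beta)
    omega_sq = S[0, :] ** 2
    t = np.clip((2 * theta - (a + b)) / (b - a), -1, 1)   # map [a, b] -> [-1, 1]
    return np.array([np.sum(omega_sq * np.cos(n * np.arccos(t)))
                     for n in range(s + 1)])
```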

[^1]: A spectrum adaptive kernel polynomial method, Tyler Chen, J. Chem. Phys. 159, 114101 (2023) doi: 10.1063/5.0166678

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

spectral_density-0.1.0.tar.gz (9.7 kB)

Uploaded Source

Built Distribution

spectral_density-0.1.0-py3-none-any.whl (9.5 kB)

Uploaded Python 3
