
A package for Tensor-Network Simple-Update simulations of quantum wave-function representations


Tensor Networks Simple-Update (SU) Algorithm

This Python package contains an implementation of the Simple-Update Tensor Network algorithm, as described in the paper "A universal tensor network algorithm for any infinite lattice" by Saeed S. Jahromi and Roman Orus [1].

Installation

pip3 install tnsu

Documentation

For details about the tnsu package, see the GitHub repository.

Simple Update

Simple Update (SU) is a Tensor Network (TN) algorithm used for finding ground-state tensor network representations of gapped local Hamiltonians. It is the most computationally efficient, though least accurate, TN algorithm for computing ground states; nevertheless, it captures many interesting non-trivial phenomena in n-D quantum spin-lattice physics. The algorithm is based on an Imaginary Time Evolution (ITE) scheme, in which the ground state of a given Hamiltonian H is obtained as the infinite-time limit

|ψ_GS⟩ = lim_{τ→∞} e^{-τH}|ψ_0⟩ / ‖e^{-τH}|ψ_0⟩‖,

for any initial state |ψ_0⟩ with non-zero overlap with the ground state.
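As a toy illustration of the ITE relation (a NumPy-only sketch, independent of the tnsu API), we can evolve a random state of a two-spin Heisenberg Hamiltonian in imaginary time and watch its energy converge to the exact ground-state value of -3/4:

```python
import numpy as np

# Two-site spin-1/2 Heisenberg Hamiltonian H = S1 . S2 (toy example)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
H = sum(np.kron(s, s) for s in (sx, sy, sz)).real

# Build the exact ITE gate e^{-dtau * H} via eigendecomposition
evals, evecs = np.linalg.eigh(H)
dtau = 0.1
gate = evecs @ np.diag(np.exp(-dtau * evals)) @ evecs.T

# Start from a random normalized state and apply the gate repeatedly
rng = np.random.default_rng(0)
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)
for _ in range(500):
    psi = gate @ psi
    psi /= np.linalg.norm(psi)  # keep the state normalized

energy = psi @ H @ psi
print(round(energy, 6))  # -0.75, the exact singlet ground-state energy
```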

To actually use the time-evolution method on a TN, we need to break the time-evolution operator into local terms. We do that with the help of the Suzuki-Trotter expansion; for Projected Entangled Pair States (PEPS) TNs specifically, each local term corresponds to a single pair of neighboring tensors. Writing the Hamiltonian as a sum of two-body terms, H = Σ_{⟨i,j⟩} h_ij, the first-order Suzuki-Trotter approximation of the ITE operator reads

e^{-τH} = (e^{-δτ H})^{τ/δτ} ≈ (∏_{⟨i,j⟩} e^{-δτ h_ij})^{τ/δτ},

with an error of order O(δτ²) per time step, so that finally each ITE step reduces to applying the local two-body gates u_ij = e^{-δτ h_ij}.
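To see why the Suzuki-Trotter step works, here is a minimal numerical sketch (plain NumPy, independent of tnsu) comparing the exact evolution operator with its first-order Trotter factorization for two non-commuting 2x2 terms; the error shrinks quadratically with the time step:

```python
import numpy as np

def expm_sym(a):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.exp(w)) @ v.T

# Two non-commuting local terms (Pauli z and Pauli x)
h1 = np.array([[1.0, 0.0], [0.0, -1.0]])
h2 = np.array([[0.0, 1.0], [1.0, 0.0]])

errors = []
for dt in (0.1, 0.01):
    exact = expm_sym(-dt * (h1 + h2))
    trotter = expm_sym(-dt * h1) @ expm_sym(-dt * h2)
    errors.append(np.linalg.norm(exact - trotter))
    print(f"dt={dt}: Trotter error {errors[-1]:.1e}")

# shrinking dt by 10x reduces the error by ~100x (second order in dt)
```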

When performing the ITE scheme, the virtual bond dimension of the TN grows. Therefore, after every few ITE iterations we need to truncate the bond dimensions so that the number of parameters in the tensor network state stays bounded. The truncation step is implemented via a Singular Value Decomposition (SVD). A full, step-by-step illustrated description of the Simple Update algorithm (which is based on the ITE scheme) is depicted below.
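The truncation step can be sketched in isolation (a generic NumPy illustration, not tnsu's internal routine): the recombined pair of tensors, reshaped into a matrix, is split by an SVD and only the d_max largest singular values are kept:

```python
import numpy as np

def svd_truncate(theta, d_max):
    """Split matrix `theta`, keeping at most d_max singular values."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(d_max, len(s))
    s_trunc = s[:k]
    # renormalize the kept weights to unit norm, as SU conventionally does
    s_trunc = s_trunc / np.linalg.norm(s_trunc)
    return u[:, :k], s_trunc, vh[:k, :]

rng = np.random.default_rng(1)
theta = rng.normal(size=(8, 8))  # stand-in for a contracted tensor pair
u, s, vh = svd_truncate(theta, d_max=4)
print(u.shape, s.shape, vh.shape)  # (8, 4) (4,) (4, 8)
```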

For a more comprehensive explanation of the algorithm, the interested reader should check out [1].

The Code

The src.tnsu folder contains the source code for this project:

| # | File | Subject |
|---|------|---------|
| 1 | `tensor_network.py` | A `TensorNetwork` class that tracks the tensors, weights, and their connectivity. |
| 2 | `simple_update.py` | A Tensor Network Simple-Update algorithm class, which takes a `TensorNetwork` object as input and performs a Simple-Update run on it using Imaginary Time Evolution. |
| 3 | `structure_matrix_constructor.py` | A dictionary of common iPEPS structure matrices, plus functionality for constructing structure matrices of 2D square and rectangular lattices (still in progress). |
| 4 | `examples.py` | A few scripts for loading a tensor network state from memory and a full Antiferromagnetic Heisenberg model PEPS experiment. |
| 5 | `ncon.py` | A module for tensor contraction in Python, copied from the ncon GitHub repository. |
| 6 | `utils.py` | A general utility module. |

Examples

Example 1: Spin-1/2 2D star-lattice iPEPS Antiferromagnetic Heisenberg model simulation

Importing files

import numpy as np
from tnsu.tensor_network import TensorNetwork
import tnsu.simple_update as su
import tnsu.structure_matrix_constructor as stmc

First, let us get the iPEPS star structure matrix:

smat = stmc.infinite_structure_matrix_dict('star')
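For intuition, here is a hypothetical structure matrix for the simplest case, an infinite two-tensor chain: rows index tensors, columns index edges, and a non-zero entry names the tensor leg sitting on that edge (leg 0 is assumed to be the physical index, so virtual legs are numbered from 1; the exact conventions may differ from tnsu's):

```python
import numpy as np

# Hypothetical structure matrix: 2 tensors, 2 edges, periodic chain.
# Tensor 0 puts its leg 1 on edge 0 and leg 2 on edge 1; tensor 1 the reverse.
chain_smat = np.array([[1, 2],
                       [2, 1]])

# Each edge (column) connects exactly two tensor legs
print((chain_smat > 0).sum(axis=0))  # [2 2]
```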

Next, we initialize a random Tensor Network with a virtual bond dimension of 2 and a physical spin dimension of 2:

tensornet = TensorNetwork(structure_matrix=smat, virtual_size=2, spin_dim=2)

Then, we set up the spin-1/2 operators and the Simple-Update class parameters:

# pauli matrices
pauli_x = np.array([[0, 1],
                    [1, 0]])
pauli_y = np.array([[0, -1j],
                    [1j, 0]])
pauli_z = np.array([[1., 0],
                    [0, -1]])
# ITE time constants
dts = [0.1, 0.01, 0.001, 0.0001, 0.00001]

# Local spin operators
s = [pauli_x / 2., pauli_y / 2., pauli_z / 2.]

# The Hamiltonian's 2-body interaction constants 
j_ij = [1., 1., 1., 1., 1., 1.]

# The Hamiltonian's 1-body field constant
h_k = 0.

# The field-spin operators (which are empty in that example)
s_k = []

# The maximal virtual bond dimension (used for SU truncation)
d_max = 2

Now, we initialize the Simple-Update class:

star_su = su.SimpleUpdate(tensor_network=tensornet, 
                          dts=dts, 
                          j_ij=j_ij, 
                          h_k=h_k, 
                          s_i=s, 
                          s_j=s, 
                          s_k=s_k, 
                          d_max=d_max, 
                          max_iterations=200, 
                          convergence_error=1e-6, 
                          log_energy=False,
                          print_process=False)

and run the algorithm:

star_su.run()

It is also possible to compute single- and double-site expectation values, such as energy and magnetization, with the following:

energy_per_site = star_su.energy_per_site()
z_magnetization_per_site = star_su.expectation_per_site(operator=pauli_z / 2)

or compute single- and double-site reduced density matrices and expectation values manually, as in the next few lines of code:

tensor = 0
edge = 1
tensor_pair_operator = np.reshape(np.kron(pauli_z / 2, pauli_z / 2), (2, 2, 2, 2))
single_site_rdm = star_su.tensor_rdm(tensor_index=tensor)
pair_rdm = star_su.tensor_pair_rdm(common_edge=edge)
single_site_expectation = star_su.tensor_expectation(tensor_index=tensor, operator=pauli_z / 2)
pair_expectation = star_su.tensor_pair_expectation(common_edge=edge, operator=tensor_pair_operator)
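For reference, an expectation value follows from a reduced density matrix (RDM) as ⟨O⟩ = Tr(ρO). A toy single-site example with a made-up RDM (not output from tnsu):

```python
import numpy as np

# A made-up valid single-site density matrix (Hermitian, trace 1, PSD)
rho = np.array([[0.8, 0.1],
                [0.1, 0.2]])
sz = np.array([[0.5, 0.0],
               [0.0, -0.5]])  # spin-1/2 z operator

# <sz> = Tr(rho @ sz)
expectation = np.trace(rho @ sz)
print(round(expectation, 2))  # 0.3
```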

Example 2: The Trivial Simple-Update Algorithm

The trivial SU algorithm is equivalent to the SU algorithm without the ITE and truncation steps; it consists only of consecutive SVD steps over each TN edge (the same as contracting the ITE gate with a zero time step). The fixed point of the trivial-SU algorithm corresponds to a canonical representation of the tensor network we started with.

A canonical representation of a tensor network is strongly related to the Schmidt decomposition over all of the network's edges: for a tensor network with no loops (tree-like topology), each weight vector in the canonical representation corresponds to the Schmidt values of partitioning the network into two distinct networks along that edge. When the tensor network has loops, it is no longer possible to partition the network into two parts along a single edge; the weight vectors then no longer equal the Schmidt values, but instead become a general approximation of the tensors' environments in the network.

A very interesting property of the trivial-SU algorithm is that it is identical to the Belief Propagation (BP) algorithm, a well-known iterative message-passing algorithm used for approximate inference in Probabilistic Graphical Models (PGM). For a detailed description of the duality between trivial Simple Update and Belief Propagation, see Refs. [3] and [4].

To implement the trivial-SU algorithm, we can initialize the Simple-Update class with a zero time step, as follows:

trivial_su = su.SimpleUpdate(tensor_network=tensornet,
                             dts=[0],
                             j_ij=j_ij,
                             h_k=0,
                             s_i=s,
                             s_j=s,
                             s_k=s_k,
                             d_max=d_max,
                             max_iterations=1000,
                             convergence_error=1e-6,
                             log_energy=False,
                             print_process=False)
trivial_su.run()

The algorithm will then run for 1000 iterations, or until the maximal L2 distance between consecutive weight vectors falls below 1e-6.

More fully-worked examples can be found in the notebooks folder.

List of Notebooks

The notebooks below are not part of the package; they can be found in the tnsu GitHub repository under /notebooks. You can run them locally with Jupyter Notebook or in Google Colab (preferable if you don't want to burn your laptop's motherboard :) ).

| # | File | Subject |
|---|------|---------|
| 1 | `ipeps_energy_simulations.ipynb` | Computing ground-state energies of iPEPS Tensor Networks |
| 2 | `Quantum_Ising_Model_Phase_Transition.ipynb` | Simulating the phase transition of the Quantum Transverse-Field Ising model |
| 3 | `Triangular_2d_lattice_BLBQ_Spin_1_simulation.ipynb` | Spin-1 BLBQ triangular 2D lattice phase transition |

Simulations

Spin-1/2 Antiferromagnetic Heisenberg (AFH) model

Below are some results for the ground-state energy per site simulated with the Simple Update algorithm over AFH Chain, Star, PEPS, and Cube tensor networks. The AFH Hamiltonian is given by

H = J Σ_{⟨i,j⟩} S_i · S_j,   J > 0,

where the sum runs over nearest-neighbor pairs of spins.

In the case of the Star tensor network lattice, the AFH Hamiltonian consists of two parts that correspond to different types of edges (see [1]). The Chain, Star, PEPS, and Cube infinite tensor networks are illustrated in the next figure.

Here are the ground-state energy per-site vs. inverse virtual bond dimension simulations for the tensor network diagrams above.

Quantum Ising Model on a 2D Spin-1/2 Lattice

Next, we simulated the quantum Ising model with a transverse magnetic field on a 2D lattice. Its Hamiltonian is given by

H = -J Σ_{⟨i,j⟩} σ^z_i σ^z_j - h Σ_i σ^x_i.
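For concreteness, the two-body term entering an ITE gate for this model can be assembled with NumPy as follows (a generic sketch; splitting the field evenly between the two sites is one common convention, not necessarily how tnsu handles h_k internally):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
iden = np.eye(2)

J = 1.0
h_field = 3.2  # near the model's phase-transition point

# Two-site term: -J sz⊗sz, with half the transverse field on each site
h_ij = (-J * np.kron(sz, sz)
        - (h_field / 2) * (np.kron(sx, iden) + np.kron(iden, sx)))
print(h_ij.shape)  # (4, 4)
```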

In the plots below, one can see the simulated x and z magnetization (per site) along with the simulated energy (per site). We see that the SU algorithm is able to extract the phase transition of the model around h=3.2.

Spin-1 Simulation of a Bilinear-Biquadratic Heisenberg model on a star 2D lattice

Finally, we simulated the Bilinear-Biquadratic (BLBQ) Hamiltonian, which is given by

H = cos(θ) Σ_{⟨i,j⟩} S_i · S_j + sin(θ) Σ_{⟨i,j⟩} (S_i · S_j)².

Notice that for θ = 0 this model coincides with the original AFH model. The energy, magnetization, and Q-norm as a function of the angle θ are plotted below for different bond dimensions. The simple-update algorithm has a hard time tracing all of this model's phase transitions; however, for larger bond dimensions it seems to capture the general behavior of the model's phase transitions. For a comprehensive explanation and results for the triangular lattice, see Ref. [2].

References

  • [1] Saeed S. Jahromi, and Roman Orus - "A universal tensor network algorithm for any infinite lattice" (2019)
  • [2] Ido Niesen, Philippe Corboz - "A ground state study of the spin-1 bilinear-biquadratic Heisenberg model on the triangular lattice using tensor networks" (2018)
  • [3] Roy Alkabetz and Itai Arad - "Tensor networks contraction and the belief propagation algorithm" (2020)
  • [4] Roy Elkabetz - "Using the Belief Propagation algorithm for finding Tensor Networks approximations of many-body ground states" (2020)

Contact

Roy Elkabetz - elkabetzroy@gmail.com

Citation

To cite this repository in academic works or for any other purpose, please use the following BibTeX entry:

@misc{tnsu,
    author = "Elkabetz, Roy",
    title = "Python Package for Universal Tensor-Networks Simple-Update Simulations",
    howpublished = "\url{https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update}",
    url = "https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/tnsu__A_python_package_for_Tensor_Networks_Simple_Update_simulations.pdf",
    year = "2022",
    type = "Python package"
}
