A package for Tensor Networks Simple-Update simulations of quantum wave function representations


Tensor Networks Simple-Update (SU) Algorithm

This repo contains an implementation of the Simple-Update Tensor Network algorithm as described in the paper "A universal tensor network algorithm for any infinite lattice" by Saeed S. Jahromi and Roman Orus [1].

Simple Update

Simple Update (SU) is a Tensor Networks algorithm used for finding ground-state Tensor Network representations of gapped local Hamiltonians. It is the most computationally efficient, yet least accurate, Tensor Networks algorithm; nevertheless, it is able to capture many interesting non-trivial phenomena in nD quantum spin-lattice physics. The algorithm is based on an Imaginary Time Evolution (ITE) scheme, where the ground state of a given Hamiltonian H can be obtained following the relation

$$|\psi_{GS}\rangle = \lim_{\tau\to\infty}\frac{e^{-\tau H}|\psi_0\rangle}{\left\Vert e^{-\tau H}|\psi_0\rangle\right\Vert}.$$

In order to actually use the time-evolution method with Tensor Networks, we need to break the time-evolution operator down into local terms; we do that with the Suzuki-Trotter expansion. Specifically for Projected Entangled Pair States (PEPS) Tensor Networks, each local term operates on a single pair of tensors. For a local Hamiltonian $H=\sum_{\langle i,j\rangle} h_{ij}$, the (first-order) Suzuki-Trotter steps of breaking up the time-evolution operator are

$$e^{-\tau H}=\left(e^{-\delta\tau H}\right)^{\tau/\delta\tau},\qquad e^{-\delta\tau H}\approx\prod_{\langle i,j\rangle}e^{-\delta\tau h_{ij}}+O(\delta\tau^2),$$

where finally each local ITE gate is

$$u_{ij}=e^{-\delta\tau h_{ij}}.$$
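
As a minimal illustration (not the package's internal code), such a two-site ITE gate for a Heisenberg interaction term can be built with NumPy and SciPy; all names here (sx, sy, sz, h_ij, dt, u_ij) are assumptions of this sketch:

import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# Two-site Heisenberg term h_ij = S_i . S_j acting on the 4-dimensional pair space
h_ij = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)

# ITE gate u_ij = exp(-dt * h_ij) for a small imaginary time step dt,
# reshaped so each tensor leg can be contracted separately: (i, j, i', j')
dt = 0.1
u_ij = expm(-dt * h_ij).reshape(2, 2, 2, 2)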

When performing the ITE scheme, the Tensor Network's virtual bond dimension grows; therefore, after every few time steps we need to truncate it so that the number of parameters in the Tensor Network state stays bounded. This truncation step is implemented via a Singular Value Decomposition (SVD). A full step-by-step illustrated description of the algorithm is depicted below.
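
The truncation can be sketched generically in NumPy as follows; this is an illustration of bounding a bond dimension with an SVD, not the package's exact routine:

import numpy as np

def truncated_svd(theta, d_max):
    # Factor the two-site matrix and keep at most d_max singular values
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    d = min(d_max, len(s))
    u, s, vh = u[:, :d], s[:d], vh[:d, :]
    s /= np.linalg.norm(s)  # renormalize the kept weight vector
    return u, s, vh

# Example: truncate a random 8x8 bond matrix back to bond dimension 2
u, s, vh = truncated_svd(np.random.rand(8, 8), d_max=2)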

For a more comprehensive explanation, the interested reader is encouraged to check out [1], which is written in a very coherent and clear language.

The Code

The src folder contains the code of this project:

1. TensorNetwork.py: a Tensor Network class object that tracks the tensors, the weights, and their connectivity.
2. SimpleUpdate.py: a Tensor Network Simple-Update algorithm class, which takes a TensorNetwork object as input and performs a simple-update run on it using imaginary time evolution.
3. structure_matrix_generator.py: contains a dictionary of common iPEPS structure matrices, as well as functions for generating structure matrices of 2D square and rectangular lattices (still in progress).

Example 1: Spin-1/2 2D star lattice iPEPS Antiferromagnetic Heisenberg model simulation

Importing files

import numpy as np
from TensorNetwork import TensorNetwork
import SimpleUpdate as su
import structure_matrix_generator as stmg

Get the iPEPS star structure matrix

smat = stmg.infinite_structure_matrix_dict('star')
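
The structure matrix encodes the network connectivity: each row corresponds to a tensor, each column to an edge, and a non-zero entry tells which leg of that tensor lies on that edge (with 0 marking no connection). As a hedged illustration of the idea, a structure matrix for an infinite two-tensor chain might look like this (the exact leg-numbering convention is an assumption here):

import numpy as np

# Hypothetical structure matrix of an infinite 2-tensor chain with 2 edges:
# tensor 0 places its virtual legs 1 and 2 on edges 0 and 1, tensor 1 the reverse.
chain_smat = np.array([[1, 2],
                       [2, 1]])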

Initialize a random Tensor Network with a virtual bond dimension of size 2 and a physical spin dimension also of size 2

tensornet = TensorNetwork(structure_matrix=smat, virtual_size=2, spin_dim=2)

Then, set the spin-1/2 operators and the simple-update class parameters

# Pauli matrices
pauli_x = np.array([[0, 1],
                    [1, 0]])
pauli_y = np.array([[0, -1j],
                    [1j, 0]])
pauli_z = np.array([[1., 0],
                    [0, -1]])

# Simple-update parameters
dts = [0.1, 0.01, 0.001, 0.0001, 0.00001]       # decreasing ITE time steps
s = [pauli_x / 2., pauli_y / 2., pauli_z / 2.]  # spin-1/2 operators S^x, S^y, S^z
j_ij = [1., 1., 1., 1., 1., 1.]                 # interaction coupling for each edge
h_k = 0.                                        # external field strength
s_k = []                                        # single-site field operators (none here)
d_max = 2                                       # maximal virtual bond dimension

Initialize the simple update class

star_su = su.SimpleUpdate(tensor_network=tensornet, 
                          dts=dts, 
                          j_ij=j_ij, 
                          h_k=h_k, 
                          s_i=s, 
                          s_j=s, 
                          s_k=s_k, 
                          d_max=d_max, 
                          max_iterations=200, 
                          convergence_error=1e-6, 
                          log_energy=False,
                          print_process=False)

and run the algorithm

star_su.run()

It is also possible to compute single- and double-site expectation values like energy, magnetization, etc.

energy_per_site = star_su.energy_per_site()
z_magnetization_per_site = star_su.expectation_per_site(operator=pauli_z / 2)

or manually calculate single- and double-site reduced density matrices and expectation values using the following functions

tensor = 0  # tensor index
edge = 1    # edge index shared by a pair of tensors
tensor_pair_operator = np.reshape(np.kron(pauli_z / 2, pauli_z / 2), (2, 2, 2, 2))
star_su.tensor_rdm(tensor_index=tensor)        # single-site reduced density matrix
star_su.tensor_pair_rdm(common_edge=edge)      # two-site reduced density matrix
star_su.tensor_expectation(tensor_index=tensor, operator=pauli_z / 2)
star_su.tensor_pair_expectation(common_edge=edge, operator=tensor_pair_operator)
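
For example, a single-site expectation value can be recovered from the reduced density matrix via the standard relation ⟨O⟩ = Tr(ρO); the snippet below assumes tensor_rdm returns the single-site density matrix as a 2x2 array:

# Assuming a normalized 2x2 reduced density matrix rho,
# the expectation value of an operator O is Tr(rho @ O).
rho = star_su.tensor_rdm(tensor_index=tensor)
z_expectation = np.trace(rho @ (pauli_z / 2))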
  • A few of the functions in this code use the ncon module, which is a simple and efficient module for computing tensor contractions.

Example 2: The Trivial Simple-Update Algorithm

The trivial SU algorithm is equivalent to the SU algorithm without the ITE and truncation steps; it only consists of consecutive SVD steps on each edge (the same as contracting an ITE gate with a zero time step), and its fixed point corresponds to a canonical representation of the Tensor Network we started with. A canonical Tensor Network representation is directly connected to the Schmidt decomposition: for Tensor Networks with no loops (tree-like topology), each weight vector in the canonical representation corresponds to the Schmidt values of partitioning the network into two distinct networks along that edge. When the Tensor Network has loops, it is no longer possible to partition the network along a single edge; the weight vectors then become a general approximation of the tensors' environments in the network.

A very interesting property of the trivial simple-update algorithm is that it is identical to the Belief Propagation (BP) algorithm, a famous algorithm in the world of Probabilistic Graphical Models (PGM), where it is used for solving inference problems. For a full description of the duality between the two algorithms, see Refs. [3] and [4].

In order to implement the trivial-SU algorithm, we can initialize the simple update class with a zero time step as follows

su.SimpleUpdate(tensor_network=tensornet, 
                dts=[0], 
                j_ij=j_ij, 
                h_k=0, 
                s_i=s, 
                s_j=s, 
                s_k=s_k, 
                d_max=d_max, 
                max_iterations=1000, 
                convergence_error=1e-6, 
                log_energy=False,
                print_process=False)

then the algorithm will run for 1000 iterations or until the averaged L2 norm between consecutive weight vectors is smaller than 1e-6.
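
The convergence criterion can be sketched as follows; this is a generic NumPy illustration of the averaged L2 distance between consecutive weight vectors, not necessarily the package's exact implementation:

import numpy as np

def averaged_l2_distance(old_weights, new_weights):
    # Average the L2 norms of the differences between matching weight vectors
    distances = [np.linalg.norm(w_new - w_old)
                 for w_old, w_new in zip(old_weights, new_weights)]
    return np.mean(distances)

# The run would stop once this value drops below convergence_error (1e-6 above).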

There are more fully-written examples in the notebooks folder.

List of Notebooks

1. ipeps_energy_simulations.ipynb: computing ground-state energies of iPEPS Tensor Networks.
2. Quantum_Ising_Model_Phase_Transition.ipynb: simulating the phase transition of the Quantum Transverse-Field Ising model.
3. Triangular_2d_lattice_BLBQ_Spin_1_simulation.ipynb: Spin-1 BLBQ triangular 2D lattice phase transition.

Simulations

Spin-1/2 Antiferromagnetic Heisenberg (AFH) model

Below are some results of the ground-state energy per site simulated with the Simple Update algorithm over AFH Chain, Star, PEPS, and Cube Tensor Networks. The AFH Hamiltonian can be written as

$$H = J\sum_{\langle i,j\rangle}\vec{S}_i\cdot\vec{S}_j,\qquad J>0.$$

In the case of the Star Tensor Network lattice, the AFH Hamiltonian is composed of two parts, corresponding to different types of edges (see the paper in the link above). The Chain, Star, PEPS, and Cube infinite Tensor Networks are illustrated in the next figure.

Here are the ground-state energy per site vs. inverse virtual bond dimension simulations for the Tensor Network diagrams above

Quantum Ising Model on a 2D Spin-1/2 Lattice

Next is the quantum Ising model with a transverse field, simulated on a 2D spin-1/2 lattice. Its Hamiltonian is given by

$$H = -J\sum_{\langle i,j\rangle}\sigma_i^z\sigma_j^z - h\sum_i \sigma_i^x.$$

In the plots below one can see the simulated x and z magnetizations (per site) along with the simulated energy (per site). We see that the SU algorithm is able to extract the phase transition of the model around h=3.2.

Spin-1 Simulation of a Bilinear-Biquadratic Heisenberg model on a star 2D lattice

The BLBQ Hamiltonian is given by

$$H = \sum_{\langle i,j\rangle}\left[\cos\theta\,(\vec{S}_i\cdot\vec{S}_j) + \sin\theta\,(\vec{S}_i\cdot\vec{S}_j)^2\right].$$

Notice that for a 0 radian angle, this model coincides with the original AFH model. The energy, magnetization, and Q-norm as a function of the angle for different bond dimensions are plotted below. We can see that the simple-update algorithm has a hard time tracing all the phase transitions of this model, although for larger bond dimensions it seems to capture the general behavior of the model. For a comprehensive explanation and results for the triangular lattice, see Ref. [2].

References

  • [1] Saeed S. Jahromi, and Roman Orus - "A universal tensor network algorithm for any infinite lattice" (2019)
  • [2] Ido Niesen, Philippe Corboz - "A ground state study of the spin-1 bilinear-biquadratic Heisenberg model on the triangular lattice using tensor networks" (2018)
  • [3] Roy Alkabetz and Itai Arad - "Tensor networks contraction and the belief propagation algorithm" (2020)
  • [4] Roy Elkabetz - "Using the Belief Propagation algorithm for finding Tensor Networks approximations of many-body ground states" (2020)

Contact

Roy Elkabetz - elkabetzroy@gmail.com
