
Automated benchmarking platform for quantum compilers


Arline Benchmarks

The Arline Benchmarks platform allows you to benchmark various algorithms for quantum circuit mapping/compression against each other on a list of predefined hardware types and target circuit classes.

Demo (report generation preview)

[Animated demo: benchmarking run]

[Animated demo: LaTeX report]

List of supported compilation frameworks

Installation

$ pip3 install arline-benchmarks

Alternatively, Arline Benchmarks can be installed from source (optionally in editable mode). Clone the Arline Benchmarks repository and cd to the source directory:

$ git clone https://github.com/ArlineQ/arline_benchmarks.git
$ cd arline_benchmarks

We recommend installing Arline Benchmarks in a virtual environment.

$ virtualenv venv
$ source venv/bin/activate

If virtualenv is not installed on your machine, run:

$ pip3 install virtualenv

Next, to install the Arline Benchmarks platform, execute:

$ pip3 install .

Alternatively, to install Arline Benchmarks in editable mode:

$ pip3 install -e .

TeXLive installation

Automated generation of LaTeX reports is an essential part of Arline Benchmarks. To enable its full functionality, you will need to install a TeXLive distribution.

Ubuntu or Debian Linux:

To install TeXLive, run in a terminal:

$ sudo apt install texlive-latex-extra

Windows:

On Windows, TeXLive can be installed by downloading the installer from the official website and following the installation instructions.

MacOS:

On macOS, install the MacTeX distribution from the official website.

Alternative solution for Linux/Windows/MacOS:

A TeX distribution can also be installed via MiKTeX by downloading the installer from https://miktex.org. The TeXworks frontend is not required and can be skipped.

Getting started

Benchmark example run

To run your first benchmarking experiment, execute the following commands:

$ cd arline_benchmarks/configs/compression/
$ bash run_and_plot.sh

The bash script run_and_plot.sh executes:

  1. scripts/arline-benchmarks-runner - runs the benchmarking experiment and saves the results to results/output/gate_chain_report.csv
  2. arline_benchmarks/reports/plot_benchmarks.py - generates metric plots based on results/output/gate_chain_report.csv and saves them to results/output/figure
  3. scripts/arline-latex-report-generator - generates the results/latex/benchmark_report.tex and results/latex/benchmark_report.pdf report files with the benchmarking results.

The configuration file configs/compression/config.jsonnet contains a full description of the benchmarking experiments.

Generate plots with benchmark metrics

To re-draw the plots, execute (from arline_benchmarks/configs/compression/):

$ bash plot.sh

Generate LaTeX report

To re-generate the LaTeX report based on the last benchmarking run, execute (from arline_benchmarks/configs/compression/):

$ arline-latex-report-generator -i results -o results

How to create a custom compilation pipeline?

The key element of Arline Benchmarks is the concept of a compilation pipeline. A pipeline is a sequence of compilation stages: [stage1, stage2, stage3, ...].

A typical pipeline consists of the following stages:

  • Generation of a target circuit
  • Mapping of logical qubits to physical qubits
  • Qubit routing for a particular hardware coupling topology
  • Circuit compression by applying circuit identities
  • Rebase to the final hardware gate set

You can easily create a custom compilation pipeline by stacking individual stages (which may correspond to different compiler providers). A pipeline can consist of an unlimited number of compilation stages combined in an arbitrary order. The only exceptions are the first stage, target_analysis, and the (optional) final gateset rebase stage.
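For illustration, a single pipeline entry could be organized along the lines of the sketch below. This is only a hedged sketch following the stage order listed above: the field and stage names (pipeline_id, plot_group, stages and the individual stage ids) are hypothetical placeholders rather than the actual schema used in configs/compression/config.jsonnet.

// Hypothetical sketch of one pipeline entry; field and stage names are
// illustrative only; see configs/compression/config.jsonnet for the real schema.
{
  pipeline_id: 'example_pipeline',
  plot_group: 'example_group',
  stages: [
    { id: 'target_analysis' },      // mandatory first stage: target circuit generation/analysis
    { id: 'qubit_mapping' },        // mapping of logical qubits to physical qubits
    { id: 'qubit_routing' },        // routing for the hardware coupling topology
    { id: 'circuit_compression' },  // compression via circuit identities
    { id: 'gateset_rebase' },       // optional final rebase to the hardware gate set
  ],
}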

Configuration file .jsonnet

Pipelines are specified in the main .jsonnet configuration file. An example configuration file is located at configs/compression/config.jsonnet.

  • Function local pipelines_set(target, hardware, plot_group) defines a list of compilation pipelines to be benchmarked, [pipeline1, pipeline2, ...].

Each pipeline_i = {...} is represented as a dictionary that contains a description of the pipeline and a list of compilation stages.

  • Target circuit generation is defined in the .jsonnet functions local random_chain_cliford_t_target(...) and local random_chain_cx_u3_target(...).

  • Benchmarking experiment specifications are defined at the end of the config file in a dictionary with the keys {pipelines: ..., plotter: ...} (see the sketch below).
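To tie these pieces together, the overall shape of a config .jsonnet file can be sketched roughly as follows. Only the pipelines_set(target, hardware, plot_group) signature and the top-level pipelines/plotter keys come from the description above; every other name (targets, hardware labels, plot groups) is a hypothetical placeholder.

// Rough, hypothetical skeleton of a config .jsonnet file; all names are
// placeholders except pipelines_set(...) and the pipelines/plotter keys.
local pipelines_set(target, hardware, plot_group) = [
  // each element is a pipeline dictionary with its description and stage list
  {
    pipeline_id: 'pipeline_1',
    plot_group: plot_group,
    target: target,
    hardware: hardware,
    stages: [],  // stage list as in the pipeline sketch above
  },
  // ... further pipelines to be compared against each other
];

{
  pipelines:
    pipelines_set('example_clifford_t_target', 'example_hardware', 'example_plot_group') +
    pipelines_set('example_cx_u3_target', 'example_hardware', 'example_plot_group'),
  plotter: {
    // plotting options consumed by plot_benchmarks.py (placeholder)
  },
}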

API documentation

API documentation is available online. To generate the HTML API documentation locally, run:

$ cd docs/
$ make html

Running tests

To run unit-tests and check installed dependencies:

$ tox

Folder structure

arline_benchmarks
│
├── arline_benchmarks            # platform classes
│   ├── config_parser            # parser of pipeline configuration
│   ├── engine                   # pipeline engine
│   ├── metrics                  # metrics for pipeline comparison
│   ├── pipeline                 # pipeline
│   ├── reports                  # LaTeX report generator
│   ├── strategies               # list of strategies for mapping/compression/rebase
│   └── targets                  # target generator
│
├── circuits                     # qasm circuits dataset
│
├── configs                      # configuration files
│   └── compression              # config .jsonnet file and .sh scripts
│
├── docs                         # documentation
│
├── scripts                      # run files
│
└── test                         # tests
    ├── qasm_files               # .qasm files for tests
    └── targets                  # tests for the targets module

