Automated and reproducible benchmarking framework for quantum computing workflows.

MQSS Benchmarking Framework

Overview

MQSS Benchmarking Framework is an automated and reproducible tool for unifying quantum computing benchmarks. It rests on four main pillars:

  • Hardware Benchmarks
  • Software Benchmarks
  • Simulator Benchmarks
  • Algorithmic Benchmarks

🛠️ Installation

This project leverages uv for dependency management and reproducibility, making setup and collaboration straightforward. To get started, ensure you have Python installed. Then, follow these steps to set up your environment using uv:

  1. Install uv (if you don't have it already):

    pip install uv
    
  2. Sync your environment with the project's dependencies:

    uv sync
    

🚀 Usage

MQSS Benchmarking Framework can be used in two ways: through a simple command-line interface or directly as a Python library.

💻 CLI Usage

After installation, the mqssbench command becomes available in your environment.

Listing benchmarks

mqssbench list

This prints every benchmark registered under the origin/source/name structure, including internal, external, and user-defined benchmarks.

Running benchmarks

mqssbench run --config path/to/config.yaml

The --config file may contain either:

  1. a single benchmark configuration (a dict), or
  2. a list of benchmark configurations (each a full config dict)

When a list is provided, mqssbench runs each benchmark sequentially using the same execution engine.
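The single-versus-list handling can be sketched in a few lines. This is an illustration of the documented behavior, not the framework's internal code, and `normalize_configs` is a hypothetical helper:

```python
def normalize_configs(loaded):
    """Accept either one config mapping or a list of them.

    Mirrors the documented CLI behavior: a single dict is treated as a
    one-element batch, and a list is run sequentially as-is.
    """
    if isinstance(loaded, dict):
        return [loaded]
    if isinstance(loaded, list):
        return loaded
    raise TypeError("config file must contain a mapping or a list of mappings")


single = {"benchmark": "core/native/quantum_volume", "shots": 1000}
batch = [
    {"benchmark": "core/native/randomized_benchmarking"},
    {"benchmark": "core/native/quantum_volume"},
]

print(len(normalize_configs(single)))  # 1
print(len(normalize_configs(batch)))   # 2
```

Either way, each resulting configuration is handed to the same execution engine in order.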

In development setups, you can run the CLI commands through uv:

uv run mqssbench list
uv run mqssbench run --config path/to/config.yaml

To explore all available commands and options:

mqssbench --help

Verbosity and logging

The CLI supports adjustable logging verbosity for debugging and inspection.

By default, only warnings and errors are shown.

Increase verbosity by repeating the -v / --verbose flag:

  • No -v → WARNING (default)
  • -v → INFO
  • -vv → DEBUG
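The flag-count-to-level mapping corresponds directly to the standard logging levels; `count_to_level` below is an illustrative helper, not part of the CLI:

```python
import logging


def count_to_level(verbose_count):
    # 0 → WARNING (default), 1 → INFO, 2 or more → DEBUG
    if verbose_count <= 0:
        return logging.WARNING
    if verbose_count == 1:
        return logging.INFO
    return logging.DEBUG


print(logging.getLevelName(count_to_level(0)))  # WARNING
print(logging.getLevelName(count_to_level(1)))  # INFO
print(logging.getLevelName(count_to_level(2)))  # DEBUG
```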

Examples:

mqssbench -v run --config path/to/config.yaml
mqssbench -vv run --config path/to/config.yaml

Benchmark results are printed to standard output, while logs are sent to standard error.
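This split keeps results pipeable even at high verbosity. In Python terms it corresponds to a logging handler bound to stderr while results are printed to stdout; the snippet below is a sketch of that pattern, not the framework's actual logging setup:

```python
import logging
import sys

# Logs go to stderr, so stdout carries only benchmark results.
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
log = logging.getLogger("mqssbench.sketch")
log.addHandler(handler)
log.setLevel(logging.INFO)

print("fidelity: 0.97")         # result → stdout (safe to pipe or redirect)
log.info("benchmark finished")  # log → stderr
```

From the shell, the two streams can then be captured separately, e.g. `mqssbench run --config config.yaml > results.txt 2> run.log`.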

Configuration file format

A typical benchmark configuration file follows the structure shown below:

# Benchmark configuration template

# Specify the benchmark to run. Two formats are supported:
# 1. String format: "origin/source/name"
# 2. Structured format:
#      origin: <ORIGIN>
#      source: <SOURCE>
#      name: <BENCHMARK_NAME>
benchmark: <ORIGIN>/<SOURCE>/<BENCHMARK_NAME>

# Benchmark specific parameters
benchmark_params:
  <PARAM_1>: <VALUE_1>
  <PARAM_2>: <VALUE_2>
  <PARAM_3>: <VALUE_3>

# Adapter configuration
adapter: <ADAPTER_NAME>
backend: <BACKEND>

# Credentials for the adapter or backend
credentials:
  <CREDENTIAL_KEY>: <CREDENTIAL_VALUE>

# Number of measurement shots
shots: <NUM_SHOTS>

# Output directory
output_dir: <PATH>

# Controls what is generated (analysis, visualizations, reports)
report:
  analysis:
    enabled: <true_or_false>
    visualization:
      enabled: <true_or_false>
      show: <true_or_false>

# Controls how results are persisted
storage:
  enabled: <true_or_false>
  type: <STORAGE_TYPE>  # "file" or "sqlite"
  file:
    format: <FILE_FORMAT>  # e.g. "json"
  sqlite:  # to be implemented in a future release
    db_path: <DATABASE_PATH>

# Profiling controls
profiling:
  enabled: <true_or_false>
  metrics:
    # List of profiling metrics. If omitted, all supported metrics are collected.
    # Available metrics depend on the selected adapter.
    # For example, for MQSS adapters, these are valid metrics:
    #   mqp_api, quantum_database, quantum_job_runner, isv_job_runner,
    #   quantum_daemon_job_runner, generator, scheduler, pass_runner,
    #   transpiler, submitter, pass_selection, knitter, job_execution
    - <METRIC_1>
    - <METRIC_2>

Example of a single-benchmark config:

benchmark: core/native/randomized_benchmarking

benchmark_params:
  num_qubits: 2
  lengths: [2, 4, 8, 16]
  num_sequences: 2

adapter: mqss_qiskit
backend: QExa20

credentials:
  mqss_token: ""

shots: 1000

output_dir: "./results"

report:
  analysis:
    enabled: true
    visualization:
      enabled: true
      show: false

storage:
  enabled: true
  type: "file" 
  file:
    format: "json"

profiling:
  enabled: true
  metrics:
    - transpiler
    - submitter

Example of a multi-benchmark config:

- benchmark: core/native/randomized_benchmarking
  benchmark_params:
    num_qubits: 2
    lengths: [2, 4, 8]
    num_sequences: 2
  adapter: mqss_qiskit
  backend: QExa20
  credentials:
    mqss_token: ""
  shots: 200
  output_dir: "./results"
  storage:
    enabled: true
    type: "file" 
    file:
      format: "json"

- benchmark: core/native/quantum_volume
  benchmark_params:
    num_qubits: 3
    depth: 3
    trials: 2
  adapter: mqss_qiskit
  backend: QExa20
  credentials:
    mqss_token: ""
  shots: 200
  output_dir: "./results"
  storage:
    enabled: true
    type: "file" 
    file:
      format: "json"

🧩 Python API Usage

MQSS Benchmarking Framework can also be used directly as a Python library when more control is needed.

from mqssbench.runtime import BenchmarkManager

list_of_benchmarks = BenchmarkManager.get_available_benchmarks()

config = {
  "benchmark": "core/native/randomized_benchmarking",

  "benchmark_params": {
    "num_qubits": 2,
    "lengths": [2, 4, 8, 16],
    "num_sequences": 2
  },

  "adapter": "mqss_qiskit",
  "backend": "QExa20",

  "credentials": {
    "mqss_token": ""
  },

  "shots": 1000,

  "output_dir": "./results",

  "report": {
    "analysis": {
      "enabled": True,
      "visualization": {
        "enabled": True
      }
    }
  },

  "storage": {
    "enabled": True,
    "type": "file",
    "file": {
      "format": "json"
    }
  },
  
  "profiling": {
    "enabled": True,
    "metrics": ["transpiler", "submitter"]
  }
}

benchmark_manager = BenchmarkManager(config)
benchmark_manager.dispatch()

The benchmark field must always follow the strict origin/source/name format. For example:

  • Core native benchmark: core/native/quantum_volume uses native (the built-in provider) as its source
  • Core provider benchmark: core/mqt_bench/vqe_su2 uses mqt_bench (a circuit provider) as its source
  • User-defined benchmark: user/my_source/my_benchmark_name uses my_source (an arbitrary user source) and can also leverage circuit providers
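Both accepted spec forms (the string and the structured mapping from the config template) reduce to the same triple. The helper below, `parse_benchmark_spec`, is a hypothetical illustration of that rule, not the framework's internal parser:

```python
def parse_benchmark_spec(spec):
    """Return (origin, source, name) from either accepted form."""
    if isinstance(spec, str):
        parts = spec.split("/")
        if len(parts) != 3:
            raise ValueError(f"expected 'origin/source/name', got {spec!r}")
        return tuple(parts)
    # Structured form: a mapping with origin/source/name keys.
    return (spec["origin"], spec["source"], spec["name"])


print(parse_benchmark_spec("core/native/quantum_volume"))
# ('core', 'native', 'quantum_volume')
print(parse_benchmark_spec({"origin": "user", "source": "my_source",
                            "name": "my_benchmark_name"}))
# ('user', 'my_source', 'my_benchmark_name')
```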

Both CLI and Python API share the same execution engine, registry system, and adapter logic.

Latest supported adapters

  • Qiskit Adapter (config value mqss_qiskit)
  • PennyLane Adapter (config value mqss_pennylane)

A complete list of configuration options will be published and kept up to date across upcoming releases.

🛠️ Upcoming Features

  • Integration of the Toolchain Project into the framework for Simulator Benchmarks
  • Expanding the benchmark set across all benchmark types

📝 Contributing

Feel free to open issues or submit pull requests to improve this project!

