
MQT Predictor - An MQT Tool for Determining Good Quantum Circuit Compilation Options

Project description


MQT Predictor: Automatic Prediction of Good Compilation Paths

MQT Predictor is a framework that suggests compilation options for an arbitrary quantum circuit according to the user's needs. To this end, we provide two models that predict good compilation options and return the accordingly compiled quantum circuit.

Supervised Machine Learning Model (referred to as "ML")

Here, the problem is treated as a statistical classification task. The resulting methodology not only provides end-users with a prediction of the best compilation options, but also offers insights into why certain decisions have been made—allowing them to learn from the predicted results.

For evaluation of our methodology, seven supervised machine learning classifiers have been used:

  • Random Forest
  • Gradient Boosting
  • Decision Tree
  • Nearest Neighbor
  • Multilayer Perceptron
  • Support Vector Machine
  • Naive Bayes

In our exemplary scenario, the Random Forest classifier achieved the best performance.
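Such a comparison of classifiers can be sketched with scikit-learn as follows. This is an illustrative sketch only, not the MQT Predictor training code: the synthetic dataset and all hyperparameters are hypothetical stand-ins.

```python
# Illustrative sketch (not the actual MQT Predictor pipeline): comparing the
# seven classifier families mentioned above on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# Hypothetical stand-in for the real feature vectors and labels.
X, y = make_classification(
    n_samples=300, n_features=10, n_classes=3, n_informative=5, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Nearest Neighbor": KNeighborsClassifier(),
    "Multilayer Perceptron": MLPClassifier(max_iter=500, random_state=0),
    "Support Vector Machine": SVC(random_state=0),
    "Naive Bayes": GaussianNB(),
}

# Fit each classifier and record its test accuracy.
scores = {
    name: clf.fit(X_train, y_train).score(X_test, y_test)
    for name, clf in classifiers.items()
}
```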

This ML model comprises three main functionalities:

  • The pre-trained Random Forest classifier to easily predict compilation options for an unseen quantum circuit in real-time and compile for the respective prediction,
  • all other trained algorithms, and
  • the possibility to adjust and customize the whole training data generation process, e.g., to add training data, compilation options, or adapt the evaluation function.

Reinforcement Learning Model (referred to as "RL")

In this work, we take advantage of decades of classical compiler optimization and propose a reinforcement learning framework for developing optimized quantum circuit compilation flows. Through distinct constraints and a unifying interface, the framework supports the combination of techniques from different compilers and optimization tools in a single compilation flow. The compilation process is modelled as a Markov Decision Process.
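The Markov Decision Process view can be sketched as follows: states capture properties of the partially compiled circuit, actions are compilation passes, and the reward reflects the optimization objective. The pass names, transition, and reward below are hypothetical toys, not the actual RL environment.

```python
# Minimal toy sketch of compilation as a Markov Decision Process
# (hypothetical pass names, transition, and reward).
from dataclasses import dataclass, field

@dataclass
class CompilationState:
    gate_count: int
    depth: int
    applied_passes: list = field(default_factory=list)

# Actions: compilation passes drawn from different toolkits.
ACTIONS = ["qiskit_O3", "tket_FullPeepholeOptimise", "route_to_device"]

def step(state: CompilationState, action: str):
    """Apply a compilation pass (action); return the next state and a reward."""
    # Toy transition: pretend each pass shrinks the circuit by 10%.
    next_state = CompilationState(
        gate_count=int(state.gate_count * 0.9),
        depth=int(state.depth * 0.9),
        applied_passes=state.applied_passes + [action],
    )
    # Toy reward: gate-count reduction (a stand-in for e.g. expected fidelity).
    reward = state.gate_count - next_state.gate_count
    return next_state, reward

state = CompilationState(gate_count=100, depth=40)
state, reward = step(state, "qiskit_O3")
```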

In this implementation, compilation passes from both IBM's Qiskit and Quantinuum's TKET are utilized for the RL training of the optimized compiler. We trained one RL model for each of the three optimization criteria of expected fidelity, minimal critical depth, and maximal parallelism.

Usage of MQT Predictor

First, the package must be installed:

(venv) $ pip install mqt.predictor

Now, a prediction can be made for any qiskit.QuantumCircuit object or QASM file:

from mqt.predictor import ml, rl

compiled_qc_ML, compilation_info_ML = ml.qcompile("qasm_file_path", model="ML")
compiled_qc_RL, compilation_info_RL = rl.qcompile(
    "qasm_file_path", model="RL", opt_objective="fidelity"
)

In the RL model, the opt_objective options are fidelity, critical_depth, and parallelism.

Examination of all seven trained classifiers of the ML model

To play around with all the examined models, please use the notebooks/ml/evaluation.ipynb Jupyter notebook.

Adjustment of training data generation process

The following parts can be adjusted:

Compilation Path and Compilation Pipelines

Definition of the compilation options to be considered for

  • chosen qubit technologies,
  • their respective devices,
  • the suitable compilers, and
  • their compilation settings.
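These options could, for instance, be organized as a nested mapping from qubit technology to devices, compilers, and settings. The technology, device, and setting names below are purely illustrative assumptions, not the package's actual configuration.

```python
# Hypothetical sketch of how the considered compilation options might be
# structured; all names and values are illustrative, not the package's own.
compilation_options = {
    "superconducting": {
        "devices": ["ibm_washington", "ibm_montreal"],
        "compilers": {
            "qiskit": {"optimization_level": [0, 1, 2, 3]},
            "tket": {"placement": ["line", "graph"]},
        },
    },
    "trapped_ion": {
        "devices": ["ionq_harmony"],
        "compilers": {
            "qiskit": {"optimization_level": [0, 1, 2, 3]},
        },
    },
}

# Enumerate every (device, compiler, setting) combination to be evaluated.
combos = [
    (dev, comp, opt)
    for tech in compilation_options.values()
    for dev in tech["devices"]
    for comp, settings in tech["compilers"].items()
    for opts in settings.values()
    for opt in opts
]
```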

Evaluation Metric

To predict which compilation options are the best ones for a given quantum circuit, a definition of goodness is needed. In principle, this evaluation metric can be designed to be arbitrarily complex, e.g., factoring in the actual costs of executing quantum circuits on the respective platform or availability limitations for certain devices. However, any suitable evaluation metric should, at least, consider characteristics of the compiled quantum circuit and the respective device. An exemplary metric could be the overall fidelity of a compiled quantum circuit for its targeted device.
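The exemplary fidelity metric could be sketched as the product of per-gate fidelities on the target device. This is one possible formulation under assumed, uniform error rates, not the package's exact formula.

```python
# Sketch of one possible evaluation metric: approximate the overall fidelity
# of a compiled circuit as the product of per-gate fidelities on the target
# device. Gate names and error rates below are hypothetical.

def expected_fidelity(gate_counts: dict, gate_error_rates: dict) -> float:
    """gate_counts: {gate_name: count}; gate_error_rates: {gate_name: error}."""
    fidelity = 1.0
    for gate, count in gate_counts.items():
        # Each application of a gate succeeds with probability (1 - error).
        fidelity *= (1.0 - gate_error_rates[gate]) ** count
    return fidelity

# Example: a compiled circuit with 50 single-qubit and 10 two-qubit gates.
fid = expected_fidelity(
    {"sx": 50, "cx": 10},
    {"sx": 1e-4, "cx": 1e-2},
)
```

A real metric would additionally use per-qubit calibration data (e.g., readout errors and decoherence times) rather than uniform error rates.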

Generation of Training Data

To train the model, sufficient training data must be provided as QASM files in the ./training_samples_folder. We provide the training data used for the pre-trained model.

After the adjustment is finished, the following methods need to be called to generate the training data:

from mqt.predictor import ml

predictor = ml.Predictor()
predictor.generate_compiled_circuits()
res = predictor.generate_trainingdata_from_qasm_files()
ml.helper.save_training_data(res)

Now, the Random Forest classifier can be trained:

predictor.train_random_forest_classifier()

Additionally, the raw training data can be extracted and used with any machine learning model:

(
    X_train,
    X_test,
    y_train,
    y_test,
    indices_train,
    indices_test,
    names_list,
    scores_list,
) = predictor.get_prepared_training_data(save_non_zero_indices=True)
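Once extracted, these arrays can feed any model of your choice. The sketch below uses random stand-in arrays in place of the values returned by get_prepared_training_data (actual shapes depend on your training data) and a scikit-learn nearest-neighbor classifier as an arbitrary example model.

```python
# Sketch: feeding extracted training data into an arbitrary scikit-learn model.
# The random arrays are stand-ins for X_train/X_test/y_train returned by
# predictor.get_prepared_training_data(); real shapes depend on your data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 12))    # 80 circuits, 12 features each
y_train = rng.integers(0, 4, size=80)  # 4 hypothetical compilation options
X_test = rng.normal(size=(20, 12))

# Any classifier with a fit/predict interface works here.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
predictions = model.predict(X_test)
```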

Repository Structure

.
├── notebooks/
│   ├── ml/
│   │   └── ...
│   └── rl/
│       └── ...
└── src/
    └── mqt/
        └── predictor/
            ├── calibration_files/
            ├── ml/
            │   └── training_data/
            │       ├── trained_model
            │       ├── training_circuits
            │       ├── training_circuits_compiled
            │       └── training_data_aggregated
            └── rl/
                └── training_data/
                    ├── trained_model
                    └── training_circuits

References

In case you are using MQT Predictor with the ML model in your work, we would be thankful if you referred to it by citing the following publication:

@misc{quetschlich2022mqtpredictor,
  title = {Predicting Good Quantum Circuit Compilation Options},
  shorttitle = {{MQT Predictor}},
  author = {Quetschlich, Nils and Burgholzer, Lukas and Wille, Robert},
  year = {2022},
  eprint = {2210.08027},
  eprinttype = {arxiv},
  publisher = {arXiv},
}

In case you are using the RL model in your work, we would be thankful if you referred to it by citing the following publication:

@misc{quetschlich2022compoptimizer,
  title = {Compiler Optimization for Quantum Computing Using Reinforcement Learning},
  author = {Quetschlich, Nils and Burgholzer, Lukas and Wille, Robert},
  year = {2022},
  eprint = {2212.04508},
  eprinttype = {arxiv},
  publisher = {arXiv},
}
