

QubitApproximant


A Python package for approximating functions with single-qubit quantum circuits.


Documentation and examples

Documentation created with MkDocs can be found at https://pablovegan.github.io/QubitApproximant/.

Installation

With pip:

pip install qubit-approximant

Quick usage

Importing a function

The submodule benchmarking.functions provides multiple test functions to choose from:

import numpy as np
from qubit_approximant.benchmarking.functions import gaussian

x = np.linspace(-2.5, 2.5, 1000)
fn_kwargs = {'mean': 0.0, 'std': 0.5, 'coef': 1}
fn = gaussian(x, **fn_kwargs)
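Judging from the keyword names, gaussian presumably evaluates $\text{coef} \cdot e^{-(x - \text{mean})^2 / (2\,\text{std}^2)}$ on the grid x; check the benchmarking.functions source for the exact convention.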

Creating a circuit

To create a circuit, just choose the ansatz (CircuitRxRyRz, CircuitRxRy or CircuitRy) and the encoding ('prob' or 'amp').

from qubit_approximant.core import CircuitRxRyRz

circuit = CircuitRxRyRz(x, encoding='prob')
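To build intuition for what such an ansatz computes, below is a minimal NumPy sketch of one RxRyRz data re-uploading layer. How the package actually feeds x and the trainable parameters into the rotation angles is an assumption here (re-uploading x through the first rotation is one plausible choice); consult the CircuitRxRyRz source for the real parametrization.

import numpy as np

def rx(a):
    # Single-qubit rotation about the X axis
    return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                     [-1j * np.sin(a / 2), np.cos(a / 2)]])

def ry(a):
    # Single-qubit rotation about the Y axis
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2), np.cos(a / 2)]])

def rz(a):
    # Single-qubit rotation about the Z axis
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def rxryrz_layer(x, w, theta1, theta2, theta3):
    # Hypothetical parametrization: the data point x is re-uploaded
    # through the Rx angle; w, theta1, theta2, theta3 are the four
    # trainable parameters of the layer.
    return rz(theta3) @ ry(theta2) @ rx(w * x + theta1)

# With the 'prob' encoding, the value the circuit assigns to x is
# presumably the probability of measuring |0> after acting on |0>;
# with 'amp' it would be the amplitude itself.
state = rxryrz_layer(0.5, 1.0, 0.1, 0.2, 0.3) @ np.array([1, 0])
prob = np.abs(state[0]) ** 2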

Cost function

To find the optimum parameters of the circuit, we need to choose a cost function. This is done with the Cost class, where we input the function to approximate, the circuit ansatz, and a metric to quantify the error in the approximation ('mse', 'rmse', 'mse_weighted', 'kl_divergence' or 'log_cosh').

from qubit_approximant.core import Cost

cost = Cost(fn, circuit, metric='mse')
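As a rough picture of what the 'mse' metric measures (a sketch, not necessarily the package's implementation), the cost compares the function encoded in the circuit with the target samples point by point:

import numpy as np

def mse(fn_values, circuit_values):
    # Mean squared error between the target samples and the function
    # encoded in the circuit, averaged over the grid points.
    return np.mean(np.abs(circuit_values - fn_values) ** 2)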

Optimizer

Choose an optimizer (BlackBoxOptimizer, GDOptimizer or AdamOptimizer):

from qubit_approximant.core import BlackBoxOptimizer

optimizer = BlackBoxOptimizer(method="L-BFGS-B")

and find the optimum parameters for the chosen circuit:

layers = 6
init_params = np.random.default_rng().standard_normal(4 * layers)  # 4 parameters per layer
opt_params = optimizer(cost, cost.grad, init_params)
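The method string matches the method names of scipy.optimize.minimize, so the call above presumably reduces to something like the following sketch (assuming a SciPy backend, which the README does not state):

from scipy.optimize import minimize

# L-BFGS-B is a quasi-Newton method that exploits the analytic
# gradient cost.grad supplied alongside the cost function.
result = minimize(cost, init_params, jac=cost.grad, method="L-BFGS-B")
opt_params = result.x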

Multilayer optimizer

We may also optimize an ansatz for multiple layers using the LayerwiseOptimizer, which uses the optimum parameters of a circuit with $L$ layers as initial parameters for the optimization of a circuit with $L+1$ layers. A list with the optimum parameters for each number of layers is returned.

from qubit_approximant.core import LayerwiseOptimizer

min_layer = 3
init_params = np.random.default_rng().standard_normal(4 * min_layer)  # match min_layer
layerwise_opt = LayerwiseOptimizer(
    optimizer,
    min_layer=min_layer,
    max_layer=7,
    new_layer_coef=0.3,
    new_layer_position='random'
    )
params_list = layerwise_opt(cost, cost.grad, init_params)

Note: a MultilayerOptimizer that doesn't reuse the optimized parameters from previous layers is also available; a usage sketch follows.
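Its call signature is assumed below to mirror LayerwiseOptimizer (an assumption, not taken from the docs):

from qubit_approximant.core import MultilayerOptimizer

# Assumed interface: each layer count is optimized from fresh initial
# parameters instead of reusing the previous optimum.
multilayer_opt = MultilayerOptimizer(optimizer, min_layer=3, max_layer=7)
params_list = multilayer_opt(cost, cost.grad, init_params)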

Error metrics

To benchmark the optimization we can use some common metrics, such as the $L^1$ norm, $L^2$ norm, $L^\infty$ norm or the infidelity $1-F$, to compare the function encoded in the circuit with the target function. Following our example, fn is a Gaussian:

from qubit_approximant.benchmarking import metric_results

l1_list, l2_list, inf_list, infidelity_list = metric_results(
    fn=gaussian,
    fn_kwargs={'mean': 0.0, 'std': 0.5, 'coef': 1},
    circuit=circuit,
    params_list=params_list
    )
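For a target $f$ and circuit function $\tilde f$ sampled on the grid, these metrics are, up to the discretization and normalization conventions the package chooses, $L^1 = \int |f - \tilde f|\,dx$, $L^2 = (\int |f - \tilde f|^2\,dx)^{1/2}$ and $L^\infty = \max_x |f(x) - \tilde f(x)|$, while the infidelity is $1-F$ with $F = |\langle f, \tilde f \rangle|^2 / (\lVert f \rVert^2 \lVert \tilde f \rVert^2)$.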

Wrapping up

Test the library yourself!

import numpy as np

from qubit_approximant.benchmarking.functions import gaussian
from qubit_approximant.core import CircuitRxRyRz, Cost, BlackBoxOptimizer, LayerwiseOptimizer
from qubit_approximant.benchmarking import metric_results

x = np.linspace(-2.5, 2.5, 1000)
fn_kwargs = {'mean': 0.0, 'std': 0.5, 'coef': 1}
fn = gaussian(x, **fn_kwargs)

circuit = CircuitRxRyRz(x, encoding='prob')
cost = Cost(fn, circuit, metric='mse')
optimizer = BlackBoxOptimizer(method="L-BFGS-B")

min_layer = 3
init_params = np.random.default_rng().standard_normal(4 * min_layer)
layerwise_opt = LayerwiseOptimizer(
    optimizer,
    min_layer=min_layer,
    max_layer=7,
    new_layer_coef=0.3,
    new_layer_position='random'
    )
params_list = layerwise_opt(cost, cost.grad, init_params)

l1_list, l2_list, inf_list, infidelity_list = metric_results(
    fn=gaussian,
    fn_kwargs={'mean': 0.0, 'std': 0.5, 'coef': 1},
    circuit=circuit,
    params_list=params_list
    )

Bonus: benchmarking multiple initial parameters

The initial parameters for the optimizer are generated at random from a seed of our choice. We can benchmark the optimizer against multiple seeds (since this is a time-consuming task, it is parallelized using MPI).

from qubit_approximant.benchmarking import benchmark_seeds  # assumed location, alongside metric_results

benchmark_seeds(
    num_seeds=4,
    fn=gaussian,
    fn_kwargs=fn_kwargs,
    circuit=circuit,
    cost=cost,
    optimizer=layerwise_opt,
    filename="results",
)
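Assuming the parallelization follows the usual mpi4py pattern (the README does not specify), the script containing this call would be launched with something like:

mpiexec -n 4 python benchmark_script.py  # benchmark_script.py is a hypothetical file name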

References

This library is based on the article Data re-uploading for a universal quantum classifier by Adrián Pérez-Salinas et al. (Quantum 4, 226, 2020).

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

This software is under the GNU General Public License v3.0.
