
PASQAL Cloud

SDK to access Pasqal Cloud Services.

Installation

To install the latest release of pasqal-cloud (formerly pasqal-sdk), have Python 3.8.0 or higher installed, then use pip:

pip install pasqal-cloud

Basic usage

Authentication

There are several ways to authenticate using the SDK:

from pasqal_cloud import SDK

project_id = "your_project_id"  # Replace this value with your project_id on the PASQAL platform. It can be found on the user-portal, in the 'project' section.
username = "your_username"  # Replace this value with your username or email on the PASQAL platform.
password = "your_password"  # Replace this value with your password on the PASQAL platform.

Method 1: Username + Password

If you know your credentials, you can pass them to the SDK instance on creation:

sdk = SDK(username=username, password=password, project_id=project_id)

Method 2: Username only

If you prefer not to store your password in your code, you can omit it when creating the SDK instance. A prompt will then ask for your password:

sdk = SDK(username=username, project_id=project_id)

Method 3: Use a custom token provider

You can define a custom class to provide the token. For example, if you have a token, you can use it to authenticate with our APIs:

class CustomTokenProvider(TokenProvider):
    def get_token(self):
        return "your-token"  # Replace this value with your token


sdk = SDK(token_provider=CustomTokenProvider(), project_id=project_id)

Alternatively, create a custom TokenProvider that inherits from ExpiringTokenProvider. You should define a custom '_query_token' method which fetches your token. See Auth0TokenProvider implementation for an example.
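As a sketch of that pattern (everything here is illustrative: the local base class stands in for the SDK's TokenProvider, and the expiry logic and token values are assumptions), a provider that re-fetches its token once it goes stale could look like:

```python
import time


class StaticTokenProvider:
    """Minimal stand-in for the SDK's TokenProvider interface (illustrative).

    A real implementation would inherit from pasqal_cloud's TokenProvider;
    here we only sketch the expected get_token() contract.
    """

    def __init__(self, token: str):
        self._token = token

    def get_token(self) -> str:
        return self._token


class MyExpiringTokenProvider(StaticTokenProvider):
    """Sketch of an expiring provider: re-fetch the token once it is stale."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self._ttl = ttl_seconds
        self._expires_at = 0.0
        self._token = ""

    def _query_token(self) -> str:
        # In a real implementation, this would call your identity provider.
        return "fresh-token"

    def get_token(self) -> str:
        # Re-query the token only when the cached one has expired
        if time.monotonic() >= self._expires_at:
            self._token = self._query_token()
            self._expires_at = time.monotonic() + self._ttl
        return self._token
```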

Create a batch of jobs

The package's main component is a Python class called SDK, which can be used to create a Batch.

A Batch is a group of jobs that will run on the same QPU. Sequences can be specified at the batch level, at the job level, or both (jobs with their own sequence override the batch-level one). Once the QPU starts running a batch, only the jobs from that batch will be executed until they all end up in a termination status (DONE, ERROR, CANCELED). The batch sequence can be generated using Pulser. See their documentation for more information on how to install the library and create your own sequence.

The sequence should be a Pulser Sequence object. Once it is created, you can serialize it like so:

serialized_sequence = sequence.to_abstract_repr()

Batch-level sequence

You can specify a sequence at the batch level that will be used as default for all jobs. If the batch-level sequence defines variables, each job using it must set their values:

job1 = {"runs": 20, "variables": {"omega_max": 6}}
job2 = {"runs": 50, "variables": {"omega_max": 10.5}}

Batches are either "open" or "closed". An "open" batch accepts new jobs after its creation, unlike a "closed" batch. An open batch may be used to schedule a variational algorithm, where the parameters of the next job are derived from the results of the previous jobs, without losing access to the QPU.

You can create a batch of jobs using the create_batch method of the SDK. By default, this will create a closed batch, so all jobs should be passed as arguments right away. You may set the wait argument to True to wait for all the jobs to end up in a termination status before proceeding to the next statement.

# Create a closed batch with 2 jobs and wait for its termination
batch = sdk.create_batch(serialized_sequence, [job1, job2], device_type=DeviceTypeName.FRESNEL, wait=True)

To create an open batch, set the open argument to True; you can then add jobs to your batch. Don't forget to mark your batch as closed when you are done adding new jobs to it.

# Create an open batch with 1 job
batch = sdk.create_batch(serialized_sequence, [job1], open=True, device_type=DeviceTypeName.FRESNEL)
# Add some jobs to it and wait for the jobs to be terminated
job3 = {"runs": 50, "variables": {"omega_max": 10.5}}
batch.add_jobs([job2, job3], wait=True)
# When you have sent all the jobs to your batch, don't forget to mark it as closed
# Otherwise your batch will be timed out by the scheduler
batch.close()

You should choose the device on which to run the batch with the device_type parameter. To use a basic single-threaded QPU emulator that can simulate up to 10 qubits, specify the "EMU_FREE" emulator:

from pasqal_cloud.device import DeviceTypeName

batch = sdk.create_batch(
    serialized_sequence, [job1, job2], device_type=DeviceTypeName.EMU_FREE
)

Once the API has returned the results, you can access them with the following:

for job in batch.ordered_jobs:
    print(f"job-id: {job.id}, status: {job.status}, result: {job.result}")

Job-level sequence

Jobs can define their own serialized_sequence, which overrides the batch-level sequence. This is useful when you want to run different quantum programs in the same batch.

# Create jobs with their own sequences
job1 = {"runs": 20, "serialized_sequence": serialized_sequence1}
job2 = {"runs": 50, "serialized_sequence": serialized_sequence2}

# Create a batch without a batch-level sequence
batch = sdk.create_batch(None, [job1, job2], device_type=DeviceTypeName.FRESNEL, wait=True)

Important: When using job-level sequences, make sure to use non-parametrized sequences. Either use a batch-level sequence with variables, or use job-level sequences without variables. Additionally, when creating a batch without a sequence, all jobs must have their own sequences specified.
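The resolution rule can be sketched as a small helper (hypothetical, not part of the SDK): a job's own serialized_sequence wins, the batch-level sequence is the fallback, and a job with neither is an error:

```python
def resolve_sequences(batch_sequence, jobs):
    """Return the sequence each job will run (hypothetical helper).

    Mirrors the rule above: a job-level 'serialized_sequence' overrides the
    batch-level one, and if the batch has no sequence, every job must bring
    its own.
    """
    resolved = []
    for i, job in enumerate(jobs):
        sequence = job.get("serialized_sequence", batch_sequence)
        if sequence is None:
            raise ValueError(f"job {i} has no sequence and the batch defines none")
        resolved.append(sequence)
    return resolved
```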

Get a list of jobs

It is possible to get all jobs or a selection of jobs with the get_jobs method. This method uses a pagination system that you have to handle. By default, a page returns 100 jobs, but this limit can be changed.

Here are a few examples of how to use it:

from pasqal_cloud import JobFilters, JobStatus, PaginationParams

# Get the first 100 jobs, no filters applied
sdk.get_jobs()

# Get the first 40 jobs, no filters applied
sdk.get_jobs(pagination_params=PaginationParams(limit=40))

# Get the first 100 jobs from a given batch
sdk.get_jobs(filters=JobFilters(batch_id="batch_id"))

# Get the first 100 jobs in error from a specific project
sdk.get_jobs(filters=JobFilters(status=JobStatus.ERROR, project_id="project_id"))

# Get two jobs using two ids
sdk.get_jobs(filters=JobFilters(id=["job_id_1", "job_id_2"]))

# Get the first 20 cancelled jobs created in a given period from a specific list of users
sdk.get_jobs(pagination_params=PaginationParams(limit=20),
             filters=JobFilters(status=JobStatus.CANCELED, start_date=datetime(...), end_date=datetime(...),
                                user_id=["user_id_1", "user_id_2"]))

# Get the total number of jobs matching the filters
sdk.get_jobs(pagination_params=PaginationParams(offset=0)).total

# Get the first 300 jobs, no filters applied
jobs = []
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=0)).results)
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=100)).results)
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=200)).results)
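The manual offset-stepping above generalizes to a loop. Here is a sketch using a stub in place of sdk.get_jobs (the paginated response is assumed to expose .results and .total as in the examples above):

```python
from dataclasses import dataclass


@dataclass
class Page:
    """Stand-in for a paginated response with .results and .total."""
    results: list
    total: int


def fetch_all(get_page, page_size=100):
    """Collect every result by advancing the offset until 'total' is reached.

    get_page(offset, limit) stands in for a call like
    sdk.get_jobs(pagination_params=PaginationParams(offset=..., limit=...)).
    """
    items, offset = [], 0
    while True:
        page = get_page(offset, page_size)
        items.extend(page.results)
        offset += page_size
        # Stop once we have walked past the reported total (or got an empty page)
        if offset >= page.total or not page.results:
            break
    return items
```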

Retry a batch of jobs

It is possible to retry a selection of jobs from a CLOSED batch with the rebatch method.

from pasqal_cloud import JobStatus, RebatchFilters

# Retry all jobs from a given batch
sdk.rebatch(batch_id)

# Retry the first job of a batch
sdk.rebatch(batch_id, RebatchFilters(id=batch.ordered_jobs[0].id))

# Retry all jobs in error
sdk.rebatch(batch_id, RebatchFilters(status=JobStatus.ERROR))

# Retry cancelled jobs created in a given period
sdk.rebatch(batch_id, RebatchFilters(status=JobStatus.CANCELED, start_date=datetime(...), end_date=datetime(...)))

# Retry jobs that have a run number between 5 and 10
sdk.rebatch(batch_id, RebatchFilters(min_runs=5, max_runs=10))

Retry a job in an open batch

It is possible to retry a single job within the same open batch as the original job using batch.retry. The batch must be open for this method to work.

batch = sdk.create_batch(..., open=True)

batch.retry(batch.ordered_jobs[0])

# As when adding a job, you can choose to wait for results.
batch.retry(batch.ordered_jobs[0], wait=True)

Create a workload

A workload is a unit of work to be executed on Pasqal Cloud Services infrastructure.

To submit a new workload, select a type, target one of the available backends and provide a configuration object to execute it.

You can create a workload through the SDK with the following command:

workload = sdk.create_workload(workload_type="<WORKLOAD_TYPE>", backend="<BACKEND>", config={"config_param_1": "value"})

You can cancel the workload by doing:

sdk.cancel_workload(workload.id)

Or refresh the workload status/results with the following:

workload = sdk.get_workload(workload.id)

Once the workload has been processed, you can fetch the result like this:

print(f"workload-id: {workload.id}, status: {workload.status}, result: {workload.result}")
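A simple polling loop can wait for that point. This is a sketch with a stub in place of sdk.get_workload; the terminal status names are assumed to mirror the batch statuses mentioned above:

```python
import time


def wait_for_workload(get_workload, workload_id, poll_seconds=2.0, timeout=600.0):
    """Poll a workload until it reaches a terminal status (sketch).

    get_workload stands in for sdk.get_workload; the terminal status names
    below are assumptions for illustration.
    """
    terminal = {"DONE", "ERROR", "CANCELED"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        workload = get_workload(workload_id)
        if workload.status in terminal:
            return workload
        time.sleep(poll_seconds)
    raise TimeoutError(f"workload {workload_id} did not finish within {timeout}s")
```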

Advanced usage

Extra emulator configuration

Some emulators, such as EMU_TN and EMU_FREE, accept further configuration to control the emulation. This is because these emulators are more advanced numerical simulations of the quantum system.

By default, validation rules are more permissive for jobs targeting an emulator than for those targeting the Fresnel QPU when submitting jobs to the cloud platform.

You may, however, wish to validate that a job running on an emulator is also compatible with Fresnel. To that end, set the strict_validation key in the configuration to True (it defaults to False).

from pasqal_cloud.device import EmulatorType, EmuFreeConfig, EmuTNConfig

configuration = EmuTNConfig(strict_validation=True)
batch = sdk.create_batch(serialized_sequence, [job1, job2], emulator=EmulatorType.EMU_TN, configuration=configuration)

# or

configuration = EmuFreeConfig(strict_validation=True)
batch = sdk.create_batch(serialized_sequence, [job1, job2], emulator=EmulatorType.EMU_FREE, configuration=configuration)

For EMU_TN, you may set the integrator timestep in nanoseconds, the desired numerical accuracy of the tensor network compression, and the maximal bond dimension of the tensor network state.

from pasqal_cloud.device import EmulatorType, EmuTNConfig

configuration = EmuTNConfig(dt=10.0, precision="normal", max_bond_dim=100)
batch = sdk.create_batch(serialized_sequence, [job1, job2], emulator=EmulatorType.EMU_TN, configuration=configuration)

For EMU_FREE, you may add some default SPAM noise. Beware that this makes your job take much longer.

from pasqal_cloud.device import EmulatorType, EmuFreeConfig

configuration = EmuFreeConfig(with_noise=True)
batch = sdk.create_batch(serialized_sequence, [job1, job2], emulator=EmulatorType.EMU_FREE, configuration=configuration)

Replace the corresponding section in the code examples above with this to add further configuration.

List of supported device specifications

The SDK provides a method to retrieve the device specs currently defined on PASQAL's cloud platform. They define the physical constraints of our QPUs, and these constraints enforce rules on the Pulser sequences that can be run on the QPUs (e.g., the maximum number of atoms, the available pulse channels, etc.).

sdk.get_device_specs_dict()

The method returns a dict object mapping each device type to its serialized device specs. These specs can be used to instantiate a Device instance in the Pulser library.

Use the pasqal-cloud SDK behind a proxy

The pasqal-cloud SDK uses the requests library to make HTTP calls to Pasqal's cloud APIs.

Please refer to the requests documentation to use the SDK behind a proxy. Be sure to set the HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY environment variables to route traffic through your proxy. If your proxy uses a self-signed certificate, the certificate authority must be included in your CA bundle. One way to do it is to set the environment variable REQUESTS_CA_BUNDLE to the path of the file containing the certificate authority.
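For example, before starting your Python process you might export the following (hostnames and paths are placeholders):

```shell
# Route SDK traffic through a corporate proxy (placeholder values)
export HTTPS_PROXY="http://proxy.example.com:8080"
# If the proxy re-signs TLS, point requests at the CA bundle that trusts it
export REQUESTS_CA_BUNDLE="/path/to/ca-bundle.pem"
```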

Disable request body compression

By default, the SDK compresses request bodies using gzip for routes that can send heavy payloads (such as batch creation and adding jobs). This reduces bandwidth usage and improves performance.

To disable this compression for debugging or network compatibility reasons, set the PASQAL_SKIP_GZIP_REQUEST_BODY environment variable:

export PASQAL_SKIP_GZIP_REQUEST_BODY=1
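To illustrate what the compression does under the hood, here is a standard-library sketch of gzip-compressing a JSON request body (not the SDK's actual code; the payload is made up):

```python
import gzip
import json

# A made-up batch-creation payload, serialized as the request body
payload = {"jobs": [{"runs": 50, "variables": {"omega_max": 10.5}}]}
body = json.dumps(payload).encode("utf-8")

# Compress the body as the SDK would for heavy-payload routes;
# the server decompresses it based on the 'Content-Encoding: gzip' header
compressed = gzip.compress(body)

# Round-trip check: decompressing recovers the original body
assert gzip.decompress(compressed) == body
```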

Target different API endpoints

This is intended for the package developers or for users who have been given access to non-prod environments of the PASQAL cloud platform.

To target a specific environment (prod, preprod or dev), instantiate the SDK class using PASQAL_ENDPOINTS['env'] for the parameter endpoints and AUTH0_CONFIG['env'] for auth0 with env being the environment you want to target.

Example:

from pasqal_cloud import AUTH0_CONFIG, SDK, PASQAL_ENDPOINTS

sdk = SDK(..., endpoints=PASQAL_ENDPOINTS['preprod'], auth0=AUTH0_CONFIG['preprod'])

By default, the targeted environment for endpoints and auth0 is prod.
