
A package for decoding quantum error correcting codes using minimum-weight perfect matching.


PyMatching 2


PyMatching is a fast Python/C++ library for decoding quantum error correcting (QEC) codes using the Minimum Weight Perfect Matching (MWPM) decoder. Given the syndrome measurements from a quantum error correction circuit, the MWPM decoder finds the most probable set of errors, under the assumption that error mechanisms are independent and graphlike (each error causes either one or two detection events). The MWPM decoder is the most popular decoder for surface codes, and can also be used to decode various other code families, including subsystem codes, honeycomb codes and 2D hyperbolic codes.

Version 2 includes a new implementation of the blossom algorithm, which is 100-1000x faster than previous versions of PyMatching. PyMatching can be configured using arbitrary weighted graphs, with or without a boundary, and can be combined with Craig Gidney's Stim library to simulate and decode error correction circuits in the presence of circuit-level noise. The sinter package combines Stim and PyMatching to perform fast, parallelised Monte Carlo sampling of quantum error correction circuits.

Documentation for PyMatching can be found at: pymatching.readthedocs.io

Our paper gives more background on the MWPM decoder and our implementation (sparse blossom) released in PyMatching v2.

To see how stim, sinter and pymatching can be used to estimate the threshold of an error correcting code with circuit-level noise, try out the stim getting started notebook.

The new >100x faster implementation for Version 2

Version 2 features a new implementation of the blossom algorithm, which I wrote with Craig Gidney. Our new implementation, which we refer to as the sparse blossom algorithm, can be seen as a generalisation of the blossom algorithm to handle the decoding problem relevant to QEC. We solve the problem of finding minimum-weight paths between detection events in a detector graph directly, which avoids the costly all-to-all Dijkstra searches that would otherwise be needed to construct the derived graph in which the original blossom algorithm finds a MWPM. The new version is also exact: unlike previous versions of PyMatching, no approximation is made. See our paper for more details.

Our new implementation is over 100x faster than previous versions of PyMatching, and is over 100,000x faster than NetworkX (benchmarked with surface code circuits). At 0.1% circuit-noise, PyMatching can decode both X and Z basis measurements of surface code circuits up to distance 17 in under 1 microsecond per round of syndrome extraction on a single core. Furthermore, the runtime is roughly linear in the number of nodes in the graph.

The plot below compares the performance of PyMatching v2 with the previous version (v0.7) as well as with NetworkX for decoding surface code circuits with circuit-level depolarising noise. All decoders were run on a single core of an M1 processor, processing both the X and Z basis measurements. The equations T=N^x in the legend (and plotted as dashed lines) are obtained from a fit to the same dataset for distance > 10, where N is the number of detectors (nodes) per round, and T is the decoding time per round. See the benchmarks folder in the repository for the data and stim circuits, as well as additional benchmarks.

(Figure: decoding time per round vs. number of detectors per round, for PyMatching v2, PyMatching v0.7 and NetworkX.)

Sparse blossom is conceptually similar to the approach described in this paper by Austin Fowler, although our approach differs in many of the details (as explained in our paper). There are even more similarities with the very nice independent work by Yue Wu, who recently released the fusion-blossom library. One of the differences with our approach is that fusion-blossom grows the exploratory regions of alternating trees in a similar way to how clusters are grown in Union-Find, whereas our approach instead progresses along a timeline, and uses a global priority queue to grow alternating trees. Yue also has a paper coming soon, so stay tuned for that as well.

Installation

The latest version of PyMatching can be downloaded and installed from PyPI with the command:

pip install pymatching --upgrade

Usage

PyMatching can load matching graphs from a check matrix, a stim.DetectorErrorModel, a networkx.Graph, a rustworkx.PyGraph or by adding edges individually with pymatching.Matching.add_edge and pymatching.Matching.add_boundary_edge.
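For example, a small matching graph can be built edge by edge. The following is a minimal illustrative sketch (the weights and fault_ids chosen here are arbitrary):

import numpy as np
import pymatching

matching = pymatching.Matching()
matching.add_boundary_edge(0, fault_ids={0}, weight=1.0)  # half-edge from node 0 to the boundary
matching.add_edge(0, 1, fault_ids={1}, weight=1.0)
matching.add_edge(1, 2, fault_ids={2}, weight=1.0)
matching.add_boundary_edge(2, fault_ids={3}, weight=1.0)
# Detection events at nodes 1 and 2 are most cheaply matched via the edge (1, 2)
print(matching.decode(np.array([0, 1, 1])))  # expected to print: [0 0 1 0]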

Decoding Stim circuits

PyMatching can be combined with Stim. Generally, the easiest and fastest way to do this is using sinter (use v1.10.0 or later), which uses PyMatching and Stim to run parallelised Monte Carlo simulations of quantum error correction circuits; a rough sketch of that workflow is shown below. However, in this section we will use Stim and PyMatching directly, to demonstrate how their Python APIs can be used. To install stim, run pip install stim --upgrade.
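For reference, a sinter workflow looks roughly like this sketch (the circuit and collection parameters are illustrative; see the sinter documentation for the full API):

import stim
import sinter

# One task per code distance, each decoded with PyMatching (illustrative parameters)
tasks = [
    sinter.Task(
        circuit=stim.Circuit.generated(
            "surface_code:rotated_memory_x",
            distance=d,
            rounds=d,
            after_clifford_depolarization=0.005,
        ),
        json_metadata={"d": d},
    )
    for d in [3, 5, 7]
]
# On some platforms, sinter.collect should be called under an `if __name__ == "__main__":` guard
stats = sinter.collect(
    num_workers=4,
    tasks=tasks,
    decoders=["pymatching"],
    max_shots=100_000,
    max_errors=100,
)
for stat in stats:
    print(stat.json_metadata, stat.shots, stat.errors)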

First, we generate a stim circuit. Here, we use a surface code circuit included with stim:

import numpy as np
import stim
import pymatching
circuit = stim.Circuit.generated("surface_code:rotated_memory_x", 
                                 distance=5, 
                                 rounds=5, 
                                 after_clifford_depolarization=0.005)

Next, we use stim to generate a stim.DetectorErrorModel (DEM), which is effectively a Tanner graph describing the circuit-level noise model. By setting decompose_errors=True, stim decomposes all error mechanisms into edge-like error mechanisms (which cause either one or two detection events). This ensures that our DEM is graphlike, and can be loaded by pymatching:

model = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(model)

Next, we will sample 1000 shots from the circuit. Each shot (a row of shots) contains the full syndrome (detector measurements), as well as the logical observable measurements, from simulating the noisy circuit:

sampler = circuit.compile_detector_sampler()
syndrome, actual_observables = sampler.sample(shots=1000, separate_observables=True)

Now we can decode! We compare PyMatching's predictions of the logical observables with the actual observables sampled with stim, in order to count the number of mistakes and estimate the logical error rate:

num_errors = 0
for i in range(syndrome.shape[0]):
    predicted_observables = matching.decode(syndrome[i, :])
    num_errors += not np.array_equal(actual_observables[i, :], predicted_observables)

print(num_errors)  # prints 8

As of PyMatching v2.1.0, you can use matching.decode_batch to decode a batch of shots instead. Since matching.decode_batch iterates over the shots in C++, it's faster than iterating over calls to matching.decode in Python. The following cell is therefore a faster equivalent to the cell above:

predicted_observables = matching.decode_batch(syndrome)
num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))

print(num_errors)  # prints 8

Loading from a parity check matrix

We can also load a pymatching.Matching object from a binary parity check matrix, another representation of a Tanner graph. Each row in the parity check matrix H corresponds to a parity check, and each column corresponds to an error mechanism. The element H[i,j] of H is 1 if parity check i is flipped by error mechanism j, and 0 otherwise. To be used by PyMatching, the error mechanisms in H must be graphlike. This means that each column must contain either one or two 1s (if a column has a single 1, it represents a half-edge connected to the boundary).

We can give each edge in the graph a weight, by providing PyMatching with a weights numpy array. Element weights[j] of the weights array sets the edge weight for the edge corresponding to column j of H. If the error mechanisms are treated as independent, then we typically want to set the weight of edge j to the log-likelihood ratio log((1-p_j)/p_j), where p_j is the error probability associated with edge j. With this setting, PyMatching will find the most probable set of error mechanisms, given the syndrome.
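For instance, given a hypothetical array p of independent per-edge error probabilities, the corresponding weights could be computed as:

import numpy as np
p = np.array([0.05, 0.1, 0.2, 0.1, 0.05])  # hypothetical error probability for each of 5 edges
weights = np.log((1 - p) / p)              # log-likelihood ratio weight for each edge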

With PyMatching configured using H and weights, decoding a binary syndrome vector syndrome (a numpy array of length H.shape[0]) corresponds to finding a set of errors defined in a binary predictions vector satisfying H@predictions % 2 == syndrome while minimising the total solution weight predictions@weights.

In quantum error correction, rather than predicting which exact set of error mechanisms occurred, we typically want to predict the outcome of logical observable measurements, which are the parities of error mechanisms. These can be represented by a binary matrix observables. Similar to the check matrix, observables[i,j] is 1 if logical observable i is flipped by error mechanism j. For example, suppose our syndrome syndrome was the result of a set of errors noise (a binary array of length H.shape[1]), such that syndrome = H@noise % 2. Our decoding is successful if observables@noise % 2 == observables@predictions % 2.

Putting this together, we can decode a distance 5 repetition code as follows:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching
H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
weights = np.array([4, 3, 2, 3, 4])   # Set arbitrary weights for illustration
matching = pymatching.Matching(H, weights=weights)
prediction = matching.decode(np.array([0, 1, 0, 1]))
print(prediction)  # prints: [0 0 1 1 0]
# Optionally, we can return the weight as well:
prediction, solution_weight = matching.decode(np.array([0, 1, 0, 1]), return_weight=True)
print(prediction)  # prints: [0 0 1 1 0]
print(solution_weight)  # prints: 5.0
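As a quick sanity check of the formulation above, and continuing from the snippet just shown, we can verify that the prediction reproduces the syndrome and that its weight matches the returned solution weight:

syndrome = np.array([0, 1, 0, 1])
assert np.array_equal(H @ prediction % 2, syndrome)       # H@predictions % 2 == syndrome
assert np.isclose(prediction @ weights, solution_weight)  # total weight is 2 + 3 = 5.0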

And in order to estimate the logical error rate for a physical error rate of 10%, we can sample as follows:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching
H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
weights = np.ones(H.shape[1]) * np.log((1-error_probability)/error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights)
num_shots = 1000
num_errors = 0
for i in range(num_shots):
    noise = (np.random.random(H.shape[1]) < error_probability).astype(np.uint8)
    syndrome = H@noise % 2
    prediction = matching.decode(syndrome)
    predicted_observables = observables@prediction % 2
    actual_observables = observables@noise % 2
    num_errors += not np.array_equal(predicted_observables, actual_observables)
print(num_errors)  # prints 4

Note that we can also ask PyMatching to predict the logical observables directly, by supplying them to the faults_matrix argument when constructing the pymatching.Matching object. This allows the decoder to make some additional optimisations that speed up the decoding procedure a bit. The following example uses this approach, and is equivalent to the example above:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
weights = np.ones(H.shape[1]) * np.log((1-error_probability)/error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights, faults_matrix=observables)
num_shots = 1000
num_errors = 0
for i in range(num_shots):
    noise = (np.random.random(H.shape[1]) < error_probability).astype(np.uint8)
    syndrome = H@noise % 2
    predicted_observables = matching.decode(syndrome)
    actual_observables = observables@noise % 2
    num_errors += not np.array_equal(predicted_observables, actual_observables)

print(num_errors)  # prints 6

We'll make one more optimisation, which is to use matching.decode_batch to decode the batch of shots, rather than iterating over calls to matching.decode in Python:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
num_shots = 1000
weights = np.ones(H.shape[1]) * np.log((1-error_probability)/error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights, faults_matrix=observables)
noise = (np.random.random((num_shots, H.shape[1])) < error_probability).astype(np.uint8)
shots = (noise @ H.T) % 2
actual_observables = (noise @ observables.T) % 2
predicted_observables = matching.decode_batch(shots)
num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))
print(num_errors)  # prints 6

Instead of using a check matrix, the Matching object can also be constructed using the Matching.add_edge and Matching.add_boundary_edge methods, or by loading from a NetworkX or rustworkx graph.
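For instance, a matching graph can be loaded from a networkx.Graph whose edges carry fault_ids and weight attributes (a minimal sketch; the attribute values here are arbitrary):

import networkx as nx
import pymatching

g = nx.Graph()
g.add_edge(0, 1, fault_ids={0}, weight=1.5)
g.add_edge(1, 2, fault_ids={1}, weight=1.5)
g.nodes[2]["is_boundary"] = True  # mark node 2 as a boundary node
matching = pymatching.Matching.from_networkx(g)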

For more details on how to use PyMatching, see the documentation.

Attribution

When using PyMatching please cite our paper on the sparse blossom algorithm (implemented in version 2):

@article{higgott2023sparse,
  title={Sparse Blossom: correcting a million errors per core second with minimum-weight matching},
  author={Higgott, Oscar and Gidney, Craig},
  journal={arXiv preprint arXiv:2303.15933},
  year={2023}
}

Note: the previous PyMatching paper describes the implementation in version 0.7 and earlier of PyMatching (not v2).

Acknowledgements

We are grateful to the Google Quantum AI team for supporting the development of PyMatching v2. Earlier versions of PyMatching were supported by Unitary Fund and EPSRC.
