Chinilla vdf verification (wraps C++)

Project description

Chinilla VDF


Building a wheel

Compiling chinillavdf requires CMake, Boost, and GMP/MPIR.

python3 -m venv venv
source venv/bin/activate

pip install wheel setuptools_scm pybind11
pip wheel .

The primary build process for this repository uses GitHub Actions to build binary wheels for macOS, Linux (x64 and aarch64), and Windows, and to publish them along with a source distribution on PyPI. See .github/workflows/build.yml. CMake uses FetchContent to download pybind11, and building is managed by cibuildwheel. The package can then be installed with pip install chinillavdf.

Building Timelord and related binaries

In addition to building the required binary and source wheels for Windows, macOS and Linux, chinillavdf can be used to compile vdf_client and vdf_bench. vdf_client is the core VDF process that completes the Proof of Time submitted to it by the Timelord. The repo also includes vdf_bench, a benchmarking tool for estimating the iterations per second of a given CPU. Try ./vdf_bench square_asm 250000 for an ips estimate.

To build vdf_client, set the environment variable BUILD_VDF_CLIENT to "Y": export BUILD_VDF_CLIENT=Y.

Similarly, to build vdf_bench, set the environment variable BUILD_VDF_BENCH to "Y": export BUILD_VDF_BENCH=Y.

This is currently automated via pip in the install-timelord.sh script in the chinilla-blockchain repository which depends on this repository.

If you're running a timelord, the following tests are available, depending on which type of timelord you are running:

./1weso_test, in case you're running in sanitizer_mode.

./2weso_test, in case you're running a timelord that extends the chain and you're running the slow algorithm.

./prover_test, in case you're running a timelord that extends the chain and you're running the fast algorithm.

These tests simulate the vdf_client and verify the produced proofs for correctness.

Contributing and workflow

Contributions are welcome and more details are available in chinilla-blockchain's CONTRIBUTING.md.

The master branch holds the latest version released on PyPI. Note that at times chinillavdf will be ahead of the release version that chinilla-blockchain requires in its master/release version, in preparation for a new chinilla-blockchain release. Please branch or fork master and then create a pull request against the master branch. Linear merging is enforced on master, and merging requires a completed review. PRs will kick off a CI build and analysis of chinillavdf at lgtm.com. Please make sure your build passes and that it does not increase the number of alerts at lgtm.

Background from prior VDF competitions

Copyright 2018 Ilya Gorodetskov generic@sundersoft.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Our VDF construction is described in classgroup.pdf. The implementation details of the squaring and proving phases are described below.

Main VDF Loop

The main VDF loop produces repeated squarings of the generator form (i.e. calculates y(n) = g^(2^n)) as fast as possible, until the program is interrupted. Sundersoft's entry from Chinilla's 2nd VDF contest is used, together with the fast reducer used in Pulmark's entry. This approach is described below:
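
As a rough illustration of the loop's shape only, here is a minimal Python sketch that does the repeated squaring in a toy group (integers modulo N); the real implementation squares binary quadratic forms with NUDUPL in a class group, and vdf_loop is a hypothetical name:

def vdf_loop(g: int, N: int, n: int) -> int:
    # Repeatedly square g in the toy group: after n iterations,
    # y == g^(2^n) (mod N). The real loop runs until interrupted.
    y = g % N
    for _ in range(n):
        y = y * y % N
    return y

assert vdf_loop(3, 1000003, 10) == pow(3, 2**10, 1000003)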

The NUDUPL algorithm is used. The equations are based on cryptoslava's equations from the 1st contest. They were modified slightly to increase the level of parallelism.

The GCD is a custom implementation with scalar integers. There are two base cases: one uses a lookup table of continued fractions and the other uses the Euclidean algorithm with a division table. The division table algorithm is slightly faster even though it has about 2x as many iterations.

After the base case, there is a 128 bit GCD that generates 64 bit cofactor matrices with Lehmer's algorithm. This is required to make the long integer multiplications efficient (Flint's implementation doesn't do this).
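
The cofactor-matrix idea can be sketched in Python as follows. This is a hedged simplification: the quotients here come from the full-precision values, whereas the real code derives them from the top 128 bits and emits 64-bit matrices; gcd_with_cofactors is a hypothetical name:

def gcd_with_cofactors(a: int, b: int, steps: int):
    # Run a few Euclidean steps while accumulating the 2x2 cofactor
    # matrix M such that (a', b') = M * (a, b).
    m00, m01, m10, m11 = 1, 0, 0, 1
    for _ in range(steps):
        if b == 0:
            break
        q = a // b
        a, b = b, a - q * b
        m00, m01, m10, m11 = m10, m11, m00 - q * m10, m01 - q * m11
    return (a, b), (m00, m01, m10, m11)

a0, b0 = 987654321, 123456789
(a1, b1), (m00, m01, m10, m11) = gcd_with_cofactors(a0, b0, 5)
assert a1 == m00 * a0 + m01 * b0 and b1 == m10 * a0 + m11 * b0

Applying one small matrix to the full-size integers replaces many expensive long divisions with a handful of long multiplications, which is what makes Lehmer's approach fast.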

The GCD also implements Flint's partial xgcd function, but the output is slightly different. This implementation will always return an A value greater than the threshold and a B value less than or equal to the threshold. For a normal GCD, the threshold is 0, B is 0, and A is the GCD. The interfaces are also slightly different.
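
A hedged sketch of the threshold behaviour described above (partial_gcd is a hypothetical name; the real function also returns cofactors):

import math

def partial_gcd(a: int, b: int, threshold: int = 0):
    # Assuming a > b > 0: iterate until the second value is at or
    # below the threshold, so A > threshold and B <= threshold.
    while b > threshold:
        a, b = b, a % b
    return a, b

A, B = partial_gcd(240, 46)        # threshold 0: a plain GCD
assert B == 0 and A == math.gcd(240, 46)
A, B = partial_gcd(240, 46, 5)     # partial: stop early
assert A > 5 and B <= 5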

Scalar integers are used for the GCD. I don't expect any speedup from the SIMD integers that were used in the last implementation, since the GCD only uses 64x1024 multiplications, which are too small and have too high a carry overhead for the SIMD version to be faster. In either case, most of the time seems to be spent in the base case, so it shouldn't matter too much.

If SIMD integers are used with AVX-512, doubles have to be used because the multiplier sizes for doubles are significantly larger than for integers. There is an AVX-512 extension to support larger integer multiplications but no processor implements it yet. It should be possible to do a 50 bit multiply-add into a 100 bit accumulator with 4 fused multiply-adds if the accumulators have a special nonzero initial value and the inputs are scaled before the multiplication. This would make AVX-512 about 2.5x faster than scalar code for 1024x1024 integer multiplications (assuming the scalar code is unrolled and uses ADOX/ADCX/MULX properly, and the CPU can execute this at 1 cycle per iteration which it probably can't).

The GCD is parallelized by calculating the cofactors in a separate slave thread. The master thread will calculate the cofactor matrices and send them to the slave thread. Other calculations are also parallelized.

The VDF implementation from the first contest is still used as a fallback and is called about once every 5000 iterations. The GCD will encounter large quotients about this often and these are not implemented. This has a negligible effect on performance. Also, the NUDUPL case where A<=L is not implemented; it will fall back to the old implementation in this case (this never happens outside of the first 20 or so iterations).

There is also corruption detection: C is recalculated with a non-exact division, and the remainder is checked to be 0. This detected all injected random corruptions that I tested, and no corruptions caused by bugs were observed during testing. It cannot detect the sign of B being wrong.
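
A hedged sketch of the check, assuming forms (A, B, C) with a fixed discriminant D = B^2 - 4AC (check_form and the sample numbers are hypothetical):

def check_form(A: int, B: int, D: int) -> int:
    # Recompute C = (B^2 - D) / (4A) with a non-exact division and
    # require a zero remainder; a nonzero remainder means A or B was
    # corrupted. B only appears squared, so a flipped sign of B passes.
    q, r = divmod(B * B - D, 4 * A)
    if r != 0:
        raise ValueError("corrupted form")
    return q

C = check_form(A=2, B=1, D=-103)   # (1 - (-103)) / 8 = 13
assert 1 - 4 * 2 * C == -103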

GCD continued fraction lookup table

This is implemented in gcd_base_continued_fractions.h and asm_gcd_base_continued_fractions.h. The division table implementation is the same as in the previous entry and was discussed there. Currently the division table is only used if AVX2 is enabled, but it could easily be ported to SSE or scalar code. Both implementations have about the same performance.

The initial quotient sequence of gcd(a,b) is the same as the initial quotient sequence of gcd(a*2^n/b, 2^n) for any n. This is because the GCD quotients are the same as the continued fraction quotients of a/b, and the initial continued fraction quotients only depend on the initial bits of a/b. This makes it feasible to have a lookup table since it now only has one input.
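
This identity is easy to check numerically; the sketch below (with a hypothetical euclid_quotients helper) compares the first few quotients:

def euclid_quotients(a: int, b: int):
    # The Euclidean quotients of gcd(a, b) are exactly the continued
    # fraction quotients of a/b.
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

a, b, n = 105, 38, 16
# gcd(a, b) and gcd(a*2^n // b, 2^n) agree on the initial quotients.
assert euclid_quotients(a, b)[:4] == euclid_quotients(a * 2**n // b, 2**n)[:4]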

a*2^n/b is calculated by doing a double precision division of a/b and then truncating the lower bits. Some of the exponent bits are used in the table in addition to the fraction bits; this makes each slot of the table vary in size depending on what the exponent is. If the result is outside the table bounds, the division result is floored and the code falls back to the Euclidean algorithm (this is very rare).

The table is calculated by iterating all of the possible continued fractions that have a certain initial quotient sequence. Iteration ends when all of these fractions are either outside the table or don't fully contain at least one slot of the table. Each slot that is fully contained by such a fraction is updated so that its quotient sequence equals the fraction's initial quotient sequence. Once this is complete, the cofactor matrices are calculated from the quotient sequences. Each cofactor matrix is 4 doubles.

The resulting code seems to have too many instructions so it doesn't perform very well. There might be some way to optimize it. It was written for SSE so that it would run on both processors.

This might work better on an FPGA possibly with low latency DRAM or SRAM (compared to the euclidean algorithm with a division table). There is no limit to the size of the table but doubling the latency would require the number of bits in the table to also be doubled to have the same performance.

Other GCD code

The gcd_128 function calculates a 128 bit GCD using Lehmer's algorithm. It is pretty straightforward and uses only unsigned arithmetic. Each cofactor matrix can only have two possible sign patterns: [+ -; - +] or [- +; + -]. The gcd_unsigned function uses unsigned arithmetic and a jump table to apply the 64-bit cofactor matrices to the A and B values. It uses ADOX/ADCX/MULX if they are available and falls back to ADC/MUL otherwise. It tracks the last known size of A to speed up the bit shifts required to get the top 128 bits of A.

No attempt was made to try to do the A and B long integer multiplications on a separate thread; I wouldn't expect any performance improvement from this.

Threads

There is a master thread and a slave thread. The slave thread only exists for each batch of 5000 or so squarings and is then destroyed and recreated for the next batch (this has no measurable overhead). If the original VDF is used as a fallback, the batch ends and the slave thread is destroyed.

Each thread has a 64-bit counter that only it can write to. Also, during a squaring iteration, it will not overwrite any value that it has previously written and transmitted to the other thread. Each squaring is split up into phases. Each thread will update its counter at the start of the phase (the counter can only be increased, not decreased). It can then wait on the other thread's counter to reach a certain value as part of a spin loop. If the spin loop takes too long, an error condition is raised and the batch ends; this should prevent any deadlocks from happening.
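
A hedged Python sketch of this counter protocol (all names are hypothetical; the real code is C++, where the x86 ordering guarantees below matter):

import threading, time

counters = {"master": 0, "slave": 0}   # each key written by one thread only

def spin_until(name: str, target: int, timeout: float = 1.0):
    # Bounded spin loop: raising on timeout ends the batch instead of
    # deadlocking.
    deadline = time.monotonic() + timeout
    while counters[name] < target:
        if time.monotonic() > deadline:
            raise RuntimeError("spin timeout; ending batch")

def slave():
    for phase in range(1, 4):
        spin_until("master", phase)    # wait for master to start the phase
        counters["slave"] = phase      # publish progress (only increases)

t = threading.Thread(target=slave)
t.start()
for phase in range(1, 4):
    counters["master"] = phase         # announce the phase
    spin_until("slave", phase)         # wait for the slave to catch up
t.join()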

No CPU fences or atomics are required since each value can only be written to by one thread and since x86 enforces acquire/release ordering on all memory operations. Compiler memory fences are still required to prevent the compiler from caching or reordering memory operations.

The GCD master thread will increment the counter when a new cofactor matrix has been outputted. The slave thread will spin on this counter and then apply the cofactor matrix to the U or V vector to get a new U or V vector.

An attempt was made to use modular arithmetic to calculate k directly, but this slowed the program down because GMP's modulo and integer multiply operations are not fast enough. It also makes the integer multiplications bigger.

The speedup isn't very high since most of the time is spent in the GCD base case and these can't be parallelized.

Generating proofs

Nested Wesolowski proofs (n-wesolowski) are used to check the correctness of a VDF result. (Simple) Wesolowski proofs are described in A Survey of Two Verifiable Delay Functions. In order to prove h = g^(2^T), an n-wesolowski proof uses n intermediate simple Wesolowski proofs. Given h, g, T, t1, t2, ..., tn, h1, h2, ..., hn, a correct n-wesolowski proof will verify the following:

h1 = g^(2^t1)
h2 = h1^(2^t2)
h3 = h2^(2^t3)
...
hn = h(n-1)^(2^tn)

Additionally, we must have:

t1 + t2 + ... + tn = T
hn = h
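
The chaining conditions are easy to state in code. Below is a hedged toy check in a multiplicative group modulo N; the real construction works over class group forms and attaches a simple Wesolowski proof to every link (check_chain is a hypothetical name):

def check_chain(g, h, T, links, N):
    # links: list of (t_i, h_i) with h_i = previous^(2^t_i);
    # require sum(t_i) == T and the last h_i == h.
    prev, total = g % N, 0
    for t_i, h_i in links:
        if h_i != pow(prev, 2**t_i, N):
            return False
        prev, total = h_i, total + t_i
    return total == T and prev == h % N

N, g, t1, t2 = 1000003, 3, 5, 7
h1 = pow(g, 2**t1, N)
h2 = pow(h1, 2**t2, N)                # g^(2^(t1+t2))
assert check_chain(g, h2, t1 + t2, [(t1, h1), (t2, h2)], N)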

The algorithm generates n-wesolowski proofs with n at most 64. Some intermediate Wesolowski proofs are produced in parallel with the main VDF loop. The goal is to have an n-wesolowski proof almost ready as soon as the main VDF loop finishes computing h = g^(2^T), for a T that we're interested in. We'll call a tuple (y, x, T) for which we're interested in a simple Wesolowski proof that y = x^(2^T) a segment, and we'll call a segment finished when we've finished computing its proof.

Segments stored

We'll store finished segments of length 2^P for every even P greater than or equal to 16, i.e. P = 16 + 2*l for l >= 0. The current implementation limits the maximum segment size to 2^30, but this can be increased if needed. After each 2^P steps calculated by the main VDF loop, we'll store a segment proving that those 2^P steps were done correctly. Formally, let x be the form after k*2^P steps and y the form after (k+1)*2^P steps, for each k >= 0 and each P = 16 + 2*l. Then we'll store a segment (y, x, 2^P), together with a simple Wesolowski proof.
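
A hedged sketch of this storage rule (segments_after is a hypothetical name), ignoring the proofs themselves and listing only which (start, end, length) segments exist after a given number of iterations:

def segments_after(total_iters: int):
    # One segment of length 2^P per completed multiple of 2^P,
    # for each even P from 16 up to the 2^30 cap.
    segs = []
    for P in range(16, 31, 2):
        for k in range(total_iters // 2**P):
            segs.append((k * 2**P, (k + 1) * 2**P, 2**P))
    return segs

# After 2^18 iterations: four 2^16 segments and one 2^18 segment.
lens = [length for (_, _, length) in segments_after(2**18)]
assert lens.count(2**16) == 4 and lens.count(2**18) == 1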

Segment threads

In order to finish a segment of length T=2^P, the number of iterations to run is T/k + l*2^(k+1) and the intermediate storage required is T/(k*l), for parameters k and l as described in the paper. The squarings used to finish a segment are about 2 times as slow as the ones used by the main VDF loop. Even so, finishing a segment is much faster than producing its y value in the main VDF loop, so in the time the main VDF loop takes to finish 2^16 more steps, work can be done towards finishing multiple segments.

The parameters used for finishing segments are k=10 and l=1 for T=2^16; above that, k=12 and l=2^(P-18). Note that for P >= 18 the intermediate storage needed per segment is constant (i.e. 2^18/12 forms stored in memory).
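
A worked check of these formulas, using integer division for illustration (finish_cost is a hypothetical name):

def finish_cost(T: int, k: int, l: int):
    iters = T // k + l * 2**(k + 1)   # iterations to finish the segment
    storage = T // (k * l)            # intermediate forms kept in memory
    return iters, storage

assert finish_cost(2**16, 10, 1) == (2**16 // 10 + 2**11, 2**16 // 10)
# For P = 20: k=12, l=2^(20-18)=4, and storage equals the constant 2^18/12.
_, storage = finish_cost(2**20, 12, 4)
assert storage == 2**18 // 12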

The Prover class is responsible for finishing a segment. It implements pause/resume functionality, so its work can be paused and later resumed from the point where it stopped. For each unfinished segment generated by the main VDF loop, a Prover instance is created, which will eventually finish the segment.

Segment threads are responsible for deciding which Prover instances are currently running. In the current implementation, there are 3 segment threads (the number is configurable), so at most 3 Prover instances will run at once, on different threads (the other Provers are paused). The segment threads always pick the segments with the shortest length to run; in case of a tie, the segments received earliest have priority. Every time a new segment arrives, or a segment is finished, Provers are paused or resumed as needed: pausing keeps at most 3 Provers running at any time, while resuming happens when fewer than 3 Provers are working but some are paused.
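
A hedged sketch of the selection rule (pick_running is a hypothetical name; the real scheduler also drives the pause/resume calls on the Prover instances):

import heapq

MAX_RUNNING = 3

def pick_running(segments):
    # segments: (length, arrival_index) pairs for unfinished segments.
    # Shortest lengths win; ties go to the earliest arrival.
    return heapq.nsmallest(MAX_RUNNING, segments)

pending = [(2**20, 0), (2**16, 1), (2**18, 2), (2**16, 3)]
assert pick_running(pending) == [(2**16, 1), (2**16, 3), (2**18, 2)]
# The 2^20 segment stays paused until a slot frees up.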

All the segments of lengths 2^16, 2^18 and 2^20 will be finished relatively soon after the main VDF worker produces them, while the segments of length 2^22 and upwards will lag behind the main VDF worker a little. Eventually, all the larger segments will be finished, the work on them being done in bursts via pausing (when a smaller segment arrives) and resuming (when all smaller segments are finished).

Currently, 4 more segment threads are added after the main VDF loop passes 500 million iterations (after about 1 hour of running). This is done to make sure that even the largest segments will be finished. This optimisation is only enabled on machines supporting at least 16 concurrent threads.

Generating an n-wesolowski proof

Let T be an iteration count we are interested in. Firstly, the main VDF loop will need to calculate at least T iterations. Then, to quickly obtain an n-wesolowski proof, we concatenate finished segments. We want the proof to be as short as possible, so we always pick finished segments of the maximum possible length, choosing shorter segments only where the longer ones aren't finished. A segment of length 2^(16 + 2*p) can always be replaced with 4 segments of length 2^(16 + 2*p - 2). The proof will be created shortly after the main VDF loop produces the result, as the 2^16-length segments are always up to date with the main VDF loop (in the worst case, we can always concatenate 2^16-length segments if bigger sizes are not finished yet). It's possible that after the concatenation we still need to prove up to 2^16 iterations, since no segment can cover anything shorter than 2^16. This last piece of work is done in parallel with the main VDF loop, as an optimisation.
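
A hedged sketch of the greedy concatenation (hypothetical names, and it ignores the alignment of segments to k*2^P boundaries that the real code must respect):

def concatenate(T: int, finished_lengths):
    # Cover T iterations with the largest finished segment lengths,
    # leaving a remainder below 2^16 to be proved directly.
    picked, rest = [], T
    for L in sorted(finished_lengths, reverse=True):
        while rest >= L:
            picked.append(L)
            rest -= L
    return picked, rest

picked, leftover = concatenate(2**20 + 5, [2**16, 2**18, 2**20])
assert picked == [2**20] and leftover == 5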

The program limits proofs to 64-wesolowski. If the number of iterations is very large, the concatenation may not fit within this limit. In that case, the program will retry proving every minute until there are enough large segments finished to fit the 64-wesolowski limit. In almost all cases, however, the concatenation fits the limit on the first try.

Since the maximum segment size is 2^30 and we can use at most 64 segments in a concatenation, the program will prove at most 2^36 iterations. This can be increased if needed.

Intermediate storage

In order to finish segments, some intermediate values need to be stored for each segment. For each possible segment length, we use a sliding window of length 20 to store those: for each segment length, we keep only the intermediate values needed for the last 20 segments produced by the main VDF loop. Since finishing segments is faster than producing them, we assume the segment threads won't fall more than 20 segments behind the main VDF loop for any segment length. Thanks to the sliding window technique, the memory used is always constant.
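
A hedged sketch of the sliding window, using a bounded deque per segment length (names are hypothetical):

from collections import deque

WINDOW = 20
intermediates = {}   # segment length -> last WINDOW segments' intermediates

def store(length: int, values):
    window = intermediates.setdefault(length, deque(maxlen=WINDOW))
    window.append(values)   # the oldest segment's values are evicted

for k in range(100):
    store(2**16, f"values for segment {k}")
assert len(intermediates[2**16]) == WINDOW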

Generally, the main VDF loop performs all the storing after computing a form we're interested in. However, since storing is very frequent and expensive (GMP operations), this slows down the main VDF loop.

For machines having at least 16 concurrent threads, an optimisation is provided: the main VDF loop does only repeated squaring, without storing any form. After each 2^15 steps are performed, a new thread starts redoing the work for those 2^15 steps, this time storing the intermediate values as well. All the intermediate threads and the main VDF loop work in parallel; the only purpose of the main VDF loop becomes producing the starting values for the intermediate threads as fast as possible. The squarings used in the intermediate threads are about 2 times slower than the ones used in the main VDF loop. The intermediates are expected to lag behind the main VDF loop by only 2^15 iterations at any point: by the time the main VDF loop has done 2^16 iterations, the first thread (covering the first 2^15 intermediate values) has already finished, and about half of the second thread's work on the next 2^15 values should be done.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chinillavdf-1.0.8.tar.gz (639.3 kB)

Uploaded Source

Built Distributions

chinillavdf-1.0.8-cp311-cp311-win_amd64.whl (1.9 MB)

Uploaded CPython 3.11 Windows x86-64

chinillavdf-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (474.7 kB)

Uploaded CPython 3.11 manylinux: glibc 2.17+ x86-64

chinillavdf-1.0.8-cp311-cp311-macosx_10_14_x86_64.whl (336.8 kB)

Uploaded CPython 3.11 macOS 10.14+ x86-64

chinillavdf-1.0.8-cp310-cp310-win_amd64.whl (1.9 MB)

Uploaded CPython 3.10 Windows x86-64

chinillavdf-1.0.8-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.2 kB)

Uploaded CPython 3.10 manylinux: glibc 2.12+ x86-64

chinillavdf-1.0.8-cp310-cp310-macosx_10_14_x86_64.whl (336.8 kB)

Uploaded CPython 3.10 macOS 10.14+ x86-64

chinillavdf-1.0.8-cp39-cp39-win_amd64.whl (1.9 MB)

Uploaded CPython 3.9 Windows x86-64

chinillavdf-1.0.8-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.3 kB)

Uploaded CPython 3.9 manylinux: glibc 2.12+ x86-64

chinillavdf-1.0.8-cp39-cp39-macosx_10_14_x86_64.whl (336.9 kB)

Uploaded CPython 3.9 macOS 10.14+ x86-64

chinillavdf-1.0.8-cp38-cp38-win_amd64.whl (1.9 MB)

Uploaded CPython 3.8 Windows x86-64

chinillavdf-1.0.8-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.0 kB)

Uploaded CPython 3.8 manylinux: glibc 2.12+ x86-64

chinillavdf-1.0.8-cp38-cp38-macosx_10_14_x86_64.whl (336.9 kB)

Uploaded CPython 3.8 macOS 10.14+ x86-64

chinillavdf-1.0.8-cp37-cp37m-win_amd64.whl (1.9 MB)

Uploaded CPython 3.7m Windows x86-64

chinillavdf-1.0.8-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (464.1 kB)

Uploaded CPython 3.7m manylinux: glibc 2.12+ x86-64

chinillavdf-1.0.8-cp37-cp37m-macosx_10_14_x86_64.whl (336.8 kB)

Uploaded CPython 3.7m macOS 10.14+ x86-64

File details

Details for the file chinillavdf-1.0.8.tar.gz.

File metadata

  • Download URL: chinillavdf-1.0.8.tar.gz
  • Upload date:
  • Size: 639.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.15

File hashes

Hashes for chinillavdf-1.0.8.tar.gz
Algorithm Hash digest
SHA256 d3fda634d97aebc745658a76d43e182d2ffb1c91027fe720ab33bbe5839d3b1a
MD5 485a26cb5695291306a81a2ca3ad5441
BLAKE2b-256 b25691608f38caee92e80cf46422077c6a65226de707ac1d1953c4e175f0f024

See more details on using hashes here.

File details

Details for the file chinillavdf-1.0.8-cp311-cp311-win_amd64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 2729e892562fd3f13c173c9d0fd1ca84ed1e3c2d24ad513e2f0707900b642f63
MD5 3b60c67db79ba0f7624b569408f4b7e8
BLAKE2b-256 5fa663f04713f33706298e29230fa4ea1ad7c1e3649c85b44ee82838275fe2e2

File details

Details for the file chinillavdf-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 08bf800f0399bfd8eec72cb09268a8011442b77ba7948ebffdd90d945831d447
MD5 63dbca10819a86ff3118d253df57c77e
BLAKE2b-256 ad49c7baa87e5a28db799de53add051af69de0bad14f7eaad72eaabddf28789d

File details

Details for the file chinillavdf-1.0.8-cp311-cp311-macosx_10_14_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp311-cp311-macosx_10_14_x86_64.whl
Algorithm Hash digest
SHA256 632e806a56086561ff9355c70c06f74c849faf62c9c597a71a7f3db77300899b
MD5 fd9d18c92e7a8f1dde28e38a9cf95a67
BLAKE2b-256 90940981205f80e44c632941e450c76fff09f8e91691634e212d6085dc05f592

File details

Details for the file chinillavdf-1.0.8-cp310-cp310-win_amd64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 f75c318e2cedb0e157093d2c9e98fa59daff0461055f781bab53ca273cfa5de2
MD5 541e33bdacf4bf97730ea177bede7cda
BLAKE2b-256 051978f97c5f3f69836cc9bde2500ed22eb4f7ba284b70cf2e4421bd473b7f1e

File details

Details for the file chinillavdf-1.0.8-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 d831de8359c064ae2ae8ab4ef96f4449c673a571d63d9d6a5e3a1499363a7b32
MD5 165352f867b2bde8edbae481bdd6040d
BLAKE2b-256 746d900fa3cc0d4d8709240eab476a7841d084d203fd989a4b0882335232da61

File details

Details for the file chinillavdf-1.0.8-cp310-cp310-macosx_10_14_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp310-cp310-macosx_10_14_x86_64.whl
Algorithm Hash digest
SHA256 72a1fbbc05726833bbd885f21941e7d6665aa3b7c9fe351abca1bbdad96ae1e5
MD5 aaea8e287f69f8fbf404af9219b22fe1
BLAKE2b-256 b52ea1c6495ca7a158f6964dc17ee0cdca181c7662ae33148b1e3d0c9c96bbfb

File details

Details for the file chinillavdf-1.0.8-cp39-cp39-win_amd64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 e15107663c0231b16fca0aaa630e33772b3b9690bdedc66284ee9bb93ef7d26e
MD5 e1c8dce162761bc9006ffe0d7d676fe1
BLAKE2b-256 ed500b42cbdef7024eaada76c614c20b07ff3f0b89101b9fd27cdc63e73c11cb

File details

Details for the file chinillavdf-1.0.8-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 b05c42d9981f1cee0d9ea158c20a332d8b9ef474e4c929b38927d6ded4004442
MD5 dee1385d3a697c10a8f4fd7cba86784c
BLAKE2b-256 b666afc4be47d4cea9f531f94a31f4b230e15006a32a0eb8de82b570ea48a9e2

File details

Details for the file chinillavdf-1.0.8-cp39-cp39-macosx_10_14_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp39-cp39-macosx_10_14_x86_64.whl
Algorithm Hash digest
SHA256 d0de79b2e30f31d75a06a8e5f2e1650eb1a1890c22616cf83f371c47e653ad8b
MD5 6a2b9ce3fe7cf357b6893ac7a9de97df
BLAKE2b-256 5bbc89e50ea3dc54d4a40505d34b0457914ef1a0b7d629c27779108b1974c87e

File details

Details for the file chinillavdf-1.0.8-cp38-cp38-win_amd64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 e963d119d371b1fc22c6772552a2b988c23437184bab74f59a1f38ae802909ef
MD5 f79b028ebd9d8bb823a657813887b987
BLAKE2b-256 095c8d3159e9bc3de8bbcc2c77701a03c57cb78a22e1d6405412b66be84838fa

File details

Details for the file chinillavdf-1.0.8-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 a1fb8008320c23f37bcd48bdd4f0fb9149357b574c125e468e67f876f31adeb3
MD5 f3b17629758142655a65cc8df35b97b3
BLAKE2b-256 1b7637d5536c8abd83585814b50cd472ae854a105d7d58dc603a7325fcb86062

File details

Details for the file chinillavdf-1.0.8-cp38-cp38-macosx_10_14_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp38-cp38-macosx_10_14_x86_64.whl
Algorithm Hash digest
SHA256 711617db2ab7e26f6402d292c82e87cd91fc36ffeb1829ededa64cedaad4277c
MD5 bf7888d7c0c47423fa3cd3177abfa093
BLAKE2b-256 181f2c2211bc82ffcaa302aa35d771709c11c3cf04093a4537814822af907aa0

File details

Details for the file chinillavdf-1.0.8-cp37-cp37m-win_amd64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp37-cp37m-win_amd64.whl
Algorithm Hash digest
SHA256 8c30df24aa16a17b97c108406706c4cb5a41d748ca0c8703fb2c6ef22b3333d6
MD5 e6b842a31f91c232475981353a3532ac
BLAKE2b-256 cc88f82b9a62d2940163dd862b2a9f20e54f5e7cbb02c5bae1232b7a5ec56a56

File details

Details for the file chinillavdf-1.0.8-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 9d9c99e8037ed383a511be2fe7e9d108b152601d6d3016f7b61363c994246d99
MD5 e71b32e919c75dd112a3f7ebe14ad2d7
BLAKE2b-256 91bc9c106e72c92960510497f163955b13732c5d67b6f9d2344a50ab212c6339

File details

Details for the file chinillavdf-1.0.8-cp37-cp37m-macosx_10_14_x86_64.whl.

File hashes

Hashes for chinillavdf-1.0.8-cp37-cp37m-macosx_10_14_x86_64.whl
Algorithm Hash digest
SHA256 14b3321aedd4260b48089aec4d23f41775e1a044e4d88236a4f55b22cb3335bc
MD5 0de1e49c53105c8eb329fdf2e4c7a14a
BLAKE2b-256 f1805e1ce4baae4aadb856b2e7ae9fc56ce60273c56ad22e07cbecff9c968ada
