Lzip (.lz) archive compression and decompression with support for buffers and URLs

Project description

lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with Numpy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can be used as well to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file("/path/to/output.lz", b"data to compress")

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder("/path/to/output.lz") as encoder:
    encoder.compress(b"data")
    encoder.compress(b" to")
    encoder.compress(b" compress")

Use FileEncoder without a context manager (close must then be called explicitly):

import lzip

encoder = lzip.FileEncoder("/path/to/output.lz")
encoder.compress(b"data")
encoder.compress(b" to")
encoder.compress(b" compress")
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype="<u4")

lzip.compress_to_file("/path/to/output.lz", values.tobytes())

lzip can use different compression levels. See the documentation below for details.

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file("/path/to/input.lz")

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter("/path/to/input.lz"):
    print(len(chunk))  # chunk is a bytes object

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter("/path/to/input.lz", word_size=4):
    values = numpy.frombuffer(chunk, dtype="<u4")

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz")

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz"):
    print(len(chunk))  # chunk is a bytes object

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

The present package contains two libraries. lzip deals with high-level operations (open and close files, download remote data, change default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_len_limit)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match length limits
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a byte-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See "Word size and remaining bytes" for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a non-zero positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b"") unless the file was truncated
          remaining_bytes is empty (b"") unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, which facilitates the parsing of fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

Non-iter decoding functions do not provide access to the decoded buffers if the total size is not a multiple of word_size (only the remaining bytes).

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter("/path/to/archive.lz", 4):
        values = numpy.frombuffer(chunk, dtype="<u4")
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in "/path/to/archive.lz"
    # is not a multiple of 4 (after decompression)
    print(error) # prints "The total number of bytes is not a multiple of 4 (k remaining)"
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters of the lzip functions are not hard-coded constants, despite what the signatures in this documentation suggest. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file("/path/to/output0.lz", b"data to compress") # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file("/path/to/output1.lz", b"data to compress") # encoded at level 9
lzip.compress_to_file("/path/to/output2.lz", b"data to compress") # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file("/path/to/output3.lz", b"data to compress") # encoded at level 0

lzip exports the following module-level default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51

Publish

  1. Bump the version number in version.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 -m cubuzoa build /path/to/lzip --post /path/to/lzip/test.py

  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
