
Lzip (.lz) archive compression and decompression with support for buffers and URLs

Project description

lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with Numpy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can be used as well to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file("/path/to/output.lz", b"data to compress")

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder("/path/to/output.lz") as encoder:
    encoder.compress(b"data")
    encoder.compress(b" to")
    encoder.compress(b" compress")

Use FileEncoder without context management (with):

import lzip

encoder = lzip.FileEncoder("/path/to/output.lz")
encoder.compress(b"data")
encoder.compress(b" to")
encoder.compress(b" compress")
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype="<u4")

lzip.compress_to_file("/path/to/output.lz", values.tobytes())

lzip can use different compression levels. See the documentation below for details.

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file("/path/to/input.lz")

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter("/path/to/input.lz"):
    ...  # chunk is a bytes object

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter("/path/to/input.lz", word_size=4):
    values = numpy.frombuffer(chunk, dtype="<u4")

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz")

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz"):
    ...  # chunk is a bytes object

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

The present package contains two libraries. lzip deals with high-level operations (open and close files, download remote data, change default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_len_limit)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match lengths
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a byte-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See "Word size and remaining bytes" for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a non-zero positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b"") unless the file was truncated
          remaining_bytes is empty (b"") unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, to facilitate parsing fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

Non-iter decoding functions do not provide access to the decoded buffers if the total size is not a multiple of word_size (only the remaining bytes).

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter("/path/to/archive.lz", 4):
        values = numpy.frombuffer(chunk, dtype="<u4")
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in "/path/to/archive.lz"
    # is not a multiple of 4 (after decompression)
    print(error) # prints "The total number of bytes is not a multiple of 4 (k remaining)"
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters of lzip functions are not hard-coded constants, despite the values shown in the signatures above. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file("/path/to/output0.lz", b"data to compress") # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file("/path/to/output1.lz", b"data to compress") # encoded at level 9
lzip.compress_to_file("/path/to/output2.lz", b"data to compress") # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file("/path/to/output1.lz", b"data to compress") # encoded at level 0

lzip exports the following default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51

Publish

  1. Bump the version number in version.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 -m cubuzoa build /path/to/lzip --post /path/to/lzip/test.py

  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
