Compression and decompression of Lzip (.lz) archives, with support for in-memory buffers and URLs

lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with Numpy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can be used as well to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file("/path/to/output.lz", b"data to compress")

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder("/path/to/output.lz") as encoder:
    encoder.compress(b"data")
    encoder.compress(b" to")
    encoder.compress(b" compress")

Use FileEncoder without context management (with):

import lzip

encoder = lzip.FileEncoder("/path/to/output.lz")
encoder.compress(b"data")
encoder.compress(b" to")
encoder.compress(b" compress")
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype="<u4")

lzip.compress_to_file("/path/to/output.lz", values.tobytes())

lzip can use different compression levels. See the documentation below for details.

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file("/path/to/input.lz")

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter("/path/to/input.lz"):
    # chunk is a bytes object; handle it here, for example:
    print(len(chunk))

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter("/path/to/input.lz", word_size=4):
    values = numpy.frombuffer(chunk, dtype="<u4")

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz")

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz"):
    # chunk is a bytes object; handle it here, for example:
    print(len(chunk))

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

The present package contains two libraries. lzip deals with high-level operations (open and close files, download remote data, change default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_len_limit)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match length limits
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a byte-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See "Word size and remaining bytes" for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or an HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or an HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a non-zero positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b"") unless the file was truncated
          remaining_bytes is empty (b"") unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, to facilitate the parsing of fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

Non-iter decoding functions do not provide access to the decoded buffers if the total size is not a multiple of word_size (only the remaining bytes).

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter("/path/to/archive.lz", 4):
        values = numpy.frombuffer(chunk, dtype="<u4")
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in "/path/to/archive.lz"
    # is not a multiple of 4 (after decompression)
    print(error) # prints "The total number of bytes is not a multiple of 4 (k remaining)"
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters in lzip functions are not constants, despite what is presented in the documentation. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file("/path/to/output0.lz", b"data to compress") # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file("/path/to/output1.lz", b"data to compress") # encoded at level 9
lzip.compress_to_file("/path/to/output2.lz", b"data to compress") # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file("/path/to/output3.lz", b"data to compress") # encoded at level 0

lzip exports the following default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51

Publish

  1. Bump the version number in setup.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 cubuzoa.py build /path/to/lzip --post /path/to/lzip/test.py
  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
