Lzip (.lz) archive compression and decompression with support for buffers and URLs

Project description

lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with NumPy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can also be used to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file("/path/to/output.lz", b"data to compress")

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder("/path/to/output.lz") as encoder:
    encoder.compress(b"data")
    encoder.compress(b" to")
    encoder.compress(b" compress")

Use FileEncoder without context management (with):

import lzip

encoder = lzip.FileEncoder("/path/to/output.lz")
encoder.compress(b"data")
encoder.compress(b" to")
encoder.compress(b" compress")
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype="<u4")

lzip.compress_to_file("/path/to/output.lz", values.tobytes())

lzip can use different compression levels. See the documentation below for details.

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file("/path/to/input.lz")

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter("/path/to/input.lz"):
    ...  # chunk is a bytes object

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter("/path/to/input.lz", word_size=4):
    values = numpy.frombuffer(chunk, dtype="<u4")

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz")

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter("http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz"):
    ...  # chunk is a bytes object

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

This package contains two modules. lzip handles high-level operations (opening and closing files, downloading remote data, changing default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_len_limit)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match length limits
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a byte-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See "Word size and remaining bytes" for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see "Word size and remaining bytes"
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see "Word size and remaining bytes"
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a non-zero positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b"") unless the file was truncated
          remaining_bytes is empty (b"") unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b"") even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, to facilitate the parsing of fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

If the total size is not a multiple of word_size, non-iter decoding functions only provide access to the remaining bytes, not to the decoded buffers.
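The word_size guarantee can be pictured as a small carry buffer: the tail of a decoded chunk that is not a whole number of words is held back and prepended to the next chunk. A pure-Python sketch of that bookkeeping (word_aligned_chunks is a hypothetical helper, not part of the lzip API):

```python
def word_aligned_chunks(chunks, word_size):
    """Re-cut the input chunks so that each yielded piece is a multiple of word_size.

    Raises ValueError if the total size is not a multiple of word_size,
    mirroring lzip.RemainingBytesError.
    """
    carry = b""
    for chunk in chunks:
        carry += chunk
        aligned = len(carry) - len(carry) % word_size
        if aligned > 0:
            yield carry[:aligned]
            carry = carry[aligned:]
    if carry:
        raise ValueError(f"{len(carry)} remaining byte(s)")

# three odd-sized chunks re-cut into multiples of 4 bytes
pieces = list(word_aligned_chunks([b"abcde", b"fgh", b"ijkl"], 4))
assert pieces == [b"abcd", b"efgh", b"ijkl"]
assert b"".join(pieces) == b"abcdefghijkl"
```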

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter("/path/to/archive.lz", 4):
        values = numpy.frombuffer(chunk, dtype="<u4")
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in "/path/to/archive.lz"
    # is not a multiple of 4 (after decompression)
    print(error) # prints "The total number of bytes is not a multiple of 4 (k remaining)"
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters of lzip functions are not constants, despite what the documentation above suggests. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file("/path/to/output0.lz", b"data to compress") # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file("/path/to/output1.lz", b"data to compress") # encoded at level 9
lzip.compress_to_file("/path/to/output2.lz", b"data to compress") # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file("/path/to/output3.lz", b"data to compress") # encoded at level 0

lzip exports the following default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51
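The pattern behind these module-level defaults can be reproduced in a few lines; a self-contained sketch with hypothetical names (not the actual lzip source):

```python
# module-level default that users may rebind at any time
default_level = 6

def compress_sketch(buffer, level=None):
    # resolve the default at call time, not at definition time
    if level is None:
        level = default_level
    return (level, buffer)  # stand-in for the real compression call

assert compress_sketch(b"data")[0] == 6
default_level = 9
assert compress_sketch(b"data")[0] == 9  # picks up the new module-level default
assert compress_sketch(b"data", level=0)[0] == 0  # an explicit argument still wins
```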

Publish

  1. Bump the version number in setup.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 cubuzoa.py build /path/to/lzip

  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
