
decompress lzip archives


lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with NumPy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can also be used to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file('/path/to/output.lz', b'data to compress')

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder('/path/to/output.lz') as encoder:
    encoder.compress(b'data')
    encoder.compress(b' to')
    encoder.compress(b' compress')

Use FileEncoder without a context manager (with statement):

import lzip

encoder = lzip.FileEncoder('/path/to/output.lz')
encoder.compress(b'data')
encoder.compress(b' to')
encoder.compress(b' compress')
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype='<u4')

lzip.compress_to_file('/path/to/output.lz', values.tobytes())

lzip can use different compression levels. See the documentation below for details.

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file('/path/to/input.lz')

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter('/path/to/input.lz'):
    ...  # chunk is a bytes object

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter('/path/to/input.lz', word_size=4):
    values = numpy.frombuffer(chunk, dtype='<u4')

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url('http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz')

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter('http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz'):
    ...  # chunk is a bytes object

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

This package contains two libraries: lzip deals with high-level operations (opening and closing files, downloading remote data, changing default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_len_limit)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match lengths
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a byte-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See 'Word size and remaining bytes' for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a byte-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see 'Word size and remaining bytes'
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a byte-like object, such as bytes or a bytearray
    - word_size: see 'Word size and remaining bytes'
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or a HTTP response
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b'') unless the file was truncated
          remaining_bytes is empty (b'') unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a byte-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, to facilitate the parsing of fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

Non-iter decoding functions do not provide access to the decoded buffers if the total size is not a multiple of word_size (only the remaining bytes).

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter('/path/to/archive.lz', 4):
        values = numpy.frombuffer(chunk, dtype='<u4')
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in '/path/to/archive.lz'
    # is not a multiple of 4 (after decompression)
    print(error) # prints 'The total number of bytes is not a multiple of 4 (k remaining)'
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters of lzip functions are not constants, despite what the documentation above suggests. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file('/path/to/output0.lz', b'data to compress') # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file('/path/to/output1.lz', b'data to compress') # encoded at level 9
lzip.compress_to_file('/path/to/output2.lz', b'data to compress') # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file('/path/to/output3.lz', b'data to compress') # encoded at level 0

lzip exports the following default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51

Publish

  1. Bump the version number in setup.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 cubuzoa.py build /path/to/lzip

  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
