
Lzip (.lz) archive compression and decompression with support for buffers and URLs

Project description

lzip is a Python wrapper for lzlib (https://www.nongnu.org/lzip/lzlib.html) to encode and decode Lzip archives (https://www.nongnu.org/lzip/).

This package works with arbitrary byte sequences but provides features that facilitate interoperability with Numpy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs are supported as well, to download, decompress and process the chunks of a remote Lzip archive in one go.

pip3 install lzip

Quickstart

Compress

Compress an in-memory buffer and write it to a file:

import lzip

lzip.compress_to_file('/path/to/output.lz', b'data to compress')

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

import lzip

with lzip.FileEncoder('/path/to/output.lz') as encoder:
    encoder.compress(b'data')
    encoder.compress(b' to')
    encoder.compress(b' compress')

Use FileEncoder without a context manager (with statement):

import lzip

encoder = lzip.FileEncoder('/path/to/output.lz')
encoder.compress(b'data')
encoder.compress(b' to')
encoder.compress(b' compress')
encoder.close()

Compress a Numpy array and write the result to a file:

import lzip
import numpy

values = numpy.arange(100, dtype='<u4')

lzip.compress_to_file('/path/to/output.lz', values.tobytes())

lzip can use different compression levels. See the documentation below for details.
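For reference, each integer level corresponds to a (dictionary size, match length) pair. The table below is transcribed from the lzip manual (the manual is authoritative; LEVEL_TO_PARAMETERS is an illustrative name, not part of lzip's API):

```python
# Mapping from integer compression levels to (dictionary_size, match_len_limit)
# pairs, following the table in the lzip manual.
LEVEL_TO_PARAMETERS = {
    0: (1 << 16, 16),   # 64 KiB, fastest
    1: (1 << 20, 5),    # 1 MiB
    2: (3 << 19, 6),    # 1.5 MiB
    3: (1 << 21, 8),    # 2 MiB
    4: (3 << 20, 12),   # 3 MiB
    5: (1 << 22, 20),   # 4 MiB
    6: (1 << 23, 36),   # 8 MiB, default
    7: (1 << 24, 68),   # 16 MiB
    8: (3 << 23, 132),  # 24 MiB
    9: (1 << 25, 273),  # 32 MiB, slowest
}

# level 6 matches the Encoder defaults documented below
# (dictionary_size=(1 << 23), match_len_limit=36)
assert LEVEL_TO_PARAMETERS[6] == (1 << 23, 36)
```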

Decompress

Read and decompress a file to an in-memory buffer:

import lzip

buffer = lzip.decompress_file('/path/to/input.lz')

Read and decompress a file one chunk at a time (useful for large files):

import lzip

for chunk in lzip.decompress_file_iter('/path/to/input.lz'):
    pass  # chunk is a bytes object

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

import lzip
import numpy

for chunk in lzip.decompress_file_iter('/path/to/input.lz', word_size=4):
    values = numpy.frombuffer(chunk, dtype='<u4')

Download and decompress data from a URL:

import lzip

# option 1: store the whole decompressed file in a single buffer
buffer = lzip.decompress_url('http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz')

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter('http://download.savannah.gnu.org/releases/lzip/lzip-1.22.tar.lz'):
    pass  # chunk is a bytes object

lzip can also decompress data from an in-memory buffer. See the documentation below for details.

Documentation

This package contains two modules. lzip provides high-level operations (opening and closing files, downloading remote data, overriding default arguments...) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers.

lzip uses lzip_extension internally. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip

FileEncoder

class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and write the compressed bytes to a file
        - path is the output file name, it must be a path-like object such as a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple (dictionary_size, match_length)
          0 is the fastest compression level, 9 is the slowest
          see https://www.nongnu.org/lzip/manual/lzip_manual.html for the mapping between
          integer levels, dictionary sizes and match lengths
        - member_size can be used to change the compressed file's maximum member size
          see the Lzip manual for details on the tradeoffs incurred by this value
        """

    def compress(self, buffer):
        """
        Encode a buffer and write the compressed bytes into the file
        - buffer must be a bytes-like object, such as bytes or a bytearray
        """

    def close(self):
        """
        Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.

BufferEncoder

class BufferEncoder:
    def __init__(self, level=6, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - level: see FileEncoder
        - member_size: see FileEncoder
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a bytes-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (non-compressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

RemainingBytesError

class RemainingBytesError(Exception):
    def __init__(self, word_size, buffer):
        """
        Raised by decompress_* functions if the total number of bytes is not a multiple of word_size
        The remaining bytes are stored in self.buffer
        See 'Word size and remaining bytes' for details
        """

compress_to_buffer

def compress_to_buffer(buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and return the compressed bytes as an in-memory buffer
    - buffer must be a bytes-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    This function returns a bytes object
    """

compress_to_file

def compress_to_file(path, buffer, level=6, member_size=(1 << 51)):
    """
    Encode a single buffer and write the compressed bytes into a file
    - path is the output file name, it must be a path-like object such as a string or a pathlib path
    - buffer must be a bytes-like object, such as bytes or a bytearray
    - level: see FileEncoder
    - member_size: see FileEncoder
    """

decompress_buffer

def decompress_buffer(buffer, word_size=1):
    """
    Decode a single buffer and return the decompressed bytes as an in-memory buffer
    - buffer must be a bytes-like object, such as bytes or a bytearray
    - word_size: see 'Word size and remaining bytes'
    This function returns a bytes object
    """

decompress_buffer_iter

def decompress_buffer_iter(buffer, word_size=1):
    """
    Decode a single buffer and return an in-memory buffer iterator
    - buffer must be a bytes-like object, such as bytes or a bytearray
    - word_size: see 'Word size and remaining bytes'
    This function returns a bytes object iterator
    """

decompress_file

def decompress_file(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return the decompressed bytes as an in-memory buffer
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: the number of bytes to read from the file at once
      large values increase memory usage but very small values impede performance
    This function returns a bytes object
    """

decompress_file_iter

def decompress_file_iter(path, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file and return an in-memory buffer iterator
    - path is the input file name, it must be a path-like object such as a string or a pathlib path
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_file_like

def decompress_file_like(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return the decompressed bytes as an in-memory buffer
    - file_like is a file-like object, such as a file or an HTTP response
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_file_like_iter

def decompress_file_like_iter(file_like, word_size=1, chunk_size=(1 << 16)):
    """
    Read and decode a file-like object and return an in-memory buffer iterator
    - file_like is a file-like object, such as a file or an HTTP response
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

decompress_url

def decompress_url(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return the decompressed bytes as an in-memory buffer
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object
    """

decompress_url_iter

def decompress_url_iter(
    url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, cafile=None, capath=None, context=None,
    word_size=1,
    chunk_size=(1 << 16)):
    """
    Download and decode data from a URL and return an in-memory buffer iterator
    - url must be a string or a urllib.request.Request object
    - data, timeout, cafile, capath and context are passed to urllib.request.urlopen
      see https://docs.python.org/3/library/urllib.request.html for details
    - word_size: see 'Word size and remaining bytes'
    - chunk_size: see decompress_file
    This function returns a bytes object iterator
    """

lzip_extension

Even though lzip_extension behaves like a conventional Python module, it is written in C++. To keep the implementation simple, only positional arguments are supported (keyword arguments do not work). The Python classes documented below are equivalent to the classes exported by this low-level implementation.

You can use lzip_extension by importing it like any other module. lzip.py uses it extensively.

Decoder

class Decoder:
    def __init__(self, word_size=1):
        """
        Decode sequential byte buffers and return the decompressed bytes as in-memory buffers
        - word_size is a positive integer
          all the output buffers contain a number of bytes that is a multiple of word_size
        """

    def decompress(self, buffer):
        """
        Decode a buffer and return the decompressed bytes as an in-memory buffer
        - buffer must be a bytes-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (compressed) buffers and
        output (decompressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the decoder contents
        This function returns a tuple (buffer, remaining_bytes)
          Both buffer and remaining_bytes are bytes objects
          buffer should be empty (b'') unless the file was truncated
          remaining_bytes is empty (b'') unless the total number of bytes decoded
          is not a multiple of word_size

        decompress must not be called after calling finish
        Failing to call finish delays garbage collection which can be an issue
        when decoding many files in a row, and prevents the algorithm from detecting
        remaining bytes (if the size is not a multiple of word_size)
        """

Encoder

class Encoder:
    def __init__(self, dictionary_size=(1 << 23), match_len_limit=36, member_size=(1 << 51)):
        """
        Encode sequential byte buffers and return the compressed bytes as in-memory buffers
        - dictionary_size is an integer in the range [(1 << 12), (1 << 29)]
        - match_len_limit is an integer in the range [5, 273]
        - member_size is an integer in the range [(1 << 12), (1 << 51)]
        """

    def compress(self, buffer):
        """
        Encode a buffer and return the compressed bytes as an in-memory buffer
        - buffer must be a bytes-like object, such as bytes or a bytearray
        This function returns a bytes object

        The compression algorithm may decide to buffer part or all of the data,
        hence the relationship between input (decompressed) buffers and
        output (compressed) buffers is not one-to-one
        In particular, the returned buffer can be empty (b'') even if the input buffer is not
        """

    def finish(self):
        """
        Flush the encoder contents
        This function returns a bytes object

        compress must not be called after calling finish
        Failing to call finish results in corrupted encoded buffers
        """

Compare options

The script compare_options.py uses the lzip library to compare the compression ratio of different pairs (dictionary_size, match_len_limit). It runs multiple compressions in parallel and does not store the compressed bytes. About 3 GB of RAM are required to run the script. Processing time depends on the file size and the number of processors on the machine.

The script requires matplotlib (pip3 install matplotlib) to display the results.

python3 compare_options.py /path/to/uncompressed/file [--chunk-size=65536]

Word size and remaining bytes

Decoding functions take an optional parameter word_size that defaults to 1. Decoded buffers are guaranteed to contain a number of bytes that is a multiple of word_size, to facilitate the parsing of fixed-size words (for example with numpy.frombuffer). If the total size of the uncompressed archive is not a multiple of word_size, lzip.RemainingBytesError is raised after iterating over the last chunk. The raised exception provides access to the remaining bytes.

Non-iter decoding functions only provide access to the remaining bytes (not the decoded buffers) when the total size is not a multiple of word_size.
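The chunk-alignment behaviour can be sketched in pure Python; the rechunk generator below is a simplified illustration of what the decoder guarantees, not the actual implementation:

```python
def rechunk(chunks, word_size):
    # yield buffers whose sizes are multiples of word_size,
    # carrying leftover bytes over to the next chunk
    remainder = b''
    for chunk in chunks:
        buffer = remainder + chunk
        cut = len(buffer) - (len(buffer) % word_size)
        remainder = buffer[cut:]
        if cut > 0:
            yield buffer[:cut]
    if len(remainder) > 0:
        # at this point, lzip raises RemainingBytesError
        # with the leftover bytes stored in the exception
        raise ValueError(f'{len(remainder)} bytes remaining')

# every yielded chunk has a length that is a multiple of 4
assert list(rechunk([b'abcde', b'fgh'], 4)) == [b'abcd', b'efgh']
```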

The following example decodes a file and converts the decoded bytes to 4-bytes unsigned integers:

import lzip
import numpy

try:
    for chunk in lzip.decompress_file_iter('/path/to/archive.lz', 4):
        values = numpy.frombuffer(chunk, dtype='<u4')
except lzip.RemainingBytesError as error:
    # this block is executed only if the number of bytes in '/path/to/archive.lz'
    # is not a multiple of 4 (after decompression)
    print(error) # prints 'The total number of bytes is not a multiple of 4 (k remaining)'
                 # where k is in [1, 3]
    # error.buffer is a bytes object and contains the k remaining bytes

Default parameters

The default parameters in lzip functions are not constants, despite what is presented in the documentation. The actual implementation looks like this:

def some_function(some_parameter=None):
    if some_parameter is None:
        some_parameter = some_parameter_default_value

This approach makes it possible to change default values at the module level at any time. For example:

import lzip

lzip.compress_to_file('/path/to/output0.lz', b'data to compress') # encoded at level 6 (default)

lzip.default_level = 9

lzip.compress_to_file('/path/to/output1.lz', b'data to compress') # encoded at level 9
lzip.compress_to_file('/path/to/output2.lz', b'data to compress') # encoded at level 9

lzip.default_level = 0

lzip.compress_to_file('/path/to/output3.lz', b'data to compress') # encoded at level 0

lzip exports the following module-level default values:

default_level = 6
default_word_size = 1
default_chunk_size = 1 << 16
default_member_size = 1 << 51
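The same sentinel pattern can be sketched in isolation (hypothetical names, not part of lzip's API):

```python
# None is used as a sentinel so that the effective default is read
# at call time rather than being frozen at function definition time
default_level = 6

def compress(buffer, level=None):
    if level is None:
        level = default_level
    return (level, buffer)  # stand-in for the real compression

assert compress(b'data') == (6, b'data')
assert compress(b'data', 9) == (9, b'data')
```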

Publish

  1. Bump the version number in setup.py.

  2. Install Cubuzoa in a different directory (https://github.com/neuromorphicsystems/cubuzoa) to build pre-compiled versions for all major operating systems. Cubuzoa depends on VirtualBox (with its extension pack) and requires about 75 GB of free disk space.

cd cubuzoa
python3 cubuzoa.py provision
python3 cubuzoa.py build /path/to/lzip

  3. Install twine:

pip3 install twine

  4. Upload the compiled wheels and the source code to PyPI:

python3 setup.py sdist --dist-dir wheels
python3 -m twine upload wheels/*
