LZ4Frame library for Python (via C bindings)
Installing / packaging
```shell
# To get from PyPI
pip3 install py-lz4framed_ph4

# To only build extension modules inline (e.g. in repository)
python3 setup.py build_ext -i

# To build & install globally
python3 setup.py install

# To install locally with pip
pip install --upgrade --find-links=. .
```
- The above, as well as all other python3-using commands, should also run with Python 2.7+
- This fork is based on https://github.com/Iotic-Labs/py-lz4framed
This fork has several improvements I needed for my other project.
- Streamed decompression continuation (on reconnect)
- Streamed decompression state clone - checkpointing
- Streamed decompression marshalling - failure recovery, checkpointing
More on improvements
The scenario these improvements address is downloading and decompressing a large LZ4 data stream on the fly (hundreds of GB). If the download stream is interrupted, the original decompressor has no way to resume decompression where it stopped.
The main motivation is to recover from such interruptions. The Decompressor object now supports changing the file-like object it reads from, so if the input socket stream goes down, we can reconnect and continue from the position where it stopped. See the test test_decompressor_fp_continuation.
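The fork's actual resume API is shown in test_decompressor_fp_continuation; the pattern itself can be sketched with the standard library's zlib, whose decompressobj likewise carries all decompression state between calls, so the input stream can be swapped out mid-stream (everything below is illustrative, not part of py-lz4framed):

```python
import io
import zlib

payload = b"large streamed payload " * 200
compressed = zlib.compress(payload)

decomp = zlib.decompressobj()
out = bytearray()
offset = 0  # bytes of compressed input consumed so far

# First connection: read part of the stream, then "lose" it.
stream = io.BytesIO(compressed)
while offset < len(compressed) // 2:
    chunk = stream.read(16)
    out += decomp.decompress(chunk)
    offset += len(chunk)

# Reconnect: a fresh stream seeked to where we stopped; the
# decompressor object itself needs no reset to continue.
stream = io.BytesIO(compressed)
stream.seek(offset)
while True:
    chunk = stream.read(16)
    if not chunk:
        break
    out += decomp.decompress(chunk)
out += decomp.flush()
```

The key point is that only the byte offset of consumed input needs tracking outside the decompressor.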
If the processing logic is more complex, you can use clone_decompression_context to clone the decompressor context (the whole decompression state) and revert to this checkpoint if something breaks. See the test test_decompressor_fp_clone.
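clone_decompression_context is specific to this fork; the checkpoint-and-revert idea can be sketched with the standard library's zlib, whose Decompress.copy() similarly clones the full decompression state (a minimal illustration, not py-lz4framed code):

```python
import zlib

payload = b"checkpointed stream " * 200
compressed = zlib.compress(payload)

decomp = zlib.decompressobj()
out = decomp.decompress(compressed[:50])

# Checkpoint: clone the whole decompression state.
checkpoint = decomp.copy()
out_at_checkpoint = bytes(out)

# Speculative processing that turns out to fail part-way...
_ = decomp.decompress(compressed[50:80])

# ...so revert to the checkpoint and redo from there.
decomp = checkpoint
out = out_at_checkpoint + decomp.decompress(compressed[50:])
out += decomp.flush()
```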
To recover also from program crashes, you can marshal / serialize the decompressor context to a byte string, which can later be unmarshalled / deserialized to continue from that point. The marshalled state can be stored e.g. in a file. See the test test_decompressor_fp_marshalling.
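The crash-recovery flow can be sketched with a toy stateful decoder standing in for the real decompressor context, and pickle standing in for the fork's marshalling format (both are illustrative assumptions; the real API is shown in test_decompressor_fp_marshalling):

```python
import pickle

class ToyDecoder:
    """Stand-in for a decompression context: holds the small
    rolling state that must survive a crash."""
    def __init__(self):
        self.offset = 0  # bytes of input consumed so far

    def update(self, chunk):
        self.offset += len(chunk)
        return chunk.swapcase()  # trivial placeholder "decoding"

data = b"AbCdEfGh" * 4
dec = ToyDecoder()
out = dec.update(data[:8])

# Marshal the context to bytes; in practice this would be
# written to a checkpoint file alongside the output so far.
state = pickle.dumps(dec)

del dec  # simulate a crash

# Recover: unmarshal and continue from the recorded offset.
dec = pickle.loads(state)
out += dec.update(data[dec.offset:])
```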
Random access archive
Situation: an 800 GB LZ4-compressed file. You want random access to the file so it can be map/reduced or processed in parallel from different offsets.
The marshalled decompressor state takes only the required amount of memory. If the state dump is performed on block boundaries (i.e., when the size hint from the previous call was satisfied by the input stream), the marshalled size is only 184 B in the best case, and 66 kB in the worst case, when the LZ4 file uses linked-block mode.
When state marshalling returns such a small state, the application can build a meta file holding the mapping: position in the input stream -> decompressor context. With this meta file, a new decompressor can jump to a particular checkpoint.
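A minimal sketch of the meta-file idea, using stdlib zlib with independently compressed blocks (the analogue of LZ4's non-linked block mode, where checkpoints stay tiny); the names and block size are illustrative:

```python
import zlib

BLOCK = 1024
payload = bytes(range(256)) * 64  # 16 KiB of sample data

# Compress block-by-block and record the meta mapping:
# block index -> (offset, length) within the compressed stream.
compressed = bytearray()
meta = {}
for i in range(0, len(payload), BLOCK):
    blob = zlib.compress(payload[i:i + BLOCK])
    meta[i // BLOCK] = (len(compressed), len(blob))
    compressed += blob

def read_block(n):
    """Random access: jump straight to block n via the meta file."""
    off, length = meta[n]
    return zlib.decompress(bytes(compressed[off:off + length]))
```

In the real scenario the meta file would map input-stream offsets to marshalled decompressor contexts instead of plain (offset, length) pairs, but the lookup-then-decompress flow is the same.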
One-shot compression and decompression:

```python
import lz4framed

compressed = lz4framed.compress(b'binary data')
uncompressed = lz4framed.decompress(compressed)
```
To iteratively compress (to a file or e.g. BytesIO instance):
```python
with open('myFile', 'wb') as f:
    # Context automatically finalises frame on completion, unless an exception occurs
    with Compressor(f) as c:
        try:
            while (...):
                c.update(moreData)
        except Lz4FramedNoDataError:
            pass
```
To decompress from a file-like object:
```python
with open('myFile', 'rb') as f:
    try:
        for chunk in Decompressor(f):
            decoded.append(chunk)
    except Lz4FramedNoDataError:
        # Compressed frame data incomplete - error case
        ...
```
See also lz4framed/__main__.py for example usage.
```python
import lz4framed

print(lz4framed.__version__, lz4framed.LZ4_VERSION, lz4framed.LZ4F_VERSION)

help(lz4framed)
```
```shell
python3 -mlz4framed
USAGE: lz4framed (compress|decompress) (INFILE|-) [OUTFILE]

(De)compresses an lz4 frame. Input is read from INFILE unless set to '-', in
which case stdin is used. If OUTFILE is not specified, output goes to stdout.
```
```shell
python3 -m unittest discover -v .
```
The only lz4-frame-interoperable implementation I was aware of at the time of writing (lz4tools) had the following limitations:
- Incomplete implementation, e.g. reference & memory leaks on failure
- Lacked unit tests
- Was not thread safe
- Did not release the GIL during low-level (de)compression operations
- Did not address the requirements of my external project