
Fast random access to bzip2 files




This module provides an IndexedBzip2File class, which can be used to seek inside bzip2 files without having to decompress them first. Alternatively, you can use this simply as a parallelized bzip2 decoder as a replacement for Python's builtin bz2 module in order to fully utilize all your cores.

On a 12-core processor, this can yield a speedup of 6 over Python's bz2 module; see the comparison below. Note that without parallelization, indexed_bzip2 is unfortunately slower than Python's bz2 module. Therefore, it is not recommended when neither seeking nor parallelization is used!

The internals are based on an improved version of the bzip2 decoder bzcat from toybox, which was refactored and extended to be able to export and import bzip2 block offsets, seek to block offsets, and to add support for threaded parallel decoding of blocks.

Seeking inside a block is only emulated, so IndexedBzip2File will only speed up seeking when the archive contains more than one block, which is almost always the case for archives larger than 1 MB.
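Emulated seeking can be pictured with Python's standard bz2 module: the hypothetical helper below (not part of either module) decompresses from the start and discards bytes until the target offset is reached, which is all a decoder can do within a single block:

```python
import bz2
import io

def emulated_seek( compressed, target ):
    """Decompress from the start and discard bytes up to `target` --
    the fallback when no finer-grained index is available."""
    f = bz2.BZ2File( io.BytesIO( compressed ) )
    remaining = target
    while remaining > 0:
        chunk = f.read( min( remaining, 64 * 1024 ) )
        if not chunk:
            break
        remaining -= len( chunk )
    return f  # positioned at `target` in the decompressed stream

data = b"0123456789" * 100
f = emulated_seek( bz2.compress( data ), 42 )
print( f.read( 8 ) )  # → b'23456789'
```

With a block offset index, the decoder only has to do this discarding work from the start of the containing block rather than from the start of the whole file.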

Since version 1.2.0, parallel decoding of blocks is supported! However, by default, the older serial implementation is used. To use the parallel implementation, specify a parallelization argument other than 1 to IndexedBzip2File, as shown in the usage examples below.


You can simply install it from PyPI:

python3 -m pip install --upgrade pip  # Recommended for newer manylinux wheels
python3 -m pip install indexed_bzip2

Usage Examples

Simple open, seek, read, and close

import os
from indexed_bzip2 import IndexedBzip2File

file = IndexedBzip2File( "example.bz2", parallelization = os.cpu_count() )

# You can now use it like a normal file
file.seek( 123 )
data = file.read( 100 )
file.close()

The first call to seek will ensure that the block offset list is complete and therefore might have to create it first. Because of this, the first call to seek might take a while.

Use with context manager

import os
import indexed_bzip2 as ibz2

with ibz2.open( "example.bz2", parallelization = os.cpu_count() ) as file:
    file.seek( 123 )
    data = file.read( 100 )

Storing and loading the block offset map

The creation of the list of bzip2 blocks can take a while because it has to decode the bzip2 file completely. To avoid this setup when opening a bzip2 file, the block offset list can be exported and imported.

import indexed_bzip2 as ibz2
import os
import pickle

# Calculate and save bzip2 block offsets
file = ibz2.open( "example.bz2", parallelization = os.cpu_count() )
block_offsets = file.block_offsets()  # can take a while
# block_offsets is a simple dictionary where the keys are the bzip2 block offsets in bits(!)
# and the values are the corresponding offsets in the decoded data in bytes. E.g.:
# block_offsets = {32: 0, 14920: 4796}
with open( "offsets.dat", 'wb' ) as offsets_file:
    pickle.dump( block_offsets, offsets_file )

# Load bzip2 block offsets for fast seeking
with open( "offsets.dat", 'rb' ) as offsets_file:
    block_offsets = pickle.load( offsets_file )
file2 = ibz2.open( "example.bz2", parallelization = os.cpu_count() )
file2.set_block_offsets( block_offsets )  # should be fast
file2.seek( 123 )
data = file2.read( 100 )
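Since the offset map is a plain dict, it can also be inspected directly. A small sketch, using the example values quoted in the comment above, that counts blocks and finds the block containing a given decompressed offset:

```python
# Example offset map: bit offset of each bzip2 block in the compressed
# stream -> byte offset of that block's data in the decompressed stream.
block_offsets = {32: 0, 14920: 4796}

# Number of bzip2 blocks in the archive:
print( len( block_offsets ) )  # → 2

# Bit offset of the block that contains decompressed byte 5000,
# i.e., the last block starting at or before that byte:
target = 5000
bit_offset = max( bits for bits, start in block_offsets.items() if start <= target )
print( bit_offset )  # → 14920
```

This is essentially the lookup that seeking performs internally: find the containing block, jump to its bit offset, and decode from there.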

Open a pure Python file-like object for indexed reading

import io
import os
import indexed_bzip2 as ibz2

with open( "example.bz2", 'rb' ) as file:
    in_memory_file = io.BytesIO( file.read() )

with ibz2.open( in_memory_file, parallelization = os.cpu_count() ) as file:
    file.seek( 123 )
    data = file.read( 100 )

Comparison with bz2 module

These are simple timing tests for reading all the contents of a bzip2 file sequentially.

import bz2
import time

with bz2.open( bz2FilePath ) as file:
    t0 = time.time()
    while file.read( 4*1024*1024 ):
        pass
    t1 = time.time()
    print( f"Decoded file in {t1-t0}s" )

The usage of indexed_bzip2 is slightly different:

import indexed_bzip2
import time

# parallelization = 0 means that it is automatically using all available cores.
with indexed_bzip2.IndexedBzip2File( bz2FilePath, parallelization = 0 ) as file:
    t0 = time.time()
    while file.read( 4*1024*1024 ):
        pass
    t1 = time.time()
    print( f"Decoded file in {t1-t0}s" )
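Both timing loops follow the same pattern and can be factored into a small helper that works with any file-like object (a sketch; `time_sequential_read` is not part of either module):

```python
import time

def time_sequential_read( file, chunk_size = 4 * 1024 * 1024 ):
    """Read a file-like object to exhaustion in fixed-size chunks.

    Returns the total number of bytes read and the elapsed time in seconds.
    """
    t0 = time.time()
    total = 0
    while True:
        chunk = file.read( chunk_size )
        if not chunk:
            break
        total += len( chunk )
    return total, time.time() - t0
```

It can then be called with either `bz2.open( bz2FilePath )` or `indexed_bzip2.IndexedBzip2File( bz2FilePath, parallelization = 0 )` to compare the two decoders on equal terms.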

Results for an AMD Ryzen 3900X 12-core (24 virtual cores) processor with bz2FilePath = CTU-13-Dataset.tar.bz2, a 2 GB bz2-compressed archive.

Module                                     Runtime / s
bz2                                                414
indexed_bzip2 with parallelization = 0              69
indexed_bzip2 with parallelization = 1             566
indexed_bzip2 with parallelization = 2             315
indexed_bzip2 with parallelization = 6             123
indexed_bzip2 with parallelization = 12             79
indexed_bzip2 with parallelization = 24             70
indexed_bzip2 with parallelization = 32             69

The speedup between the bz2 module and indexed_bzip2 with parallelization = 0 is 414/69 = 6. When using only one core, indexed_bzip2 is slower by (566-414)/414 = 37%.
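The quoted figures follow directly from the table above; a quick check of the arithmetic (the runtimes dict below just restates the table):

```python
runtimes = {
    "bz2": 414,
    # indexed_bzip2, keyed by the parallelization argument:
    0: 69, 1: 566, 2: 315, 6: 123, 12: 79, 24: 70, 32: 69,
}

# Speedup of the fully parallel run over the bz2 module:
speedup = runtimes["bz2"] / runtimes[0]
print( round( speedup ) )  # → 6

# Single-core slowdown relative to the bz2 module, in percent:
slowdown_percent = ( runtimes[1] - runtimes["bz2"] ) / runtimes["bz2"] * 100
print( round( slowdown_percent ) )  # → 37
```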

Internal Architecture

Parallelizing the bzip2 decoder and adding support for reading from Python file-like objects required a lot of work to design an architecture that works and can be reasoned about. An earlier architecture was discarded because it became too monolithic. That discarded design could even handle piped, non-seekable input, which the current parallel architecture does not support yet. The serial BZ2Reader still exists but is not shown in the class diagram because it is deprecated and will be removed at some point after the ParallelBZ2Reader has proven itself.

Class Diagram for ParallelBZ2Reader

Tracing the decoder

Performance profiling and tracing are done with Score-P for instrumentation and Vampir for visualization. This is one way to install Score-P with most of its functionality on Debian 10.

# Install PAPI
tar -xf papi-5.7.0.tar.gz
cd papi-5.7.0/src
make -j
sudo make install

# Install Dependencies
sudo apt-get install libopenmpi-dev openmpi gcc-8-plugin-dev llvm-dev libclang-dev libunwind-dev libopen-trace-format-dev otf-trace

# Install Score-P (to /opt/scorep)
tar -xf scorep-6.0.tar.gz
cd scorep-6.0
./configure --prefix=/opt/scorep --with-mpi=openmpi --enable-shared
make -j
make install

# Add /opt/scorep to your path variables on shell start
cat <<EOF >> ~/.bashrc
if test -d /opt/scorep; then
    export SCOREP_ROOT=/opt/scorep
    export PATH=$SCOREP_ROOT/bin:$PATH
fi
EOF

# Check whether it works
scorep --version
scorep-info config-summary

# Actually do the tracing
cd tools

