Säckli
This is a friendly fork of bagz.
Additions so far:
- Merge some PRs, such as the S3 support PR by @KefanXIAO, and compile fixes.
- Add `access_pattern` and `cache_policy` reader hints:
  - On POSIX filesystems, this can add `mmap` hints or use `pread` to optimize for random access and larger-than-RAM data.
- Make it compatible with Python versions past 3.13.
- Make it compatible with free-threading (nogil) Python.
- Add CI, stress-tests and automatic wheel releases to PyPI.
Versioning of this fork is detached from the original bagz library at the point it was forked (v0.2.0).
Overview
Säckli is a format for storing a sequence of byte-array records. It supports per-record compression and fast index-based lookup. All indexing is zero based.
Installation
The recommended installation on Linux is via the pre-built wheels on PyPI:
uv pip install sackli
If you want to build locally to work on this, just `uv pip install .`.
However, building can be slow because of GCS and S3 support;
to skip both of these dependencies for much faster builds, you can do:
CMAKE_ARGS="-DSACKLI_ENABLE_GCS=OFF -DSACKLI_ENABLE_S3=OFF" uv pip install .
Python API
Python Reader
Reader for reading a single or sharded Säckli file-set.
from collections.abc import Sequence, Iterable
import sackli
import numpy as np
# Säckli Readers support random access. The order of elements within a Säckli
# file is the order in which they are written. Records are returned as `bytes`
# objects.
data = sackli.Reader('/path/to/data.bagz')
# Säckli Readers can be configured like this - here we require that the file was
# written with separate limits.
data_separate_limits = sackli.Reader('/path/to/data.bagz', sackli.Reader.Options(
    limits_placement=sackli.LimitsPlacement.SEPARATE,
))
# Säckli Readers are Sequences and support slicing, iterating, etc.
assert isinstance(data, Sequence)
# Säckli Readers have a length.
assert len(data) > 10
# Can access record by row-index.
fifth_value: bytes = data[5]
# Can slice.
data_from_5: sackli.Reader = data[5:]
# Slices are still Readers.
assert isinstance(data_from_5, sackli.Reader)
assert data_from_5[0] == fifth_value
# Can access records by multiple row-indices.
fourth, second, tenth = data.read_indices([4, 2, 10])
assert fourth == data[4]
assert second == data[2]
assert tenth == data[10]
# Can iterate records.
for record in data:
  do_something_else(record)
# Can read all records. This eager version can be faster than iteration.
all_records = data.read()
# Can iterate sub-range of records.
for record in data[4:9]:
  do_something_else(record)
# Can read a sub-range of records. This eager form can be faster than
# iteration.
sub_range = data[4:9].read()
# Can use an infinite iterator as source of indices. (Reads ahead in parallel.)
def my_generator(size: int) -> Iterable[int]:
  rng = np.random.default_rng(42)
  while True:
    yield rng.integers(size).item()
data_iter: Iterable[bytes] = data.read_indices_iter(my_generator(len(data)))
for i in range(10):
  random_item: bytes = next(data_iter)
Python Reader - Index and MultiIndex
You can use Index to find the first index of a record and MultiIndex to find
all instances of an item.
keys = sackli.Reader('/path/to/keys.bag')
# Get the index of the first occurrence of key.
index = sackli.Index(keys)
key_index: int = index[b'example_key']
# Get all occurrences of key.
multi_index = sackli.MultiIndex(keys)
all_indices: list[int] = multi_index[b'example_key']
Python Writer
For writing a single Säckli file.
Example:
import sackli
# Compression is selected based on the file extension:
# `.bagz` will use Zstandard compression with default settings.
# `.bag` will use no compression.
with sackli.Writer('/path/to/data.bagz') as writer:
  for d in generate_records():
    writer.write(d)
# Adjust compression level explicitly.
# Note this will no longer use the extension to determine whether to compress.
with sackli.Writer(
    '/path/to/data.bagz',
    sackli.Writer.Options(
        compression=sackli.CompressionZstd(level=3)
    ),
) as writer:
  for d in generate_records():
    writer.write(d)
Options
Reader Options
sackli.Reader.Options has these optional arguments.
- `compression`: Can be one of:
  - `sackli.CompressionAutoDetect()`: Default - uses the file extension to decide whether to decompress. (`.bagz` - compressed (Zstandard), `.bag` - uncompressed)
  - `sackli.CompressionNone()`: Records are not decompressed.
  - `sackli.CompressionZstd()`: Records are decompressed using Zstandard.
- `limits_placement`: Can be one of:
  - `sackli.LimitsPlacement.TAIL`: Default - reads limits from the tail of the file.
  - `sackli.LimitsPlacement.SEPARATE`: Reads limits from a separate file.
- `limits_storage`: Can be one of:
  - `sackli.LimitsStorage.ON_DISK`: Default - reads limits from disk for each read.
  - `sackli.LimitsStorage.IN_MEMORY`: Reads all limits from disk in one go.
- `access_pattern`: Can be one of:
  - `sackli.AccessPattern.SYSTEM`: Default - no specific hint to the OS.
  - `sackli.AccessPattern.RANDOM`: Hints that you read entries in random order.
  - `sackli.AccessPattern.SEQUENTIAL`: Hints that you read entries roughly sequentially.
- `cache_policy`: Can be one of:
  - `sackli.CachePolicy.SYSTEM`: Default - no specific hint to the OS.
  - `sackli.CachePolicy.DROP_AFTER_READ`: Reads data in such a way that the OS is unlikely to hold any of it in cache. On POSIX filesystems, this means using `pread` with specific flags. This is more efficient when you read more data than your RAM before doing any repeats (i.e. epoch > RAM).
- `max_parallelism`: Default number of threads when reading many records.
- `sharding_layout`: Can be one of the layouts described under Sharding below (concatenated or interleaved).
`access_pattern` and `cache_policy` are currently interpreted only for local
POSIX files and influence OS-level behaviour such as page caching.
For tail-formatted files, `cache_policy=DROP_AFTER_READ` opens a second POSIX
read handle to the same file so that limits-metadata reads keep the default
cache policy.
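As a configuration sketch, the two hints might be combined like this for epoch-style random reads over a dataset larger than RAM (the file path is a placeholder; the option fields are those documented above):

```python
import sackli

# Hypothetical file; hint that reads come in random order and that the OS
# should not try to keep the data in its page cache between reads.
reader = sackli.Reader(
    '/path/to/large_data.bagz',
    sackli.Reader.Options(
        access_pattern=sackli.AccessPattern.RANDOM,
        cache_policy=sackli.CachePolicy.DROP_AFTER_READ,
    ),
)
```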
Writer Options
sackli.Writer.Options has these optional arguments.
- `compression`: Can be one of:
  - `sackli.CompressionAutoDetect()`: Default - uses the file extension to decide whether to compress. (`.bagz` - compressed (Zstandard), `.bag` - uncompressed)
  - `sackli.CompressionNone()`: Records are not compressed.
  - `sackli.CompressionZstd(level=3)`: Records are compressed using Zstandard; the compression level can be specified.
- `limits_placement`: Can be one of:
  - `sackli.LimitsPlacement.TAIL`: Default - writes limits to a tail of the file.
  - `sackli.LimitsPlacement.SEPARATE`: Writes limits to a separate file.
Apache Beam Support
Säckli also provides Apache Beam connectors for reading and writing Säckli files in Beam pipelines.
Ensure you have Apache Beam installed.
uv pip install apache_beam
Säckli Source
import apache_beam as beam
from sackli.beam import sacklio
import tensorflow as tf
with beam.Pipeline() as pipeline:
  examples = (
      pipeline
      | 'ReadData' >> sacklio.ReadFromSackli('/path/to/your/data@*.bagz')
      | 'Decode' >> beam.Map(tf.train.Example.FromString)
  )
  # Continue your pipeline.
Säckli Sink
import apache_beam as beam
from sackli.beam import sacklio
import tensorflow as tf

def create_tf_example(data):
  # Replace with your actual feature creation logic.
  feature = {
      'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[data])),
  }
  return tf.train.Example(features=tf.train.Features(feature=feature))

with beam.Pipeline() as pipeline:
  data = [b'record1', b'record2', b'record3']
  examples = (
      pipeline
      | 'CreateData' >> beam.Create(data)
      | 'Encode' >> beam.Map(lambda x: create_tf_example(x).SerializeToString())
      | 'WriteData' >> sacklio.WriteToSackli('/path/to/output/data@*.bagz')
  )
GCS Support
Säckli supports POSIX file-systems and Google Cloud Storage (GCS). Files on
GCS can be accessed by prefixing the path with `gs://` (a leading slash,
`/gs://`, is also supported for pathlib compatibility). These examples assume
you have the gcloud CLI installed.
From the shell:
gcloud config set project your-project-name
gcloud auth application-default login
Then use the `gs://` file-system prefix.
import pathlib
import sackli
# (This may freeze if you have not configured the project.)
reader = sackli.Reader('gs://your-bucket-name/your-file.bagz')
# Path supports a leading slash to work well with pathlib.
bucket = pathlib.Path('/gs://your-bucket-name')
reader = sackli.Reader(bucket / 'your-file.bagz')
Sharding
An ordered collection of Säckli-formatted files ("shards") may be opened together
and indexed via a single global-index. The global-index is mapped to a
shard and an index within that shard (shard-index) in one of two ways:
- Concatenated (default). Indexing is equivalent to the records in each Säckli-formatted shard being concatenated into a single sequence of records.

  Example: When opening four Säckli-formatted files with sizes [8, 4, 0, 5], the global-index with range [0, 17) (shown as the table entries) maps to shard and shard-index like this:

  ```
                 | shard-index
  shard          | 0  1  2  3  4  5  6  7
  -------------- | -----------------------
  00000-of-00004 | 0  1  2  3  4  5  6  7
  00001-of-00004 | 8  9 10 11
  00002-of-00004 |
  00003-of-00004 | 12 13 14 15 16
  ```

  Mappings:

  | global-index | shard | shard-index |
  |---|---|---|
  | 0 | 00000-of-00004 | 0 |
  | 1 | 00000-of-00004 | 1 |
  | 2 | 00000-of-00004 | 2 |
  | ... | ... | ... |
  | 8 | 00001-of-00004 | 0 |
  | 9 | 00001-of-00004 | 1 |
  | ... | ... | ... |
  | 15 | 00003-of-00004 | 3 |
  | 16 | 00003-of-00004 | 4 |

- Interleaved, where the global-index is interleaved in a round-robin manner across all the shards.

  Example: When opening three Säckli-formatted files with sizes [6, 6, 5], the global-index with range [0, 17) (shown as the table entries) maps to shard and shard-index like this:

  ```
                 | shard-index
  shard          | 0  1  2  3  4  5
  -------------- | -----------------
  00000-of-00003 | 0  3  6  9 12 15
  00001-of-00003 | 1  4  7 10 13 16
  00002-of-00003 | 2  5  8 11 14
  ```

  Mappings:

  | global-index | shard | shard-index |
  |---|---|---|
  | 0 | 00000-of-00003 | 0 |
  | 1 | 00001-of-00003 | 0 |
  | 2 | 00002-of-00003 | 0 |
  | ... | ... | ... |
  | 6 | 00000-of-00003 | 2 |
  | 7 | 00001-of-00003 | 2 |
  | 8 | 00002-of-00003 | 2 |
  | ... | ... | ... |
  | 15 | 00000-of-00003 | 5 |
  | 16 | 00001-of-00003 | 5 |
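The two mappings above can be sketched in plain Python. `concat_lookup` and `interleaved_lookup` are hypothetical helpers, not part of the sackli API, and the interleaved version assumes shard sizes that differ by at most one, as in the example:

```python
def concat_lookup(sizes: list[int], g: int) -> tuple[int, int]:
  """Maps a global-index to (shard, shard-index) in the concatenated layout."""
  for shard, size in enumerate(sizes):
    if g < size:
      return shard, g
    g -= size  # Skip past this shard's records.
  raise IndexError('global-index out of range')

def interleaved_lookup(num_shards: int, g: int) -> tuple[int, int]:
  """Maps a global-index to (shard, shard-index) in the interleaved layout."""
  # Round-robin: the shard cycles; the shard-index advances once per cycle.
  return g % num_shards, g // num_shards

# Reproduce entries from the mapping tables above.
assert concat_lookup([8, 4, 0, 5], 8) == (1, 0)
assert concat_lookup([8, 4, 0, 5], 16) == (3, 4)
assert interleaved_lookup(3, 7) == (1, 2)
assert interleaved_lookup(3, 16) == (1, 5)
```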
Säckli file format
The Säckli file format has two parts: the records section and the limits
section.
- The `records` section consists of the concatenation of all (possibly compressed) records. (There are no additional bytes inside or between records, and records are not aligned in any way.)
- The `limits` section is a dense array of the end-offsets of each record in order, encoded as little-endian 64-bit unsigned integers.
These can be stored with tail-limits in one file, where the limits section is
appended to the records section, or with separate-limits, where the two
sections are stored in separate files.
Tail-limits example
Given a Säckli-formatted file with the following 3 uncompressed records:
| Records |
|---|
| abcdef |
| 123 |
| catcat |
The raw bytes of the Säckli-formatted file corresponding to the records above:
0x61 a 0x62 b 0x63 c 0x64 d 0x65 e 0x66 f
0x31 1 0x32 2 0x33 3
0x63 c 0x61 a 0x74 t 0x63 c 0x61 a 0x74 t
0x06 0x00 0x00 0x00 0x00 0x00 0x00 0x00 # 6 byte offset
0x09 0x00 0x00 0x00 0x00 0x00 0x00 0x00 # 9 byte offset
0x0f 0x00 0x00 0x00 0x00 0x00 0x00 0x00 # 15 byte offset
The last 8 bytes represent the end-offset of the last record. This is also the
start of the limits section. Therefore reading the last 8 bytes will directly
tell you the offset of the records/limits boundary.
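The layout above can be exercised with a short, self-contained sketch. `parse_tail_limits` is a hypothetical helper, not the library's implementation, and assumes uncompressed records:

```python
import struct

def parse_tail_limits(blob: bytes) -> list[bytes]:
  """Splits a tail-limits Säckli blob into its records."""
  # The last 8 bytes hold the end-offset of the final record, which is
  # also where the limits section starts.
  (records_end,) = struct.unpack('<Q', blob[-8:])
  limits = blob[records_end:]
  offsets = struct.unpack(f'<{len(limits) // 8}Q', limits)
  # Each record spans from the previous end-offset to its own.
  records, start = [], 0
  for end in offsets:
    records.append(blob[start:end])
    start = end
  return records

# The example file from above: three records plus three uint64 end-offsets.
blob = b'abcdef' + b'123' + b'catcat' + struct.pack('<3Q', 6, 9, 15)
assert parse_tail_limits(blob) == [b'abcdef', b'123', b'catcat']
```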