Granular: Fast format for datasets

Granular is a library for reading and writing multimodal datasets. Each dataset is a collection of linked files in the bag file format, a simple seekable container structure.

Features

  • 🚀 Performance: Minimal overhead for maximum read and write throughput.
  • 🔎 Seeking: Fast random access from disk by datapoint index.
  • 🎞️ Sequences: Datapoints can contain seekable lists of modalities.
  • 🤸 Flexibility: User provides encoders and decoders; examples available.
  • 👥 Sharding: Store datasets in shards to split processing workloads.

Installation

pip install granular

Quickstart

Writing

import granular
import msgpack

directory = 'directory'  # Output directory for the dataset shards.

spec = {
    'foo': 'int',      # integer
    'bar': 'utf8[]',   # list of strings
    'baz': 'msgpack',  # packed structure
}

# Or use the provided `granular.encoders`.
encoders = {
    'int': lambda x: x.to_bytes(8, 'little'),
    'utf8': lambda x: x.encode('utf-8'),
    'msgpack': msgpack.packb,
}

with granular.ShardedDatasetWriter(
    directory, spec, encoders, shardlen=1000) as writer:
  for i in range(2500):
    writer.append({'foo': 42, 'bar': ['hello', 'world'], 'baz': {'a': 1}})

Files

$ tree directory
.
├── 000000
│  ├── spec.json
│  ├── refs.bag
│  ├── foo.bag
│  ├── bar.bag
│  └── baz.bag
├── 000001
│  ├── spec.json
│  ├── refs.bag
│  ├── foo.bag
│  ├── bar.bag
│  └── baz.bag
└── ...

Reading

# Or use the provided `granular.decoders`.
decoders = {
    'int': lambda x: int.from_bytes(x, 'little'),
    'utf8': lambda x: x.decode('utf-8'),
    'msgpack': msgpack.unpackb,
}

with granular.ShardedDatasetReader(directory, decoders) as reader:
  print(len(reader))    # Number of datapoints in the dataset.
  print(reader.size)    # Dataset size in bytes.
  print(reader.shards)  # Number of shards.

  # Read data points by index. This will read only the relevant bytes from
  # disk. An additional small read is used when caching index tables is
  # disabled, supporting arbitrarily large datasets with minimal overhead.
  assert reader[0] == {'foo': 42, 'bar': ['hello', 'world'], 'baz': {'a': 1}}

  # Read a subset of keys of a datapoint. For example, this allows quickly
  # iterating over the metadata fields of all datapoints without accessing
  # expensive image or video modalities.
  mask = {'foo': True, 'baz': True}
  assert reader[0, mask] == {'foo': 42, 'baz': {'a': 1}}

  # Read only a slice of the 'bar' list. Only the requested slice will be
  # fetched from disk. For example, this could be used to load a subsequence
  # of a long video that is stored as a list of consecutive MP4 clips.
  mask = {'bar': range(1, 2)}
  assert reader[0, mask] == {'bar': ['world']}

For small datasets where sharding is not necessary, you can also use DatasetReader and DatasetWriter.
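
For example, a minimal unsharded round trip might look like the following sketch, which reuses the spec, encoders, and decoders from the quickstart and assumes that DatasetWriter and DatasetReader mirror the sharded API without the shard arguments:

small_directory = 'small'  # Hypothetical output path for an unsharded dataset.

with granular.DatasetWriter(small_directory, spec, encoders) as writer:
  writer.append({'foo': 42, 'bar': ['hello'], 'baz': {'a': 1}})

with granular.DatasetReader(small_directory, decoders) as reader:
  assert reader[0]['foo'] == 42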

For distributed processing using multiple processes or machines, use ShardedDatasetReader and ShardedDatasetWriter and set shardstart to the worker index and shardstep to the total number of workers.
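
As a sketch, a worker in such a pool might be configured as follows; the worker index and count are hypothetical values that would normally come from the job scheduler, and shardstart and shardstep are assumed to be accepted as keyword arguments:

worker_index = 3  # Hypothetical index of this worker.
num_workers = 8   # Hypothetical total number of workers.

with granular.ShardedDatasetReader(
    directory, decoders,
    shardstart=worker_index, shardstep=num_workers) as reader:
  for i in range(len(reader)):
    datapoint = reader[i]  # Each worker sees only its own shards.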

Formats

Granular does not impose a serialization solution on the user. Any strings can be used as type names, as long as their encoder and decoder functions are provided.

Examples of common encode and decode functions are provided in formats.py. These support Numpy arrays, JPG and PNG images, MP4 videos, and more. They can be used as granular.encoders and granular.decoders.
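
As an illustration of a custom type, the following sketch wires up an 'array' format for NumPy arrays; the type name and helper functions are illustrative, not part of the library:

import io

import numpy as np

def encode_array(value):
  buffer = io.BytesIO()
  np.save(buffer, value)  # Stores shape and dtype along with the data.
  return buffer.getvalue()

def decode_array(data):
  return np.load(io.BytesIO(data))

spec = {'image': 'array'}
encoders = {'array': encode_array}
decoders = {'array': decode_array}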

Questions

If you have a question, please file an issue.
