
TensorFlow utilities for efficient TFRecord processing and random access


tfd-utils

A lightweight Python library for efficient random access to TensorFlow TFRecord files and tar archives, without requiring TensorFlow.

Key Features

  • Unified API: Access TFRecord files and tar archives through the same interface.
  • Random Access: Access any record by key in O(1) time without reading the entire file.
  • Automatic Index Caching: Index is built once and cached to disk; rebuilt automatically when files change.
  • Lightweight & Standalone: TFRecord support requires only numpy, protobuf, and crc32c. No TensorFlow needed.
  • Full TensorFlow Compatibility: Write with tfd_utils, read with TensorFlow (or vice versa). 100% compatible.
  • Multiple File Support: Single files, lists of files, or glob patterns.
  • Tar-to-TFRecord Conversion: CLI tool to batch-convert tar archives to TFRecord format with parallel workers and optional source deletion.
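The automatic index caching described above can be sketched in a few lines of stdlib Python. This is an illustrative sketch only, not the library's actual code: `load_or_build_index` and the pickle-based cache format are assumptions. The idea is simply that the cached index is reused while it is at least as new as the data file, and rebuilt otherwise.

```python
import os
import pickle

# Hypothetical sketch of mtime-based index caching: build the index once,
# persist it next to the data, and rebuild only when the data file has
# been modified since the index was written.
def load_or_build_index(data_path, index_path, build_fn):
    if (os.path.exists(index_path)
            and os.path.getmtime(index_path) >= os.path.getmtime(data_path)):
        with open(index_path, "rb") as f:
            return pickle.load(f)          # cache hit: reuse the saved index
    index = build_fn(data_path)            # cache miss: rebuild from scratch
    with open(index_path, "wb") as f:
        pickle.dump(index, f)
    return index
```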

Installation

pip install tfd-utils

Usage

TFRecord Random Access

from tfd_utils import TFRecordRandomAccess

reader = TFRecordRandomAccess("data.tfrecord")
# or multiple files / glob patterns
reader = TFRecordRandomAccess(["train_*.tfrecord", "val_*.tfrecord"])

image_bytes = reader.get_feature("record_1", "image")
record = reader["record_1"]   # all features as Example protobuf
print(f"Total records: {len(reader)}")
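Under the hood, O(1) lookup works by scanning the file once, caching each record's byte offset, and seeking directly on access. The stdlib-only sketch below illustrates the idea against the TFRecord on-disk framing (uint64 length, 4-byte masked CRC32C of the length, payload, 4-byte masked CRC32C of the payload). It is not the library's code: the real reader indexes records by their key feature and verifies checksums, whereas this sketch indexes by position and writes the CRC fields as zeros to stay dependency-free; all function names are hypothetical.

```python
import struct

def write_records(path, payloads):
    """Write payloads in TFRecord framing, with zeroed CRC placeholders."""
    with open(path, "wb") as f:
        for data in payloads:
            f.write(struct.pack("<Q", len(data)))  # uint64 payload length
            f.write(b"\x00" * 4)                   # length CRC (placeholder)
            f.write(data)                          # payload
            f.write(b"\x00" * 4)                   # payload CRC (placeholder)

def build_index(path):
    """One linear scan: map record number -> (payload offset, length)."""
    index = {}
    with open(path, "rb") as f:
        i = 0
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack("<Q", header)
            f.seek(4, 1)                   # skip length CRC
            index[i] = (f.tell(), length)  # payload starts here
            f.seek(length + 4, 1)          # skip payload + payload CRC
            i += 1
    return index

def read_record(path, index, i):
    """O(1) access: seek straight to the cached offset."""
    offset, length = index[i]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```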

Tar Archive Random Access

Tar archives are expected to contain paired files sharing the same stem:

sa_000001.jpg   →  key='sa_000001', feature='jpg'
sa_000001.json  →  key='sa_000001', feature='json'

Both uncompressed (.tar) and compressed (.tar.gz, etc.) archives are supported.

from tfd_utils import TarRandomAccess

reader = TarRandomAccess("archive.tar")
# or glob / list of tars
reader = TarRandomAccess("sa1b/*.tar")

jpg_bytes  = reader.get_feature("sa_000001", "jpg")
json_bytes = reader.get_feature("sa_000001", "json")
record     = reader["sa_000001"]   # {'jpg': bytes, 'json': bytes}
print(f"Total records: {len(reader)}")

Member paths with subdirectory prefixes are handled automatically: ./subdir/foo.jpg → key subdir/foo.
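The naming convention above can be sketched with the stdlib `tarfile` and `pathlib` modules. This is an illustrative sketch of the convention, not the library's code; `split_member` and `index_tar` are hypothetical names.

```python
import tarfile
from pathlib import PurePosixPath

def split_member(name):
    """Map a tar member name to (key, feature):
    'sa_000001.jpg' -> ('sa_000001', 'jpg'),
    './subdir/foo.jpg' -> ('subdir/foo', 'jpg')."""
    p = PurePosixPath(name)               # normalizes a leading './'
    return str(p.with_suffix("")), p.suffix.lstrip(".")

def index_tar(path):
    """Group tar members into records: {key: {feature: member_name}}."""
    records = {}
    with tarfile.open(path) as tf:
        for member in tf.getmembers():
            if member.isfile():
                key, feature = split_member(member.name)
                records.setdefault(key, {})[feature] = member.name
    return records
```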

Example: SA-1B Dataset

SA-1B tars are gzip-compressed and contain paired .jpg / .json files per image:

import io
import json

from PIL import Image
from tfd_utils import TarRandomAccess

# Point to one or more SA-1B tar files (compressed tars are supported)
reader = TarRandomAccess("/path/to/sa1b/sa_000020.tar")
# or load multiple shards at once
reader = TarRandomAccess("/path/to/sa1b/*.tar")

# Each key is the image ID (e.g. 'sa_226692')
keys = reader.get_keys()
print(f"Images in this shard: {len(keys)}")

key = keys[0]

# Load the JPEG image
jpg_bytes = reader.get_feature(key, "jpg")
image = Image.open(io.BytesIO(jpg_bytes))

# Load the annotation (segmentation masks, bounding boxes, …)
json_bytes = reader.get_feature(key, "json")
annotation = json.loads(json_bytes)
print(f"Image size : {annotation['image']['width']}x{annotation['image']['height']}")
print(f"Masks      : {len(annotation['annotations'])}")

Writing TFRecords

from tfd_utils.writer import TFRecordWriter
from tfd_utils.pb2 import Example, Features, Feature, BytesList

with TFRecordWriter("data.tfrecord") as writer:
    example = Example(features=Features(feature={
        'key':   Feature(bytes_list=BytesList(value=[b'record_1'])),
        'image': Feature(bytes_list=BytesList(value=[b'<image bytes>'])),
    }))
    writer.write(example.SerializeToString())

Common API (both readers)

reader.get_record(key)                    # full record
reader.get_feature(key, feature_name)     # single feature
reader.get_feature_list(key, feature_name)
reader.get_keys()                         # all keys
reader.get_stats()                        # total_records, total_files, ...
reader.contains_key(key)
reader.rebuild_index()

key in reader                             # __contains__
reader[key]                               # __getitem__ (raises KeyError if missing)
len(reader)                               # __len__

with TarRandomAccess("archive.tar") as r: # context manager
    ...

Advanced Options

# TFRecord: custom key feature name (default 'key')
reader = TFRecordRandomAccess("file.tfrecord", key_feature_name="id")

# Both: custom index file location
reader = TFRecordRandomAccess("file.tfrecord", index_file="my.index")
reader = TarRandomAccess("archive.tar", index_file="my.tar_index")

# Both: control parallelism
reader = TarRandomAccess("*.tar", max_workers=8, use_multiprocessing=True)

CLI

tfd list    /path/to/data.tfrecord
tfd extract /path/to/data.tfrecord record_key
tfd get     /path/to/data.tfrecord:record_key:feature_name

Converting Tar Archives to TFRecord

The tfd convert command converts one or more tar archives to TFRecord files. Each record stores one bytes feature per file extension, plus a key feature containing the file stem.

# Convert a single tar
tfd convert /path/to/archive.tar

# Convert all tars in a directory, write to a different output directory
tfd convert /path/to/sa1b/ --output-dir /path/to/output/

# Glob pattern
tfd convert '/path/to/sa1b/sa_0000*.tar' --output-dir /path/to/output/

# Delete each source tar after successful conversion
tfd convert /path/to/sa1b/ --output-dir /path/to/output/ --delete

# Control parallelism (default: 16 workers)
tfd convert /path/to/sa1b/ --output-dir /path/to/output/ --workers 32

Each input foo.tar produces foo.tfrecord in the output directory (default: same directory as the source). A TFRecord produced from SA-1B tars contains these features per record:

Feature  Type   Content
key      bytes  File stem, e.g. sa_226692
jpg      bytes  Raw JPEG image bytes
json     bytes  Annotation JSON (masks, boxes…)

The converted file can then be read back by key:

import json
from tfd_utils import TFRecordRandomAccess

reader = TFRecordRandomAccess("/path/to/output/sa_000000.tfrecord")
jpg_bytes  = reader.get_feature("sa_226692", "jpg")
json_bytes = reader.get_feature("sa_226692", "json")
annotation = json.loads(json_bytes)

TensorFlow Interoperability

Files written with tfd_utils are standard TFRecords and can be consumed directly by tf.data:

import tensorflow as tf

dataset = tf.data.TFRecordDataset("data.tfrecord")
for record in dataset:
    example = tf.train.Example()
    example.ParseFromString(record.numpy())
    # features written by tfd_utils (e.g. 'key', 'image') are now
    # accessible via example.features.feature

License

MIT License

Project details

Latest release: tfd_utils 0.4.1, available as a source distribution (tfd_utils-0.4.1.tar.gz) and a Python 3 wheel (tfd_utils-0.4.1-py3-none-any.whl), published via Trusted Publishing from publish.yml on HarborYuan/tfd-utils.