
TFRecord reader, writer, and PyTorch Dataset.


This library allows reading and writing TFRecord files efficiently in Python, and provides an IterableDataset interface for TFRecord files in PyTorch. Both uncompressed and gzip-compressed TFRecord files are supported.

This library is modified from tfrecord to remove its binding to tf.Example and to support generic TFRecord data.

Installation

pip install tfrecord-dataset

Usage

Basic read & write

import tfrecord_dataset as tfr

writer = tfr.TFRecordWriter('test.tfrecord')
writer.write(b'Hello world!')
writer.write(b'This is a test.')
writer.close()

for x in tfr.tfrecord_iterator('test.tfrecord'):
    print(bytes(x))

TFRecordDataset

Use TFRecordDataset to read TFRecord files in PyTorch.

import torch
from tfrecord_dataset.torch import TFRecordDataset

dataset = TFRecordDataset('test.tfrecord', transform=lambda x: len(x))
loader = torch.utils.data.DataLoader(dataset, batch_size=2)

data = next(iter(loader))
print(data)

Sharded TFRecords

The following TFRecordDataset reads TFRecord data from 8 files in parallel. The names of these 8 files match the pattern data-0000?-of-00008.

dataset = TFRecordDataset('data@8', transform=lambda x: len(x))

Data transformation

The reader reads TFRecord payload as bytes. You can pass a callable as the transform argument for parsing the bytes into the desired format, as shown in the simple example above. You can use such transformation for parsing serialized structured data, e.g. protobuf, numpy arrays, images, etc.

Here is another example for reading and decoding images:

import cv2

dataset = TFRecordDataset(
    'data.tfrecord',
    transform=lambda x: {'image':  cv2.imdecode(x, cv2.IMREAD_COLOR)})
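As another illustration, here is a minimal sketch of a transform for records that were serialized as raw numpy bytes. The float32 dtype and the `decode_array` name are assumptions for this example; adapt them to however your data was actually serialized:

```python
import numpy as np

def decode_array(raw):
    # Reinterpret the record payload (bytes or buffer) as a float32 vector.
    # Assumption: the record was written with ndarray.tobytes() as float32.
    return np.frombuffer(raw, dtype=np.float32)

# Round-trip check: serialize a vector, then decode it.
payload = np.array([1.0, 2.0, 3.0], dtype=np.float32).tobytes()
print(decode_array(payload))
```

Such a function can then be passed as `transform=decode_array` when constructing TFRecordDataset.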

Shuffling the data

TFRecordDataset automatically shuffles the data through two mechanisms:

  1. It reads data into a buffer and randomly yields data from this buffer. Setting the buffer to a larger size (buffer_size) produces better randomness.

  2. For sharded TFRecords, it reads multiple files in parallel. Setting file_parallelism to a larger number also produces better randomness.
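The effect of the buffer in the first mechanism can be seen with a small stdlib-only sketch. This is an illustration of the shuffle-buffer idea, not the library's actual implementation:

```python
import random

def shuffle_buffer(items, buffer_size, seed=None):
    """Yield items in approximately random order using a fixed-size buffer."""
    rng = random.Random(seed)
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) >= buffer_size:
            # Pick a random element, swap it to the end, and yield it.
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    # Flush whatever remains in a random order.
    rng.shuffle(buf)
    yield from buf

print(list(shuffle_buffer(range(10), buffer_size=4, seed=0)))
```

With buffer_size=1 the items come out in their original order; as the buffer grows toward the dataset size, the output approaches a full shuffle, which is why a larger buffer_size produces better randomness.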

Index

Index files have been deprecated since v0.2.0 and are no longer required.

For older versions, an index file can be generated with:

python -m tfrecord_dataset.tools.tfrecord2idx <tfrecord path> <index path>

Infinite and finite dataset

By default, TFRecordDataset is infinite, meaning that it samples the data forever. You can make it finite by setting num_epochs.

dataset = TFRecordDataset(..., num_epochs=2)

Acknowledgements

This repo is forked from https://github.com/vahidk/tfrecord.
