
High-performance O(1) random access indexer for Parquet datasets in PyTorch

Project description



Indexed Parquet Dataset

Indexed Parquet Dataset is a high-performance Python library for O(1) random access to massive datasets in Parquet format.

It is specifically optimized for Deep Learning (PyTorch), consumes minimal memory, and supports advanced features such as Schema Evolution (working with files of different schemas in a single dataset).

Key Features

  • ⚡ O(1) Random Access: Instantly navigate to any row in a multi-gigabyte dataset without scanning files.
  • 🔄 Schema Evolution: Work with datasets where files have different schemas, missing columns, or renamed fields.
  • 📦 Lazy Loading: Files are opened only when data is requested. Features an efficient LRU handle cache.
  • 🔥 PyTorch Integration: Native support for torch.utils.data.Dataset, including adaptive collate_fn generation.
  • 🛠️ Fluent API: Method chaining: shuffle (global or locality-aware), filter, alias, split, limit, rename, cast, map.
  • 💾 Index Persistence: Save and fast-load the index from a file.
  • 🏗️ Materialization: "Bake" all transformations into new Parquet files via clone().
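To illustrate the schema-evolution feature, here is a minimal conceptual sketch of the idea, not the library's implementation: rows coming from files with diverging schemas are padded to the union of all columns, with missing fields filled as None. The `unify_rows` helper is hypothetical.

```python
# Conceptual sketch of schema evolution (illustrative, not the library's code):
# rows from files with different schemas are padded to a unified schema,
# with missing columns filled as None.

def unify_rows(rows):
    """Return rows padded to the union of all keys seen across files."""
    unified = sorted({key for row in rows for key in row})
    return [{key: row.get(key) for key in unified} for row in rows]

# Rows as they might appear in two Parquet files with diverging schemas:
old_file_row = {"id": 1, "text": "hello"}
new_file_row = {"id": 2, "text": "world", "score": 0.9}

for row in unify_rows([old_file_row, new_file_row]):
    print(row)
```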

Architecture

The library remains lightweight, storing only metadata and a row map in RAM:

graph TD
    subgraph RAM ["Application (RAM - Lightweight)"]
        direction TB
        subgraph DS ["IndexedParquetDataset"]
            Indices["Indices Array [np.ndarray]<br/>(Shuffled/Filtered indices)"]
            Meta["Metadata & Schema<br/>(File offsets, column mapping)"]
            Cache["File Handle Cache<br/>(Lazy Loading LRU)"]
        end
        
        User["User Code / PyTorch DataLoader"] -- "dataset[idx]" --> Indices
        Indices -- "Global Index" --> Meta
        Meta -- "Find File & Row Offset" --> Cache
    end
    
    subgraph Storage ["Storage (HDD/SSD/S3-over-FUSE)"]
        F1["data_part_1.parquet"]
        F2["data_part_2.parquet"]
        FN["data_part_N.parquet"]
    end
    
    Cache -- "Lazy Read" --> F1
    Cache -- "Lazy Read" --> F2
    Cache -- "Lazy Read" --> FN
    
    F1 -. "O(1) Row Retrieval" .-> User
    F2 -. "O(1) Row Retrieval" .-> User
    FN -. "O(1) Row Retrieval" .-> User
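The lookup path in the diagram can be sketched in a few lines. This is an illustration of the general technique, not the library's internals: given per-file row counts, a cumulative-offset array maps a global row index to a (file, local row) pair via a binary search over file offsets, so no file is ever scanned regardless of dataset size.

```python
import numpy as np

# Sketch of the "Global Index -> Find File & Row Offset" step above
# (illustrative only, not the library's internals).

row_counts = np.array([1000, 2500, 750])                # rows per Parquet file
offsets = np.concatenate(([0], np.cumsum(row_counts)))  # [0, 1000, 3500, 4250]

def locate(global_idx: int) -> tuple[int, int]:
    """Map a global row index to (file_number, row_within_file)."""
    file_no = int(np.searchsorted(offsets, global_idx, side="right")) - 1
    return file_no, global_idx - int(offsets[file_no])

print(locate(0))     # (0, 0)
print(locate(1500))  # (1, 500)
print(locate(4249))  # (2, 749)
```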

Installation

From PyPI:

pip install indexed-parquet-dataset

For PyTorch support:

pip install "indexed-parquet-dataset[torch]"

Quickstart

Basic Initialization

from indexed_parquet_dataset import IndexedParquetDataset

# Scans the folder and builds a global row index
ds = IndexedParquetDataset.from_folder("./path/to/data")

print(f"Total rows: {len(ds)}")
print(f"First row: {ds[0]}") # {'id': 1, 'text': '...', ...}

# Random access to any row is instant
sample = ds[999_999]

Transformations (Fluent API)

ds = (IndexedParquetDataset.from_folder("./data")
      .filter(lambda x: x["score"] > 0.5)
      .shuffle(seed=42, rg_buffer=32) # Locality-aware shuffle for best I/O performance
      .alias("text_len", lambda x: len(x["text"]))
      .limit(10000))

# Each row now has a virtual 'text_len' column
print(ds[0]["text_len"])
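As a rough sketch of what a locality-aware shuffle like `shuffle(rg_buffer=32)` might do (an assumption inferred from the parameter name, not the library's code): indices are grouped into contiguous blocks, block order is shuffled, and rows are shuffled within each block, so nearby rows tend to stay in the same row group and reads remain mostly sequential.

```python
import random

# Hypothetical locality-aware shuffle: shuffle block order, then shuffle
# rows within each block of `rg_buffer` contiguous indices.

def locality_shuffle(n_rows: int, rg_buffer: int, seed: int) -> list[int]:
    rng = random.Random(seed)
    blocks = [list(range(start, min(start + rg_buffer, n_rows)))
              for start in range(0, n_rows, rg_buffer)]
    rng.shuffle(blocks)        # reorder blocks (cheap, few seeks)
    for block in blocks:
        rng.shuffle(block)     # randomize rows inside each block
    return [idx for block in blocks for idx in block]

order = locality_shuffle(n_rows=100, rg_buffer=32, seed=42)
print(sorted(order) == list(range(100)))  # True: a permutation of all rows
```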

Usage with PyTorch

from torch.utils.data import DataLoader

ds = IndexedParquetDataset.from_folder("./data", auto_fill=True)
train_ds, val_ds = ds.train_test_split(test_size=0.1)

loader = DataLoader(
    train_ds, 
    batch_size=32, 
    shuffle=True, 
    num_workers=4,
    collate_fn=train_ds.generate_collate_fn(on_none='fill')
)
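A plain-Python sketch of what a collate function with `on_none='fill'` semantics could do (hypothetical behavior inferred from the name, not the library's code): rows arrive as dicts, None values are replaced by a per-type default, and columns are gathered into batch lists.

```python
# Hypothetical on_none='fill' collation: replace None with a type-appropriate
# default inferred from the first non-None value in each column.

_FILL = {int: 0, float: 0.0, str: ""}

def collate_fill(batch: list[dict]) -> dict:
    columns = {key: [row[key] for row in batch] for key in batch[0]}
    for key, values in columns.items():
        sample = next((v for v in values if v is not None), None)
        fill = _FILL.get(type(sample))
        columns[key] = [fill if v is None else v for v in values]
    return columns

batch = [{"score": 0.5, "text": "a"}, {"score": None, "text": None}]
print(collate_fill(batch))  # {'score': [0.5, 0.0], 'text': ['a', '']}
```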

Documentation

Full documentation is available on GitHub Pages.

License

Apache 2.0 License

Project details


Download files

Download the file for your platform.

Source Distribution

indexed_parquet_dataset-0.3.1.dev0.tar.gz (3.0 MB)

Uploaded Source

Built Distribution


indexed_parquet_dataset-0.3.1.dev0-py3-none-any.whl (25.2 kB)

Uploaded Python 3

File details

Details for the file indexed_parquet_dataset-0.3.1.dev0.tar.gz.

File metadata

File hashes

Hashes for indexed_parquet_dataset-0.3.1.dev0.tar.gz
Algorithm Hash digest
SHA256 10ab20d1428dc21d6b4d213834d7bf03a51dc3e46ac47e587eb6bfe1dcb28cef
MD5 75d5068c3460e9f5b7e831e93c32fccf
BLAKE2b-256 2eec0600d31eadcd8f237ca31dc36b5ccbe8e5898ee29321de24f9aa8751b24a


Provenance

The following attestation bundles were made for indexed_parquet_dataset-0.3.1.dev0.tar.gz:

Publisher: publish.yml on Laeryid/indexed-parquet-dataset

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file indexed_parquet_dataset-0.3.1.dev0-py3-none-any.whl.

File metadata

File hashes

Hashes for indexed_parquet_dataset-0.3.1.dev0-py3-none-any.whl
Algorithm Hash digest
SHA256 c1314a6edc933cdf4ba6bb6238f16c08549061754b68b9316e68564f8c18d0fa
MD5 794e35dc368d5330c60d13d829d4b5f7
BLAKE2b-256 c928fd559057df9956fb1dd34a063bc652b39476d33d8096109e9bf65a691e9d


Provenance

The following attestation bundles were made for indexed_parquet_dataset-0.3.1.dev0-py3-none-any.whl:

Publisher: publish.yml on Laeryid/indexed-parquet-dataset

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
