
Indexed Parquet Dataset

Indexed Parquet Dataset is a high-performance Python library for O(1) random access to massive datasets in Parquet format.

It is specifically optimized for Deep Learning (PyTorch), consumes minimal memory, and supports advanced features such as Schema Evolution (working with files of different schemas in a single dataset).

Key Features

  • O(1) Random Access: Instantly navigate to any row in a multi-gigabyte dataset without scanning files.
  • 🔄 Schema Evolution: Work with datasets where files have different schemas, missing columns, or renamed fields.
  • 📦 Lazy Loading: Files are opened only when data is requested. Features an efficient LRU handle cache.
  • 🔥 PyTorch Integration: Native support for torch.utils.data.Dataset, including adaptive collate_fn generation.
  • 🛠️ Fluent API: chain methods such as shuffle (global or locality-aware), filter, alias, split, limit, rename, cast, and map.
  • 💾 Index Persistence: Save and fast-load the index from a file.
  • 🏗️ Materialization: "Bake" all transformations into new Parquet files via clone().
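The lazy-loading behavior with an LRU handle cache can be illustrated in miniature with the standard library. This is a conceptual sketch only, not the library's implementation; `get_handle` and `OPEN_LOG` are invented names, and real code would return something like a pyarrow `ParquetFile` instead of a string.

```python
from functools import lru_cache

OPEN_LOG = []

@lru_cache(maxsize=2)  # keep at most 2 file handles "open"
def get_handle(path: str):
    OPEN_LOG.append(path)      # stands in for an expensive file open
    return f"<handle {path}>"  # a real cache would hold an open reader here

get_handle("part_1.parquet")
get_handle("part_2.parquet")
get_handle("part_1.parquet")  # cache hit: no reopen
get_handle("part_3.parquet")  # evicts the least-recently-used handle (part_2)
get_handle("part_2.parquet")  # cache miss: reopened

print(OPEN_LOG)
# ['part_1.parquet', 'part_2.parquet', 'part_3.parquet', 'part_2.parquet']
```

The point of such a cache is that files are only opened when a row inside them is actually requested, and the number of simultaneously open handles stays bounded regardless of dataset size.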

Architecture

The library remains lightweight, storing only metadata and a row map in RAM:

graph TD
    subgraph RAM ["Application (RAM - Lightweight)"]
        direction TB
        subgraph DS ["IndexedParquetDataset"]
            Indices["Indices Array [np.ndarray]<br/>(Shuffled/Filtered indices)"]
            Meta["Metadata & Schema<br/>(File offsets, column mapping)"]
            Cache["File Handle Cache<br/>(Lazy Loading LRU)"]
        end
        
        User["User Code / PyTorch DataLoader"] -- "dataset[idx]" --> Indices
        Indices -- "Global Index" --> Meta
        Meta -- "Find File & Row Offset" --> Cache
    end
    
    subgraph Storage ["Storage (HDD/SSD/S3-over-FUSE)"]
        F1["data_part_1.parquet"]
        F2["data_part_2.parquet"]
        FN["data_part_N.parquet"]
    end
    
    Cache -- "Lazy Read" --> F1
    Cache -- "Lazy Read" --> F2
    Cache -- "Lazy Read" --> FN
    
    F1 -. "O(1) Row Retrieval" .-> User
    F2 -. "O(1) Row Retrieval" .-> User
    FN -. "O(1) Row Retrieval" .-> User
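The "Global Index → Find File & Row Offset" step in the diagram can be sketched as follows: at index-build time, per-file row counts (read from Parquet footers) are turned into cumulative offsets, and a global row index is then mapped to a (file, local row) pair without scanning any file. This is an illustrative sketch under assumed names (`FILE_ROW_COUNTS`, `locate`), not the library's actual code; the file lookup here is O(log F) in the number of files, while retrieval cost is independent of total row count.

```python
import bisect
from itertools import accumulate

# Hypothetical per-file row counts, as gathered from Parquet footers at index time.
FILE_ROW_COUNTS = [1000, 500, 2000]  # data_part_1 .. data_part_3

# Exclusive cumulative offsets: the global index of each file's first row.
OFFSETS = [0] + list(accumulate(FILE_ROW_COUNTS))[:-1]  # [0, 1000, 1500]

def locate(global_idx: int) -> tuple[int, int]:
    """Map a global row index to (file_number, local_row_within_file)."""
    file_no = bisect.bisect_right(OFFSETS, global_idx) - 1
    return file_no, global_idx - OFFSETS[file_no]

print(locate(0))     # (0, 0)     -> first row of data_part_1
print(locate(1250))  # (1, 250)   -> row 250 of data_part_2
print(locate(3499))  # (2, 1999)  -> last row of data_part_3
```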

Installation

From PyPI:

pip install indexed-parquet-dataset

For PyTorch support:

pip install "indexed-parquet-dataset[torch]"

Quickstart

Basic Initialization

from indexed_parquet_dataset import IndexedParquetDataset

# Scans the folder and builds a global row index
ds = IndexedParquetDataset.from_folder("./path/to/data")

print(f"Total rows: {len(ds)}")
print(f"First row: {ds[0]}") # {'id': 1, 'text': '...', ...}

# Random access to any row is instant
sample = ds[999_999]

Transformations (Fluent API)

ds = (IndexedParquetDataset.from_folder("./data")
      .filter(lambda x: x["score"] > 0.5)
      .shuffle(seed=42, rg_buffer=32) # Locality-aware shuffle for best I/O performance
      .alias("text_len", lambda x: len(x["text"]))
      .limit(10000))

# Each row now has a virtual 'text_len' column
print(ds[0]["text_len"])
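Conceptually, transformations like these can operate purely on the index array while the row data stays on disk: filter drops indices, shuffle permutes them, limit slices them. The sketch below illustrates that idea with in-memory toy rows and standard-library code only; it is not how the library is implemented internally (which stores indices as a NumPy array per the architecture above).

```python
import random

# Toy "rows" standing in for on-disk Parquet data.
rows = [{"score": i / 10, "text": "x" * i} for i in range(10)]

# filter: keep only indices whose row passes the predicate
indices = [i for i in range(len(rows)) if rows[i]["score"] > 0.5]

# shuffle: permute the index array, not the data
rng = random.Random(42)
rng.shuffle(indices)

# limit: slice the index array
indices = indices[:3]

# access resolves through the index array to the underlying row
sample = rows[indices[0]]
```

Because every step touches only the (small) index array, chaining transformations stays cheap even when the underlying dataset is huge.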

Usage with PyTorch

from torch.utils.data import DataLoader

ds = IndexedParquetDataset.from_folder("./data", auto_fill=True)
train_ds, val_ds = ds.train_test_split(test_size=0.1)

loader = DataLoader(
    train_ds, 
    batch_size=32, 
    shuffle=True, 
    num_workers=4,
    collate_fn=ds.generate_collate_fn(on_none='fill')
)
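With schema evolution, some rows may lack columns that newer files have, so a batch can contain None values. Presumably on_none='fill' substitutes a default in that case. The sketch below shows what such a fill-style collate function might look like in plain Python; make_fill_collate is an invented name, the exact semantics of the library's generate_collate_fn are an assumption here, and a real collate_fn for PyTorch would typically also convert the columns to tensors.

```python
def make_fill_collate(defaults):
    """Sketch of a 'fill'-style collate_fn: gather per-row dicts into
    per-column lists, replacing None with a caller-supplied default."""
    def collate(batch):
        keys = batch[0].keys()
        return {
            k: [defaults.get(k) if row.get(k) is None else row[k] for row in batch]
            for k in keys
        }
    return collate

collate = make_fill_collate({"label": -1})
batch = collate([{"text": "a", "label": 1}, {"text": "b", "label": None}])
# batch == {"text": ["a", "b"], "label": [1, -1]}
```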

Documentation

Full documentation is available on GitHub Pages.

License

Apache 2.0 License

