
🛠️ datatools: Simple utilities for common data actions

Minimal scripts and reusable functions for implementing common data operations (tokenization, splitting, subsampling, packing, and more).

Built with special support for Mosaic Streaming Datasets (MDS).

Installation

Clone this repo and install it via `pip install -e .`, or install from PyPI via `pip install datatools-py`.

Installation Options

  • Core installation (without Hugging Face datasets support):

    pip install datatools-py
    
  • Full installation (with Hugging Face datasets support):

    pip install datatools-py[datasets]
    # or
    pip install datatools-py[full]
    

The core installation includes all necessary dependencies for working with MDS (Mosaic Streaming Datasets), JSONL, and NumPy files. The Hugging Face datasets library is only required if you need to load HuggingFace datasets, Arrow, or Parquet files.
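Since the Hugging Face `datasets` dependency is optional, code that wants to fall back gracefully can probe for it at runtime. The snippet below is a generic stdlib check, not part of datatools itself:

```python
import importlib.util

def has_hf_datasets() -> bool:
    """Return True if the optional Hugging Face `datasets` package is importable."""
    return importlib.util.find_spec("datasets") is not None

if has_hf_datasets():
    print("Arrow/Parquet/HF dataset loading available")
else:
    print("Core formats only (MDS, JSONL, NumPy)")
```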

Library

datatools provides core library functions for building custom data pipelines, available via `from datatools import load, process`.

Core functions

load(path, load_options)

Loads the dataset at the path and automatically infers its format (e.g., compressed JSON, PyArrow, MDS) from clues in the file extensions and directory structure. It also supports MDS datasets over S3 and compressed MDS files (`.mds.zstd`, `.mds.zst`).
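The format inference is, roughly, an extension- and layout-based dispatch. The sketch below is illustrative only, with a hypothetical helper name, and does not reflect datatools internals:

```python
from pathlib import Path

def infer_format(path: str) -> str:
    """Illustrative guess at a dataset's format from its path (not the real loader)."""
    p = Path(path)
    if (p / "index.json").exists():       # MDS directories carry an index.json
        return "mds"
    suffixes = "".join(p.suffixes)        # handles stacked extensions like .jsonl.zst
    if ".jsonl" in suffixes or ".json" in suffixes:
        return "jsonl"
    if ".parquet" in suffixes or ".arrow" in suffixes:
        return "arrow"
    if ".npy" in suffixes:
        return "numpy"
    if ".mds" in suffixes:                # covers .mds, .mds.zstd, .mds.zst
        return "mds"
    return "unknown"
```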


process(input_dataset, process_fn, output_path, process_options)

Processes an input dataset and writes the results to disk. It supports:

  1. Multi-processing across many CPUs, e.g. `ProcessOptions(num_proc=16)` (or the flag `-w 16`)
  2. Slurm array parallelization, e.g. `ProcessOptions(slurm_array=True)` (or `--slurm_array`), which automatically sets up `job_id` and `num_jobs` from Slurm environment variables
  3. Custom indexing, e.g. working on only a subset with `--index_range 0 30` or using a custom index file with `--index_path path/to/index.npy`. See `ProcessOptions` for details.
  4. By default, output is written as mosaic-streaming MDS shards, which are merged into a single MDS dataset when the job finishes. The code also supports writing JSONL files (`--jsonl`) and ndarray files for each column (`--ndarray`); shards in these output formats are not automatically merged.

The `process_fn` should be a function that takes one to three arguments:

  1. A subset of the data supporting `len(...)` and `[...]` indexing
  2. The global indices corresponding to the subset (optional)
  3. The `process_id`, for logging or sharding purposes (optional)
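For instance, a `process_fn` that uses all three arguments might look like the following sketch (the record fields are made up for illustration; since the function only relies on iteration, it can also be sanity-checked outside `process`):

```python
def annotate_docs(data_subset, indices, process_id):
    """Yield each record tagged with its global index and the worker that produced it."""
    for item, global_idx in zip(data_subset, indices):
        yield {
            "text": item["text"],
            "global_index": global_idx,   # position in the full dataset
            "worker": process_id,         # useful for logging or sharded output paths
        }
```

Calling `list(annotate_docs([{"text": "a"}], [7], 0))` directly is a quick way to test such a function before handing it to `process`.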

Example

from datatools import load, process, ProcessOptions
from transformers import AutoTokenizer

# Load dataset (can be JSON, Parquet, MDS, etc.)
dataset = load("path/to/dataset")

# Setup tokenizer and processing function
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
def tokenize_docs(data_subset):
    for item in data_subset:
        # Tokenize text and return dict with tokens and length
        tokens = tokenizer.encode(item["text"], add_special_tokens=False)
        
        # Chunk the text into 1024 token chunks
        for i in range(0, len(tokens), 1024):
            yield {
                "input_ids": tokens[i:i+1024],
                "length": len(tokens[i:i+1024])
            }

# Process dataset with 4 workers and write to disk
process(dataset, tokenize_docs, "path/to/output", process_options=ProcessOptions(num_proc=4))

Scripts

datatools comes with the following default scripts:

  • tokenize: Tokenize datasets per document
  • pack: Pack tokenized documents into fixed sequences
  • peek: Print datasets as JSON to stdout
  • wrangle: Subsample, merge datasets, make random splits (e.g., train/test/validation), etc.
  • merge_index: Merge Mosaic streaming datasets in subfolders into a larger dataset

Run `<script> --help` for detailed arguments. Many scripts automatically include all arguments from `ProcessOptions` (e.g., the number of processes via `-w <processes>`) and `LoadOptions`.
