
Supercharge Your LLM Training with Checkpointable Data Loading


Epochraft


Introduction

Epochraft is a data loader library designed with a focus on on-the-fly tokenization and checkpointing, specifically for the streamlined training of LLMs.

Why On-the-Fly Tokenization?

Previous frameworks such as GPT-NeoX require pre-tokenization: the training data has to be tokenized and stored before pretraining can begin. This is cumbersome and adds extra steps. Training cannot start until tokenization is done, changing the dataset or the tokenizer means redoing it, and the tokenized data itself has to be managed.

You may ask, "But isn't on-the-fly tokenization too slow?" The answer is a definitive no.

For instance, Llama2-7B trains at roughly 3K tokens/sec per GPU (see Table 2 of the Llama 2 paper). The Llama2 tokenizer processes roughly 1M tokens/sec in a single CPU process, so tokenizing in real time in the background easily keeps the GPUs saturated. Larger models are even easier: a 13B model needs only about 1.5K tokens/sec per GPU, and a 70B model just 300 tokens/sec.
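As a back-of-the-envelope check (using the approximate figures above; actual throughput varies with hardware and text), a single tokenizer process can keep many GPUs fed:

tokenizer_tps = 1_000_000  # tokens/sec for one CPU tokenizer process (approximate)

# Approximate per-GPU training consumption in tokens/sec
gpu_tps = {"7B": 3_000, "13B": 1_500, "70B": 300}

for model_size, tps in gpu_tps.items():
    print(f"{model_size}: one tokenizer process can feed ~{tokenizer_tps // tps} GPUs")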

Why Data Loader Checkpointing?

The standard practice for checkpointing in PyTorch is to save the state_dict of the model and the optimizer. When training LLMs, however, we also want to save the state_dict of the data loader.

In the era of training ResNets for 90 epochs, there was no such need: checkpointing at the end of each epoch was enough. But in the age of LLMs, we often train for only about one epoch.

When training for a single epoch, the data loader must be able to resume from the middle of an epoch. After resuming, training should use only the data that has not yet been consumed. And because the data is large, resumption has to be efficient, not a scheme that reads and discards everything seen so far.

Epochraft: On-the-Fly Tokenization + Checkpointing

Epochraft is designed to provide both on-the-fly tokenization and checkpointing. Neither feature is exceptionally difficult on its own, but supporting both at once imposes significant constraints at the core of the design. That is why no existing library offers both.

Epochraft accepts a variety of existing datasets as sources, so it supports a wide range of data formats. In particular, when using MosaicML Streaming as a source, you can train by streaming data directly from S3, and resumption is efficient.
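For example, here is a minimal sketch of using MosaicML Streaming as the source. It assumes the streaming package is installed, the S3 path is hypothetical, and the StreamingDataset is passed to from_iterable just like the Hugging Face dataset in the Quick Start example below:

from epochraft import CheckpointableDataset
from streaming import StreamingDataset

# Hypothetical MDS-formatted dataset stored in S3, cached locally
source = StreamingDataset(remote="s3://my-bucket/my-corpus", local="/tmp/my-corpus")

train_dataset = CheckpointableDataset.from_iterable(source, repeat=True)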

Because Epochraft focuses on LLM training, it comes with the features needed for pretraining and SFT. Operations such as tokenization and chunking are available out of the box, and tokenization runs efficiently across multiple processes.

Quick Start

Installation

pip install epochraft

Example

This is an example of building a typical pretraining dataset. Other examples, such as SFT, will be added soon.

from datasets import load_dataset
from epochraft import CheckpointableDataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Various data sources are supported. Refer to the explanation below for more details.
source = load_dataset("wikitext", "wikitext-103-raw-v1", split="train", streaming=True)

train_dataset = (
    CheckpointableDataset
    .from_iterable(source, repeat=True)  # Create a CheckpointableDataset from the source
    .tokenize(tokenizer)                 # Tokenize the texts
    .ensure_bos_eos(tokenizer)           # Add BOS and EOS tokens where necessary
    .concat_chunk(1024)                  # Concatenate and chunk the tokens into a fixed length of 1024 tokens
    .batch(8)                            # Group the data into mini-batches with a batch size of 8
    .take(10_000)                        # Limit the dataset to the first 10,000 batches
    .enumerate()                         # Add a "step" field to keep track of the training step
)

for batch in train_dataset:
    step = batch["step"]            # Current iteration number (int)
    input_ids = batch["input_ids"]  # Input data for this iteration (torch.Tensor)

    # Implement the `step`-th training iteration using `input_ids` here
    ...
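
As an illustration, the loop body might look like the following. This is a hedged sketch, not part of Epochraft: it assumes a Hugging Face causal LM (the small EleutherAI/pythia-160m checkpoint, which shares the GPT-NeoX tokenizer, is just a placeholder) and a plain AdamW optimizer.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for batch in train_dataset:
    step = batch["step"]
    input_ids = batch["input_ids"].cuda()

    # Standard causal LM step: passing input_ids as labels makes the model
    # compute the next-token prediction loss internally.
    loss = model(input_ids, labels=input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.3f}")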

Checkpointing

Normally, you would obtain and save the state_dict of the model and the optimizer. In addition, please also obtain and save the state_dict of the iterator:

train_iter = train_dataset.iter()  # `iter(train_dataset)` would also work

for batch in train_iter:
    step = batch["step"]
    ...

    if step % ckpt_freq == 0:
        state_dict = {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "iter": train_iter.state_dict(),
        }
        torch.save(state_dict, ckpt_path)

You can restore the state of the iterator by passing the state_dict to the iter method of the CheckpointableDataset instance.

state_dict = torch.load(ckpt_path)
train_iter = train_dataset.iter(state_dict=state_dict["iter"])
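
The model and optimizer are restored from the same checkpoint as usual (a minimal sketch; model, optimizer, and ckpt_path are the same hypothetical objects as above):

model.load_state_dict(state_dict["model"])
optimizer.load_state_dict(state_dict["optimizer"])

# Resumption is efficient: the iterator continues from its exact position in
# the stream rather than re-reading and discarding previously consumed data.
for batch in train_iter:
    ...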

Development

pip install -e .[development]
mypy .; black .; flake8 .; isort .
pytest tests

