Supercharge Your LLM Training with Checkpointable Data Loading
Epochraft
Introduction
Epochraft is a data loader library optimized for the streamlined training of LLMs, featuring streaming from cloud storage, on-the-fly tokenization, and iterator checkpointing. The name comes from a fusion of "epoch" and "craft".
Streaming from Cloud Storage
Storing the vast datasets required for pretraining LLMs on local disks can be daunting. Even when it is feasible, transferring the data prior to training can be cumbersome and time-consuming.
Epochraft supports a wide array of storage backends, including S3, GCS, Azure Blob Storage, HDFS, WebHDFS, HTTP, HTTPS, SFTP, and the local filesystem (via smart-open). One of its salient features is the ability to train while concurrently downloading data. Because of its streaming architecture, a complete shuffle of the data isn't possible. However, Epochraft achieves a useful degree of shuffling by reading from multiple data shards simultaneously, interleaving the incoming data, and then performing an additional shuffle within a fixed-size buffer.
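The buffer-based shuffle described above is a standard streaming technique. The sketch below is illustrative only (it is not Epochraft's internal implementation): each incoming item evicts a randomly chosen item from a bounded buffer, yielding an approximate shuffle with O(buffer_size) memory.

```python
import random
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def buffered_shuffle(source: Iterable[T], buffer_size: int, seed: int = 0) -> Iterator[T]:
    """Approximate shuffle of a stream using a fixed-size buffer.

    Items are held in a buffer; once it is full, each incoming item evicts
    and yields a randomly chosen buffered item. This trades full-shuffle
    quality for bounded memory, which is what makes shuffling feasible on
    web-scale streaming corpora.
    """
    rng = random.Random(seed)
    buffer: list[T] = []
    for item in source:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            idx = rng.randrange(buffer_size)
            yield buffer[idx]
            buffer[idx] = item
    # Drain the remaining buffered items in random order.
    rng.shuffle(buffer)
    yield from buffer
```

Interleaving multiple shards before this buffer further decorrelates neighboring samples, since consecutive items then come from different source files.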
It also supports Python's sequence and iterable interfaces. For instance, it can consume Hugging Face Datasets. While there may seem to be little benefit to using Epochraft with such small datasets, this makes it possible to share the same codebase between SFT and pretraining.
On-the-Fly Tokenization
Some existing frameworks require pre-tokenization: you must tokenize the training data and store the result before pretraining can begin. This is cumbersome. Training cannot start until this step completes; if the dataset or the tokenizer changes, the step must be repeated; and you take on the added responsibility of managing the tokenized data.
Now, you might wonder, "Isn't on-the-fly tokenization too slow?" The answer is a resounding no.
For instance, training Llama2-7B consumes approximately 3K tokens/sec per GPU (as seen in Table 2). The Llama2 tokenizer can process nearly 1M tokens/sec with a single CPU process. This means that even when tokenizing in real time, the GPUs can be fully utilized without a bottleneck. For larger models, the situation becomes even more favorable: a 13B model needs only 1.5K tokens/sec to saturate each GPU, and a 70B model only 300 tokens/sec.
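The arithmetic can be made explicit. Using the figures quoted above (these are the numbers from the text, not benchmarks run here), a single CPU tokenizer process has orders of magnitude more throughput than one GPU consumes:

```python
# Back-of-envelope check that on-the-fly tokenization keeps GPUs fed.
# All figures are the ones quoted in the surrounding text.
TOKENIZER_TOKENS_PER_SEC = 1_000_000  # Llama2 tokenizer, one CPU process

gpu_consumption = {  # tokens/sec consumed per GPU during training
    "7B": 3_000,
    "13B": 1_500,
    "70B": 300,
}

for model, rate in gpu_consumption.items():
    headroom = TOKENIZER_TOKENS_PER_SEC / rate
    print(f"{model}: one tokenizer process can feed ~{headroom:.0f} GPUs")
```

Even for the 7B model, where per-GPU consumption is highest, a single tokenizer process could in principle keep hundreds of GPUs supplied.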
Data Loader Checkpointing
Beyond the state_dicts of models and optimizers, shouldn't we consider saving the state_dict of the data loader as well?
When training ResNets for 90 epochs was the norm, this wasn't a concern: a checkpoint at the end of each epoch sufficed. In the current era of LLMs, however, training often consists of a single epoch.
When training for just 1 epoch, it becomes crucial to ensure that the data loader can pick up from where it left off in the middle of an epoch. Upon resuming training, it's vital to process only the data that hasn't been utilized up to that interruption point. Given the vastness of the data, an efficient resumption mechanism is essential.
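The idea behind data loader checkpointing can be shown with a toy iterator that records how far it has consumed its stream, so a restart skips exactly the data already seen. This is illustrative only and is not Epochraft's internal implementation:

```python
from typing import Iterator


class CheckpointableRange:
    """Toy iterator with a state_dict/load_state_dict protocol.

    It tracks its position in the stream so that, after a crash or
    interruption, a fresh instance can resume from the saved position
    instead of replaying (or re-downloading) already-consumed data.
    """

    def __init__(self, n: int) -> None:
        self.n = n
        self.pos = 0

    def __iter__(self) -> Iterator[int]:
        while self.pos < self.n:
            value = self.pos
            self.pos += 1
            yield value

    def state_dict(self) -> dict:
        return {"pos": self.pos}

    def load_state_dict(self, state: dict) -> None:
        self.pos = state["pos"]


# Consume part of the stream, checkpoint, then resume in a new instance.
it = CheckpointableRange(10)
for v in it:
    if v == 3:
        break
state = it.state_dict()

resumed = CheckpointableRange(10)
resumed.load_state_dict(state)
print(list(resumed))  # only the unconsumed tail of the stream
```

A real implementation must also capture shard positions, shuffle-buffer contents, and RNG state, but the contract is the same: the state_dict fully determines where the stream resumes.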
Quick Start
Installation
pip install epochraft
Example
This is an example of building a typical pretraining dataset. We will soon add other examples such as SFT.
from epochraft import CheckpointableDataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
# `{00..99}` will be expanded (see `braceexpand`)
url = "s3://.../cc-100/cc-100_{00..99}.jsonl"
train_dataset = (
    CheckpointableDataset
    .from_files(url, repeat=True, shuffle_shards=True)
    .tokenize(tokenizer)        # Tokenize the texts
    .ensure_bos_eos(tokenizer)  # Add BOS and EOS tokens where necessary
    .concat_chunk(1024)         # Concatenate and chunk the tokens into fixed-length sequences of 1024 tokens
    .shuffle(1000)              # Shuffle the sequences using a buffer of size 1000
    .batch(8)                   # Group the data into mini-batches with a batch size of 8
)
for batch in train_dataset:
    input_ids = batch["input_ids"]  # Input data for this iteration (torch.Tensor)
    # Implement the training iteration using `input_ids` here
    ...
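The concat-and-chunk step in the pipeline above packs variable-length documents into uniform training sequences. A minimal sketch of the technique (not Epochraft's code) looks like this:

```python
from typing import Iterable, Iterator


def concat_chunk(token_streams: Iterable[list[int]], chunk_len: int) -> Iterator[list[int]]:
    """Concatenate tokenized documents into one stream and emit
    fixed-length chunks, the standard way to turn documents of varying
    length into uniform sequences for LLM pretraining.
    """
    buffer: list[int] = []
    for tokens in token_streams:
        buffer.extend(tokens)
        while len(buffer) >= chunk_len:
            yield buffer[:chunk_len]
            buffer = buffer[chunk_len:]
    # A trailing remainder shorter than chunk_len is dropped in this sketch.
```

Because document boundaries are erased by the concatenation, this is why the preceding step inserts BOS/EOS tokens: they are the only remaining markers of where one document ends and the next begins.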
Checkpointing
Normally, you would obtain and save the state_dict of the model and optimizer. In addition to those, please also obtain and save the state_dict of the iterator.
train_iter = train_dataset.iter()  # Same meaning as `iter(train_dataset)`

for batch in train_iter:
    step = batch["step"]
    ...

    if step % ckpt_freq == 0:
        state_dict = {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "iter": train_iter.state_dict(),
        }
        torch.save(state_dict, ckpt_path)
Resumption
You can restore the state of the iterator by passing the state_dict to the iter method of the CheckpointableDataset instance.
state_dict = torch.load(ckpt_path)
train_iter = train_dataset.iter(state_dict=state_dict["iter"])
Development
pip install -e .[development]
mypy .; black .; flake8 .; isort .
pytest tests