Megatron's multi-modal data loader
DISCLAIMER: This package contains research code. APIs may change. It may remain part of Megatron or be integrated into NVIDIA's Data Loading Library (DALI).
What is this?
Megatron Energon is the multi-modal data loader of Megatron (you can also use it independently).
It's best at
- loading large training data to train large multi-modal models
- blending many different datasets together
- distributing the work across many nodes and processes of a cluster
- ensuring reproducibility and resumability
- adapting easily to various types of data samples and processing
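Dataset blending, for instance, comes down to drawing each successive sample from one of several datasets according to configured weights, using a seeded generator so the stream is reproducible. A minimal stdlib-only sketch of the idea (this is an illustration of the concept, not the energon API; `blend` and the toy datasets are made up here):

```python
import itertools
import random

def blend(datasets, weights, seed=0):
    """Yield samples from several (possibly infinite) iterators,
    choosing the source dataset for each sample by weight."""
    rng = random.Random(seed)  # fixed seed keeps the stream reproducible
    iterators = [iter(d) for d in datasets]
    while True:
        # Pick a source dataset proportionally to its weight
        it = rng.choices(iterators, weights=weights, k=1)[0]
        yield next(it)

# Two toy "datasets" that repeat forever
captions = itertools.cycle(["caption-sample"])
vqa = itertools.cycle(["vqa-sample"])

stream = blend([captions, vqa], weights=[0.8, 0.2])
first_1000 = [next(stream) for _ in range(1000)]
```

With weights 0.8 and 0.2, roughly four out of five drawn samples come from the first dataset; energon applies the same principle at cluster scale with resumable state.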
Try using it together with Megatron Core.
Quickstart
Megatron Energon is a pip-installable Python package that offers
- dataset-related classes that you can import in your project
- a command line utility for data preprocessing and conversion
This document is just a quick start. Please also check out the documentation.
Installation
pip install megatron-energon
Or
pip install git+https://github.com/NVIDIA/Megatron-Energon.git
NOTE: We encourage you to install the package rather than just importing a local copy. This ensures you have all required dependencies and can use the command line tool.
For more details on installing this package, see here.
Usage of command line tool
After installation, the command energon will be available.
Here are some examples of things you can do:
Command | Description
---|---
energon prepare DATASET_ROOT | Take an existing WebDataset and add the required YAML files to turn it into an energon-compatible dataset
energon lint DATASET_ROOT | Verify that the dataset complies with the energon dataset format and that all samples are loadable
Usage of the library
To get started, pick a WebDataset-compliant dataset and run energon prepare DATASET_ROOT on it. This launches the interactive assistant and creates the .nv-meta folder.
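The starting point is a standard WebDataset: tar-file shards in which all entries sharing a basename form one sample, distinguished by file extension. A small stdlib sketch that writes such a shard (the file names and contents are invented for illustration):

```python
import io
import tarfile

# Build a tiny WebDataset-style shard: entries that share a basename
# (e.g. "000000") belong to the same sample.
samples = {
    "000000": {"jpg": b"<binary image bytes>", "txt": b"a red bicycle"},
    "000001": {"jpg": b"<binary image bytes>", "txt": b"two cats"},
}

with tarfile.open("shard-000000.tar", "w") as tar:
    for key, parts in samples.items():
        for ext, data in parts.items():
            info = tarfile.TarInfo(name=f"{key}.{ext}")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# A reader groups the entries back into samples by basename
with tarfile.open("shard-000000.tar") as tar:
    names = tar.getnames()
```

energon prepare then adds its metadata on top of shards like this without rewriting them.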
Once done, try to load it from your Python program:
from megatron.energon import get_train_dataset, get_loader

train_loader = get_loader(get_train_dataset(
    '/my/dataset/path',
    batch_size=32,
    shuffle_buffer_size=None,
    max_samples_per_sequence=None,
))

for batch in train_loader:
    # Do something with batch
    # Infer, gradient step, ...
    pass
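The shuffle_buffer_size argument above controls streaming shuffling: samples are read sequentially into a fixed-size buffer, and each emitted sample is drawn at random from that buffer, so arbitrarily large datasets never need to fit in memory (None disables it in the snippet). A stdlib sketch of the technique, with illustrative names:

```python
import random

def shuffle_stream(samples, buffer_size, seed=0):
    """Approximately shuffle an iterable with a fixed-size buffer."""
    rng = random.Random(seed)
    buffer = []
    for sample in samples:
        buffer.append(sample)
        if len(buffer) >= buffer_size:
            # Emit a random buffered element, keep the rest
            idx = rng.randrange(len(buffer))
            buffer[idx], buffer[-1] = buffer[-1], buffer[idx]
            yield buffer.pop()
    rng.shuffle(buffer)  # drain whatever is left at the end
    yield from buffer

shuffled = list(shuffle_stream(range(100), buffer_size=16))
```

A larger buffer gives a better approximation of a full shuffle at the cost of memory; every input sample is still emitted exactly once.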
For more details, read the documentation.
Most likely, you'll need your own task encoder.
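A task encoder is the piece you write yourself to turn raw loaded samples into model-ready batch entries; the actual energon TaskEncoder API is covered in the documentation. Purely as a concept sketch (the class, method, and field names below are illustrative, not energon's):

```python
from dataclasses import dataclass

@dataclass
class RawSample:
    """A raw sample as it might come out of a shard."""
    image_bytes: bytes
    caption: str

class CaptioningTaskEncoder:
    """Illustrative encoder: tokenize the caption, pass the image through.
    A real encoder would decode images and produce framework tensors."""

    def __init__(self, vocab):
        self.vocab = vocab  # word -> token id; 0 is "unknown"

    def encode_sample(self, sample: RawSample):
        tokens = [self.vocab.get(word, 0) for word in sample.caption.split()]
        return {"image": sample.image_bytes, "tokens": tokens}

vocab = {"a": 1, "red": 2, "bicycle": 3}
encoder = CaptioningTaskEncoder(vocab)
encoded = encoder.encode_sample(RawSample(b"<image>", "a red bicycle"))
```

The loader applies your encoder per sample (and per batch), so task-specific processing stays out of the dataset format itself.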