DeltaTorch allows loading training data from DeltaLake tables for training Deep Learning models using PyTorch
Project description
deltatorch
Concept
deltatorch allows users to use DeltaLake tables directly as a data source for training with PyTorch.
Using deltatorch, users can create a PyTorch DataLoader to load the training data. We support distributed training using PyTorch DDP as well.
Why yet another data-loading framework?
- Many Deep Learning projects struggle with efficient data loading, especially for tabular datasets or datasets containing many small images
- Classical Big Data formats like Parquet can help with this, but they are hard to operate:
  - writers might block readers
  - a failed write can make the whole dataset unreadable
- More complex projects might ingest data continuously, even during training

The Delta Lake storage format solves all of these issues, but PyTorch has no direct support for DeltaLake datasets.
deltatorch introduces such support and allows users to use DeltaLake for training Deep Learning models with PyTorch.
Usage
Requirements
- Python version > 3.8
- pip or conda
Installation
- with pip:
pip install git+https://github.com/delta-incubator/deltatorch
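A quick sanity check after installation is to import the two public names used in the examples below (a minimal sketch, not part of the documented workflow):

# The import should succeed once deltatorch is installed.
from deltatorch import create_pytorch_dataloader, FieldSpec

print(create_pytorch_dataloader.__name__, FieldSpec.__name__)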
Create a PyTorch DataLoader to read our DeltaLake table
To utilize deltatorch, we first need a DeltaLake table containing the training data we would like to use for training our PyTorch deep learning model.
There is one requirement: this table must have an autoincrement ID field. This field is used by deltatorch for sharding and parallelization of loading.
After that, we can use the create_pytorch_dataloader function to create a PyTorch DataLoader, which can be used directly during training.
Below you can find an example of creating a DataLoader for the following table schema:
CREATE TABLE TRAINING_DATA
(
image BINARY,
label BIGINT,
id INT
)
USING delta LOCATION 'path'
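As a side note, here is one way such a table could be populated without Spark. This is a hedged sketch using pandas and the deltalake (delta-rs) Python package, neither of which is part of deltatorch; the helper name and column contents are purely illustrative.

import pandas as pd
from deltalake import write_deltalake

def write_training_table(path: str, images: list[bytes], labels: list[int]) -> None:
    # Hypothetical helper: builds a small Delta table matching the schema above,
    # including the contiguous autoincrement "id" column deltatorch requires.
    df = pd.DataFrame(
        {
            "image": images,            # raw image bytes (BINARY column)
            "label": labels,            # integer class labels (BIGINT column)
            "id": range(len(images)),   # contiguous autoincrement id used for sharding
        }
    )
    # mode="overwrite" replaces any existing table at this path
    write_deltalake(path, df, mode="overwrite")

Any other writer works as well (for example the Spark CREATE TABLE above followed by inserts), as long as the id column is present.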
After the table is ready, we can use the create_pytorch_dataloader function to create a PyTorch DataLoader:
from deltatorch import create_pytorch_dataloader
from deltatorch import FieldSpec
from torchvision import transforms

# Example transform (not part of the original snippet): any callable
# image transform can be passed to FieldSpec via the `transform` argument.
transform = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]
)

def create_data_loader(path: str, batch_size: int):
    return create_pytorch_dataloader(
        # Path to the DeltaLake table
        path,
        # Autoincrement ID field
        id_field="id",
        # Fields which will be used during training
        fields=[
            FieldSpec(
                "image",
                # Load the image using Pillow
                load_image_using_pil=True,
                # PyTorch transform applied to the loaded image
                transform=transform,
            ),
            FieldSpec("label"),
        ],
        # Number of reader workers
        num_workers=2,
        # Shuffle data inside the record batches
        shuffle=True,
        # Batch size
        batch_size=batch_size,
    )
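The returned DataLoader can then be consumed like any other PyTorch DataLoader. The sketch below assumes that each batch is a dictionary keyed by the requested field names ("image" and "label"); the model, optimizer, and loss function are placeholders.

import torch
import torch.nn as nn

def train_one_epoch(model: nn.Module, path: str, batch_size: int) -> None:
    loader = create_data_loader(path, batch_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for batch in loader:
        # Assumed batch layout: a dict keyed by the field names requested above.
        images = batch["image"]
        labels = batch["label"]

        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

For DDP, the same loader can be created on every rank after wrapping the model in torch.nn.parallel.DistributedDataParallel; the id-field sharding described above is what deltatorch relies on to split the data across readers, so no DistributedSampler is attached in this sketch.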
File details
Details for the file deltatorch-0.0.3.tar.gz.
File metadata
- Download URL: deltatorch-0.0.3.tar.gz
- Upload date:
- Size: 7.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.0 CPython/3.11.3 Darwin/23.0.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 07206b3e98348bd3c58f70ade670ce0149ac506c681ed2e24ced0f1f76f0c155
MD5 | 42e239ba4f42e61024817dca55a65b19
BLAKE2b-256 | 814209a597ed1a2df460150a7921b96c42b7318da89bc0c78b1d97ff96051443
File details
Details for the file deltatorch-0.0.3-py3-none-any.whl.
File metadata
- Download URL: deltatorch-0.0.3-py3-none-any.whl
- Upload date:
- Size: 8.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.0 CPython/3.11.3 Darwin/23.0.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | a8f3726f16ea8f417f0dcdb19b1e6630aeb6c5c701649266300fa18903ec5bb2
MD5 | bdcd7b39d7ddcb4ed85da8c3472894a2
BLAKE2b-256 | 4a0556d04a27b8299a05b76e2355c6fcda3007f6ad34710134eca2d937bc8f01