Opinionated declarative utility library for writing PyTorch dataset classes
Project description
What is it?
Opinionated declarative utility library for writing dataset classes. Intended for small PyTorch experiments.
Rationale
PyTorch dataset loading often involves certain common tasks:
- Load tensors or values from a filelist
- Truncate sequence/spatial dims to a maximum length
- Drop items that don't satisfy particular requirements
- Pad sequence/spatial dims to a multiple of a number or a maximum per-batch length
- Pad sequence/spatial dims in groups across multiple data fields in a batch
- Or (on training datasets only) randomly subsample sequence/spatial dims to meet a maximum length constraint, and add a "length" field for the pre-padding lengths
- Apply data augmentations
Implementing these tasks by hand is often highly repetitive and error-prone.
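As an illustration of the kind of hand-written code these actions replace, here is a sketch of random subsampling to a maximum length (the second-to-last bullet above) in plain PyTorch. The `random_subsample` helper is hypothetical, not part of datatia:

```python
import torch

def random_subsample(x, max_len, dim=0):
    """Randomly crop `x` along `dim` to at most `max_len`; also return
    the kept length, which becomes the pre-padding "length" field."""
    length = x.size(dim)
    if length <= max_len:
        return x, length
    # Pick a random crop start so that the crop fits inside the tensor.
    start = torch.randint(0, length - max_len + 1, (1,)).item()
    return x.narrow(dim, start, max_len), max_len

x = torch.arange(10)
crop, length = random_subsample(x, max_len=4)
assert crop.shape == (4,) and length == 4
```

Multiply this by several fields, dims, and train/eval variants, and the repetition adds up quickly.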
Dataset loading code can be further simplified by making certain assumptions:
- All data to be loaded takes the form of either a Python literal or a single tensor that can be loaded from a file on disk.
- Each dataset class takes only one filelist.
- Filelists contain only paths to tensor files or Python literals (such as class IDs).
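Under these assumptions, parsing a filelist row reduces to a split plus a per-column conversion. The sketch below is illustrative only, not datatia's API; the `parse_row` name and the `'tensor'` marker are hypothetical:

```python
import ast

def parse_row(line, datatypes, sep='|'):
    """Split a filelist row into columns: tensor columns stay as file
    paths (to be loaded later); literal columns are parsed safely with
    ast.literal_eval and coerced to the declared type."""
    cols = line.rstrip('\n').split(sep)
    out = []
    for col, dtype in zip(cols, datatypes):
        out.append(col if dtype == 'tensor' else dtype(ast.literal_eval(col)))
    return out

row = parse_row('test/test_files/tensor1_0.pt|test/test_files/tensor2_0.pt|0',
                ['tensor', 'tensor', int])
assert row == ['test/test_files/tensor1_0.pt', 'test/test_files/tensor2_0.pt', 0]
```

`ast.literal_eval` is used rather than `eval` so a filelist can never execute arbitrary code.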
Example usage
We use as an example a 3-column dataset specified by a filelist:
# filelist.txt
test/test_files/tensor1_0.pt|test/test_files/tensor2_0.pt|0
test/test_files/tensor1_1.pt|test/test_files/tensor2_1.pt|1
test/test_files/tensor1_2.pt|test/test_files/tensor2_2.pt|2
When creating a dt.Dataset we specify names and datatypes for the columns in order.
import torch
import datatia as dt

dataset = dt.Dataset(
    filelist='filelist.txt',
    field_specs=[
        dt.FieldSpec(name='tensor1', datatype=torch.Tensor),
        dt.FieldSpec(name='tensor2', datatype=torch.Tensor),
        dt.FieldSpec(name='id', datatype=int),
    ],
    actions=[
        dt.PadGroup(fields=['tensor1', 'tensor2'],
                    dims=[0, 1], values=[0, 0], to_multiple=[4, 5]),
    ],
)
loader = dataset.loader(batch_size=4)
batch = next(iter(loader))
The PadGroup action pads dimensions within a group of tensor columns in a
batch: to the next common multiple of a number (to_multiple), to a fixed
length (to_length), or, by default, to the maximum size of those dimensions
within the batch.
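The pad-to-multiple behaviour can be sketched in plain PyTorch as follows. `pad_group_to_multiple` is a hypothetical stand-in for what the PadGroup action computes, not datatia's implementation, and it assumes a single fill value for all padded dims:

```python
import torch
import torch.nn.functional as F

def pad_group_to_multiple(tensors, dims, to_multiple, pad_value=0):
    """Pad every tensor in the group so each dim in `dims` reaches the
    smallest multiple of the matching `to_multiple` entry that fits the
    largest tensor in the batch."""
    # Target size per padded dim: ceil(batch max / multiple) * multiple.
    targets = {d: -(-max(t.size(d) for t in tensors) // m) * m
               for d, m in zip(dims, to_multiple)}
    out = []
    for t in tensors:
        # F.pad's (left, right) pairs run from the LAST dim backwards.
        pad = [0] * (2 * t.dim())
        for d, target in targets.items():
            pad[2 * (t.dim() - 1 - d) + 1] = target - t.size(d)
        out.append(F.pad(t, pad, value=pad_value))
    return out

a = torch.ones(3, 7)
b = torch.ones(6, 2)
pa, pb = pad_group_to_multiple([a, b], dims=[0, 1], to_multiple=[4, 5])
assert pa.shape == pb.shape == (8, 10)  # dim 0: 6 -> 8; dim 1: 7 -> 10
```

Padding to a multiple (rather than just to the batch max) is useful when a model requires sizes divisible by, e.g., a downsampling factor.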
See datatia/actions.py for other actions (Truncate, Drop, PreMap, LiveMapRow, RandomSubsample, PadGroup) and datatia/datatia.py for FieldSpec and dt.Dataset API.
Action order
Actions may be provided to the API in any order, but they are always executed in a predefined order:
When the dataset initializes:
- Truncate
- PreMap (only works on in-memory tensors)
- Drop

Before collation:
- LiveMapRow
- RandomSubsample

During collation:
- PadGroup
Project details
File details
Details for the file datatia-0.4.3.tar.gz.
File metadata
- Download URL: datatia-0.4.3.tar.gz
- Upload date:
- Size: 10.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.11.0
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | ace2328f06fe8b7fd120283ffd1f411528cf113e8472222f84839349d57bed95 |
| MD5 | 1ba0adfc48e298a9a19fc340ed6b0c20 |
| BLAKE2b-256 | 70ae3bdff2a0bc17ea831208b762664a9f4edd750ccd03a6d654029f24e62b0d |
File details
Details for the file datatia-0.4.3-py3-none-any.whl.
File metadata
- Download URL: datatia-0.4.3-py3-none-any.whl
- Upload date:
- Size: 9.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.11.0
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | cc070501238b86e9d901107451f3e92452e85349f9938148e42e6f072370d63e |
| MD5 | 258dc5baff90ff5bec98d4520b4790a0 |
| BLAKE2b-256 | 5f69f4397bd8f89b07a1167c2b7ebab41fdf1cce64d8dc3f457488976f8c9e44 |