
PyTorch-based library focused on data processing and input pipelines in general.

Project description

torchdata Logo



torchdata is a PyTorch-oriented library focused on data processing and input pipelines in general.

It extends torch.utils.data.Dataset and equips it with functionalities known from tensorflow.data, like map or cache (plus some additions not available there).

All of that with minimal interference (a single call to super().__init__()) in your original PyTorch datasets.

:wrench: Functionalities

  • Use map, apply, reduce or filter
  • Cache data in RAM, on disk or with your own method (even partially, say the first 20%)
  • Full support for PyTorch's Dataset and IterableDataset (including torchvision)
  • General torchdata.maps like Flatten or Select
  • Extensible interface (your own cache methods, cache modifiers, maps etc.); a sketch of a custom cacher follows this list
  • Concrete torchdata.datasets designed for file reading and other general tasks
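
To give a feel for the extensibility point above, here is a rough, hypothetical sketch of a custom in-memory cacher backed by a plain dict. It assumes the cacher protocol boils down to __contains__ (is this sample cached?), __setitem__ (store a freshly computed sample) and __getitem__ (return a cached one); check the documentation for the exact interface that .cache() expects.

class DictCacher:
    # Hypothetical cacher keeping samples in a plain dict; the exact protocol
    # expected by .cache() is described in torchdata's documentation.
    def __init__(self):
        self.data = {}

    def __contains__(self, index):
        # Is the sample at this index already cached?
        return index in self.data

    def __setitem__(self, index, sample):
        # Store a sample after it has been computed for the first time
        self.data[index] = sample

    def __getitem__(self, index):
        # Return the previously cached sample
        return self.data[index]


# dataset = dataset.cache(DictCacher())  # hypothetical usage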

:bulb: Mini examples

  • Create an image dataset, convert it to Tensors, cache it and concatenate it with smoothed labels:
import pathlib

import torchdata
import torchvision
from PIL import Image

class Images(torchdata.Dataset): # Different inheritance
    def __init__(self, path: str):
        super().__init__() # This is the only change
        self.files = list(pathlib.Path(path).glob("*"))

    def __getitem__(self, index):
        return Image.open(self.files[index])

    def __len__(self):
        return len(self.files)


images = Images("./data").map(torchvision.transforms.ToTensor()).cache()

You can concatenate the above dataset with another (say, labels) and iterate over them as usual (a hypothetical sketch of such a labels dataset follows these examples):

for data, label in images | labels:
    pass  # Do whatever you want with your data
  • Cache the first 1000 samples in memory, save the rest on disk in the folder ./cache:
images = (
    ImageDataset.from_folder("./data").map(torchvision.transforms.ToTensor())
    # First 1000 samples in memory
    .cache(torchdata.modifiers.UpToIndex(1000, torchdata.cachers.Memory()))
    # Samples from 1000 to the end are pickled to disk
    .cache(torchdata.modifiers.FromIndex(1000, torchdata.cachers.Pickle("./cache")))
    # You can define your own cachers, modifiers, see docs
)
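
For completeness, the labels dataset referenced in the first example is not defined above; a minimal, hypothetical sketch (with naive label smoothing done in __getitem__, and a made-up SmoothedLabels name) might look like this:

import torch
import torchdata

class SmoothedLabels(torchdata.Dataset):
    # Hypothetical labels dataset; iterates together with Images via `images | labels`
    def __init__(self, labels, num_classes: int, smoothing: float = 0.1):
        super().__init__()  # Same single-line change as in Images
        self.labels = labels
        self.num_classes = num_classes
        self.smoothing = smoothing

    def __getitem__(self, index):
        # One-hot encode the label and spread `smoothing` mass over the other classes
        smoothed = torch.full((self.num_classes,), self.smoothing / (self.num_classes - 1))
        smoothed[self.labels[index]] = 1.0 - self.smoothing
        return smoothed

    def __len__(self):
        return len(self.labels)


labels = SmoothedLabels([0, 2, 1], num_classes=3)  # hypothetical usage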

To see what else you can do, please check the torchdata documentation.

:unlock: Installation

:snake: pip

Latest release:

pip install --user torchdata

Nightly:

pip install --user torchdata-nightly

:whale2: Docker

CPU standalone and various versions of GPU-enabled images are available on Docker Hub.

For a CPU quickstart, issue:

docker pull szymonmaszke/torchdata:18.04

Nightly builds are also available; just prefix the tag with nightly_. If you are going for a GPU image, make sure you have nvidia/docker installed and its runtime set.

:question: Contributing

If you find any issue, or you think some functionality may be useful to others and fits this library, please open a new Issue or create a Pull Request.

To get an overview of things one can do to help this project, see the Roadmap.



Download files


Files for torchdata-nightly, version 1571727923:

Filename                                       Size     File type  Python version
torchdata_nightly-1571727923-py3-none-any.whl  26.7 kB  Wheel      py3
torchdata-nightly-1571727923.tar.gz            21.8 kB  Source     None
