Distributed preprocessing for deep learning.
Tensorcom is a way of loading training data into deep learning frameworks quickly and portably. You can write a single data loading/augmentation pipeline and train one or more jobs in the same or different frameworks with it.
Both Keras and PyTorch can use the Python Connection object for input, but MessagePack and ZMQ libraries exist in all major languages, making it easy to write servers and input operators for any framework.
Tensorcom replaces the use of multiprocessing in Python for that purpose. Both use separate processes for loading and augmentation, but by making the processes and communications explicit, you gain some significant advantages:
- the same augmentation pipeline can be used with different DL frameworks
- augmentation processes can easily be run on multiple machines
- output from a single augmentation pipeline can be shared by many training jobs
- you can start up and test the augmentation pipeline before you start the DL jobs
- DL frameworks wanting to use tensorcom only need a small library to handle input
Using tensorcom for training is very simple. First, start up a data server; for Imagenet, there are two example jobs. The serve-imagenet-dir program illustrates how to use the standard PyTorch Imagenet DataLoader to serve training data:
$ serve-imagenet-dir -d /data/imagenet -b 64 zpub://127.0.0.1:7880
The server will give you information about the rate at which it serves image batches. Your training loop then becomes very simple:
training = tensorcom.Connection("zsub://127.0.0.1:7880", epoch=1000000)
for xs, ys in training:
    train_batch(xs, ys)
If you want multiple jobs for augmentation, just use more publishers, specifying their URLs with Bash-style brace notation.
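The brace notation expands a numeric range like `{0..3}` into one URL per value. A minimal sketch of such an expansion in Python (the helper, the port range, and the expansion semantics shown here are illustrative assumptions, not tensorcom's own implementation):

```python
import re

def expand_braces(url):
    """Expand a Bash-style numeric range like {0..3} into a list of URLs.

    Illustrative helper only; tensorcom's actual expansion may differ.
    """
    m = re.search(r"\{(\d+)\.\.(\d+)\}", url)
    if m is None:
        return [url]
    lo, hi = int(m.group(1)), int(m.group(2))
    # Substitute each value in the range for the {lo..hi} span.
    return [url[:m.start()] + str(i) + url[m.end():] for i in range(lo, hi + 1)]

# A hypothetical range of four publisher ports:
urls = expand_braces("zpub://127.0.0.1:788{0..3}")
# → ['zpub://127.0.0.1:7880', 'zpub://127.0.0.1:7881',
#    'zpub://127.0.0.1:7882', 'zpub://127.0.0.1:7883']
```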
Note that you can start up multiple training jobs connecting to the same server.
Command Line Tools
There are some command line programs to help with developing and debugging these jobs:
- tensormon -- connect to a data server and monitor throughput
- tensorshow -- show images from input batches
- tensorstat -- compute statistics over input data samples
- serve-imagenet-dir -- serve Imagenet data from a file system using PyTorch
- serve-imagenet-shards -- serve Imagenet from shards
There are also example notebooks:
- keras.ipynb -- simple example of using Keras with tensorcom
- pytorch.ipynb -- simple example of using PyTorch with tensorcom
There is no official standard for ZMQ URLs. This library uses the following notation:
- zpush / zpull -- standard PUSH/PULL sockets
- zrpush / zrpull -- reverse PUSH/PULL connections (PUSH socket is server / PULL socket connects)
- zpub / zsub -- standard PUB/SUB sockets
- zrpub / zrsub -- reverse PUB/SUB connections
The pub/sub servers allow the same augmentation pipeline to be shared by multiple learning jobs.
The default transport is TCP/IP, but ZMQ's IPC transport can also be used.
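The scheme table above determines two things: which ZMQ socket type to open and which side binds. A minimal sketch of how such a URL might be interpreted (the helper, its return format, and the bind/connect assignments for the standard PUSH/PULL pair are assumptions inferred from the descriptions above, not tensorcom's API):

```python
from urllib.parse import urlparse

# Map each scheme to (zmq_socket_type, role). PUB binds and SUB connects in the
# standard case (as in the serve/train example earlier); the PUSH/PULL roles are
# inferred from the "reverse" description and are an assumption.
SCHEMES = {
    "zpush": ("PUSH", "connect"), "zpull": ("PULL", "bind"),
    "zrpush": ("PUSH", "bind"),   "zrpull": ("PULL", "connect"),
    "zpub":  ("PUB", "bind"),     "zsub":  ("SUB", "connect"),
    "zrpub": ("PUB", "connect"),  "zrsub": ("SUB", "bind"),
}

def parse_zmq_url(url):
    """Split a tensorcom-style URL into (socket_type, role, tcp_address).

    Illustrative only; tensorcom's actual parsing may differ.
    """
    u = urlparse(url)
    socket_type, role = SCHEMES[u.scheme]
    return socket_type, role, "tcp://" + u.netloc

parse_zmq_url("zsub://127.0.0.1:7880")
# → ('SUB', 'connect', 'tcp://127.0.0.1:7880')
```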
The major way of interacting with the library is through the Connection object, which simply gives you an iterator over training samples.
Data is encoded in a simple binary tensor format; see codec.py for details. The same format can also be used for saving and loading lists of tensors to and from disk.
Data is encoded on 64-byte aligned boundaries to allow easy memory mapping and direct use by CPUs and GPUs.
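Alignment here means each tensor's payload starts on a 64-byte boundary, so a memory-mapped file can hand pointers straight to the compute device. A sketch of the padding arithmetic (the header and write helper are invented for illustration; see codec.py for the real format):

```python
def pad_to_64(n):
    """Number of padding bytes needed to round n up to a 64-byte boundary."""
    return (-n) % 64

def write_aligned(buf, payload):
    """Append payload to buf so it starts 64-byte aligned; return its offset.

    Illustrative only, not the codec.py implementation.
    """
    buf.extend(b"\0" * pad_to_64(len(buf)))  # zero-fill up to the boundary
    offset = len(buf)
    buf.extend(payload)
    return offset

buf = bytearray(b"header bytes")        # 12 bytes of pretend header
off = write_aligned(buf, b"\x01" * 100)
# off == 64: the payload begins at the next 64-byte boundary
```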