The toolkit for fast Deep Learning experiments in Computer Vision
A day-to-day Computer Vision Engineer backpack
TorchOk is based on PyTorch and utilizes PyTorch Lightning for training pipeline routines.
The toolkit consists of:
- Neural Network models that have proven themselves the best not only on PapersWithCode but also in practice. All models share a plug-and-play interface that easily connects backbones, necks and heads for reuse across tasks
- Out-of-the-box support for common Computer Vision tasks: classification, segmentation and image representation, with detection coming soon
- Commonly used datasets, image augmentations and transformations (from Albumentations)
- Fast implementations of retrieval metrics (with the help of FAISS and ranx) and lots of other metrics from torchmetrics
- Export of models to ONNX and the ability to test exported models without changing the datasets
- All components can be customized by inheriting the unified interfaces: Lightning's training loop, tasks, models, datasets, augmentations and transformations, metrics, loss functions, optimizers and LR schedulers
- Training, validation and testing configurations are represented by YAML config files and managed by Hydra
- Only straightforward training techniques are implemented. No bells and whistles
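As an illustration of the config-driven workflow, here is a hypothetical sketch of what a Hydra-managed training config could look like. The keys below are illustrative assumptions and do not reproduce TorchOk's actual config schema:

```yaml
# Hypothetical training config sketch (illustrative keys, not TorchOk's actual schema)
task:
  name: ClassificationTask
  backbone: resnet18
dataset:
  name: CIFAR10
  batch_size: 128
optimizer:
  name: Adam
  lr: 0.001
trainer:
  max_epochs: 30
  accelerator: gpu
```

Hydra lets you override any such field from the command line, which is how the `trainer.accelerator='cpu'` override in the training example below works.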
Installation
pip
Installation via pip can be done in two steps:
- Install PyTorch that meets your hardware requirements via official instructions
- Install TorchOk by running
pip install --upgrade torchok
Conda
To remove a previous installation of the TorchOk environment, run:
conda remove --name torchok --all
To install TorchOk locally, run:
conda env create -f environment.yml
This will create a new conda environment torchok with all dependencies.
Docker
Another way to install TorchOk is through Docker. The built image supports SSH access and exposes ports for Jupyter Lab and TensorBoard. If you don't need any of these, just omit the corresponding arguments. Build the image and run the container:
docker build -t torchok --build-arg SSH_PUBLIC_KEY="<public key>" .
docker run -d --name <username>_torchok --gpus=all -v <path/to/workdir>:/workdir -p <ssh_port>:22 -p <jupyter_port>:8888 -p <tensorboard_port>:6006 torchok
Getting started
The folder examples/configs contains YAML config files with some predefined training and inference configurations.
Train
For a training example, we can use the default configuration examples/configs/classification_cifar10.yml, which specifies the CIFAR-10 dataset and the classification task. The CIFAR-10 dataset (341 MB) will be downloaded automatically into your ~/.cache/torchok/data/cifar10 folder.
To train on all available GPU devices (default config):
python -m torchok -cp ../examples/configs -cn classification_cifar10
To train on all available CPU cores:
python -m torchok -cp ../examples/configs -cn classification_cifar10 trainer.accelerator='cpu'
During training, you can access the training and validation logs by starting a local TensorBoard:
tensorboard --logdir ~/.cache/torchok/logs/cifar10
Export to ONNX
TODO
Run ONNX model
To run an ONNX model, we can use examples/configs/onnx_infer.yaml, but first we need to define the path_to_onnx field in it.
To test the ONNX model:
python test.py -cp examples/configs -cn onnx_infer +entrypoint=test
To run prediction with the ONNX model:
python test.py -cp examples/configs -cn onnx_infer +entrypoint=predict
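The README names the path_to_onnx field but not its position inside onnx_infer.yaml. Assuming it sits at the top level of the config, defining it would look like this (the file path is a placeholder):

```yaml
# Assumed placement: the docs only name the field, not where it lives in the config
path_to_onnx: /path/to/exported_model.onnx
```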
Run tests
python -m unittest discover -s tests/ -p "test_*.py"
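The discover command picks up any file matching test_*.py that defines unittest.TestCase subclasses. A minimal test file, not taken from TorchOk and shown only to illustrate the discovery pattern, looks like:

```python
# tests/test_example.py -- discovered by `python -m unittest discover -s tests/ -p "test_*.py"`
import unittest


class TestExample(unittest.TestCase):
    def test_addition(self):
        # Any method whose name starts with `test_` is collected and run
        self.assertEqual(1 + 1, 2)
```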
To be added soon (TODO)
Tasks
- MOBY (unsupervised training)
- DetectionTask
- InstanceSegmentationTask
Backbones
- Swin-v2
- HRNet
- ViT
- EfficientNet
- MobileNetV3
Segmentation models
- HRNet neck + OCR head
- U-Net neck
Detection models
- YOLOR neck + head
- DETR neck + head
Datasets
- Stanford Online Products
- Cityscapes
- COCO
Losses
- Pytorch Metric Learning losses
- NT-Xent (for unsupervised training)
Metrics
- Segmentation IoU
- Segmentation Dice
- Detection metrics
File details
Details for the file torchok-0.4.8.tar.gz.
File metadata
- Download URL: torchok-0.4.8.tar.gz
- Upload date:
- Size: 88.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.14 CPython/3.9.12 Linux/5.10.102.1-microsoft-standard-WSL2
File hashes
Algorithm | Hash digest
---|---
SHA256 | e855bb4bd0fd9cbf1c9020021b034006f0838415abe0205e6c06dd598cc56a6c
MD5 | 0580b52d4689444cf5ce9989f1df94db
BLAKE2b-256 | 1d9f5aebf4306c2e7e4a42026ee0fe3a92c5f3f5c1043787b3b81740fda184c7
File details
Details for the file torchok-0.4.8-py3-none-any.whl.
File metadata
- Download URL: torchok-0.4.8-py3-none-any.whl
- Upload date:
- Size: 116.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.14 CPython/3.9.12 Linux/5.10.102.1-microsoft-standard-WSL2
File hashes
Algorithm | Hash digest
---|---
SHA256 | cb5b6a0f1a2e57ac139ca71adc2e8627aaf8c8de8530652b5ed5be71643d1752
MD5 | cf5500519f9abf7a41351771eea65aac
BLAKE2b-256 | 194af7698197e0ac49b9af84dffe994da87fac8c992089bd129608eef8f362d8