Modules, operations and models for computer vision in PyTorch
Holocron
Implementations of recent Deep Learning tricks in Computer Vision, easily paired up with your favorite framework and model zoo.
Holocrons were information-storage datacron devices used by both the Jedi Order and the Sith that contained ancient lessons or valuable information in holographic form.
Source: Wookieepedia
Note: support for activation mapping and model summaries has been dropped and moved to standalone packages (torch-cam & torch-scan) to clarify the project scope.
Getting started
Prerequisites
- Python 3.6 (or more recent)
- pip
Installation
You can install the package from PyPI as follows:
pip install pylocron
or using conda:
conda install -c frgfm pylocron
Usage
nn
Main features
- Activation: SiLU/Swish, Mish, NLReLU
- Loss: Focal Loss, MultiLabelCrossEntropy, LabelSmoothingCrossEntropy, MixupLoss
- Convolutions: NormConv2d, Add2d, SlimConv2d
- Regularization: DropBlock
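As a taste of what these modules compute, the Mish activation from the list above is simply x · tanh(softplus(x)). The sketch below is a plain-Python rendition of that formula for scalar inputs, not the `holocron.nn` implementation itself:

```python
import math

def softplus(x: float) -> float:
    """Softplus: ln(1 + e^x), computed with log1p for small-x accuracy."""
    return math.log1p(math.exp(x))

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x))."""
    return x * math.tanh(softplus(x))

# Mish is exactly zero at the origin and close to the identity for large inputs
print(mish(0.0))   # -> 0.0
print(mish(10.0))  # ~10.0
```

In practice you would use the module form from `holocron.nn` inside a `torch.nn.Sequential`, like any other PyTorch activation layer.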
models
Main features
- Classification: Res2Net (based on the great implementation from gasvn), darknet24, darknet19, darknet53, ResNet, ResNeXt, ReXNet.
- Detection: YOLOv1, YOLOv2
- Segmentation: U-Net, UNet++, UNet3+
ops
Main features
optim
Main features
- Optimizer: LARS, Lamb, RAdam, TAdam and customized versions (RaLars)
- Optimizer wrapper: Lookahead, Scout (experimental)
- Scheduler: OneCycleScheduler
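The Lookahead wrapper listed above maintains a set of slow weights that are periodically pulled toward the fast weights produced by the inner optimizer. Here is a minimal scalar sketch of that synchronization step, with hypothetical function names rather than the `holocron.optim` wrapper API:

```python
def lookahead_update(slow, fast, alpha=0.5):
    """Lookahead slow-weight sync: slow <- slow + alpha * (fast - slow)."""
    return [s + alpha * (f - s) for s, f in zip(slow, fast)]

# After k steps of the inner (fast) optimizer, interpolate the slow weights
# toward the fast ones, then restart the fast weights from the slow copy.
slow = [0.0, 1.0]
fast = [2.0, 3.0]  # weights after k inner optimizer steps
slow = lookahead_update(slow, fast)
print(slow)  # -> [1.0, 2.0]
```

With `alpha=0.5` the slow weights move halfway toward the fast weights at every sync, which damps the oscillations of the inner optimizer.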
Usage
You can use the OneCycleScheduler just like any other PyTorch LR scheduler. Please note that it is designed to take a step at every iteration rather than once per epoch. Over a full training run, the learning rate rises to a peak value before annealing back down.
In the illustration, two different parameter groups were used to show the effect of the scheduler on each. This implementation was made before PyTorch officially had one; for better support, it is recommended to consider the PyTorch version.
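To make the per-iteration stepping concrete, here is a scalar sketch of a cosine one-cycle policy (warm-up to a peak, then annealing). The helper names and default hyper-parameters are illustrative assumptions, not the `holocron.optim` API:

```python
import math

def cosine_interp(start: float, end: float, t: float) -> float:
    """Cosine interpolation from `start` (at t=0) to `end` (at t=1)."""
    return end + (start - end) * (1 + math.cos(math.pi * t)) / 2

def one_cycle_lr(step: int, total_steps: int, lr_max: float = 0.1,
                 div: float = 25.0, final_div: float = 1e4,
                 warmup_frac: float = 0.3) -> float:
    """Learning rate at a given iteration under a cosine one-cycle policy."""
    warmup = int(warmup_frac * total_steps)
    if step < warmup:
        # warm-up phase: lr_max / div -> lr_max
        return cosine_interp(lr_max / div, lr_max, step / warmup)
    # annealing phase: lr_max -> lr_max / final_div
    return cosine_interp(lr_max, lr_max / final_div,
                         (step - warmup) / (total_steps - warmup))

# one scheduler step per training iteration, not per epoch
lrs = [one_cycle_lr(step, total_steps=1000) for step in range(1000)]
# the curve peaks at the end of warm-up, then anneals close to zero
```

Stepping per iteration is what produces the smooth curve across the whole run; stepping per epoch would make the schedule far coarser.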
Technical roadmap
The project is currently under development, here are the objectives for the next releases:
- Standardize models: standardize models by task
- Speed benchmark: compare the execution speed of holocron.nn functions
- Reference scripts: add reference training scripts
Documentation
The full package documentation is available here, with detailed specifications. It was built with Sphinx, using a theme provided by Read the Docs.
Contributing
Please refer to CONTRIBUTING if you wish to contribute to this project.
License
Distributed under the MIT License. See LICENSE for more information.