Luxonis Training Framework
Luxonis training framework (`luxonis-train`) is intended for training deep learning models that can run fast on OAK products.
The project is in an alpha state - please report any feedback.
Installation
`luxonis-train` is hosted on PyPI and can be installed with `pip` as:
pip install luxonis-train
This command will also create a `luxonis_train` executable in your `PATH`. See `luxonis_train --help` for more information.
Usage
The entire configuration is specified in a YAML file. This includes the model structure, losses, metrics, optimizers, etc. For specific instructions and example configuration files, see Configuration.
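Pulling together only the fields referenced elsewhere in this document (`trainer.batch_size`, `trainer.epochs`, augmentations, and the `losses` section), a minimal config sketch might look like the following. Treat the overall shape as illustrative rather than the verified schema; the Configuration docs define the real one.

```yaml
# Illustrative sketch only: these fields appear elsewhere in this document,
# but the full schema is defined in the Configuration docs.
trainer:
  batch_size: 8
  epochs: 10
  preprocessing:
    augmentations:
      - name: RotateCustom

losses:
  - name: CustomLoss
    params:
      k_steps: 12
```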
Data Preparation
This library requires data to be in the Luxonis Dataset Format.
For instructions on how to create a dataset in the LDF, follow the examples in the luxonis-ml repository.
Training
Once you've created your `config.yaml` file, you can train the model using this command:
luxonis_train train --config config.yaml
To manually override some config parameters, provide them as key-value pairs on the command line. For example:
luxonis_train train --config config.yaml trainer.batch_size 8 trainer.epochs 10
where keys and values are space-separated and sub-keys are dot-separated (`.`). If the configuration field is a list, the key/sub-key should be an index (e.g. `trainer.preprocessing.augmentations.0.name RotateCustom`).
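The dot-path semantics can be pictured with a short sketch. This is not luxonis-train's actual implementation, only an illustration of how dotted keys with numeric segments resolve against a nested config:

```python
# Rough sketch of how dot-separated CLI overrides map onto a nested config.
# NOT luxonis-train's real override logic; an illustration of the semantics.

def apply_override(config, dotted_key, value):
    """Set a nested config value following a dot-separated key.

    Numeric path segments index into lists, mirroring the
    `trainer.preprocessing.augmentations.0.name` style of override.
    """
    *path, last = dotted_key.split(".")
    node = config
    for part in path:
        node = node[int(part)] if part.isdigit() else node[part]
    if last.isdigit():
        node[int(last)] = value
    else:
        node[last] = value
    return config


config = {
    "trainer": {
        "batch_size": 4,
        "preprocessing": {"augmentations": [{"name": "Rotate"}]},
    }
}
apply_override(config, "trainer.batch_size", 8)
apply_override(config, "trainer.preprocessing.augmentations.0.name", "RotateCustom")
```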
Tuning
To improve training performance, you can use the `Tuner` for hyperparameter optimization. To use tuning, you have to specify a `tuner` section in the config file.
To start the tuning, run
luxonis_train tune --config config.yaml
You can see an example tuning configuration here.
Exporting
We support export to ONNX and to the DepthAI `.blob` format used by OAK cameras. By default, models are exported to ONNX.
To use the exporter, you have to specify the exporter section in the config file.
Once you have the config file ready you can export the model using
luxonis_train export --config config.yaml
You can see an example export configuration here.
Customizations
We provide a registry interface through which you can create new nodes, losses, metrics, visualizers, callbacks, optimizers, and schedulers.
Registered components can then be referenced in the config file. Custom components need to inherit from their respective base classes:
- Node - BaseNode
- Loss - BaseLoss
- Metric - BaseMetric
- Visualizer - BaseVisualizer
- Callback - Callback from lightning.pytorch.callbacks
- Optimizer - Optimizer from torch.optim
- Scheduler - LRScheduler from torch.optim.lr_scheduler
Here is an example of how to create custom components:
from torch.optim import Optimizer
from luxonis_train.utils.registry import OPTIMIZERS
from luxonis_train.attached_modules.losses import BaseLoss
@OPTIMIZERS.register_module()
class CustomOptimizer(Optimizer):
...
# Subclasses of BaseNode, BaseLoss, BaseMetric
# and BaseVisualizer are registered automatically.
class CustomLoss(BaseLoss):
# This class is automatically registered under `CustomLoss` name.
def __init__(self, k_steps: int, **kwargs):
super().__init__(**kwargs)
...
And then in the config you reference this `CustomOptimizer` and `CustomLoss` by their names:
losses:
- name: CustomLoss
params: # additional parameters
k_steps: 12
For more information on how to define custom components, consult the respective in-source documentation.
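The register-then-reference-by-name pattern described above can be pictured with a minimal stand-in. This generic sketch only mirrors the idea; luxonis-train's real registries live in `luxonis_train.utils.registry`:

```python
# Minimal registry sketch illustrating the register-then-reference-by-name
# pattern. A generic stand-in, not luxonis-train's actual registry class.

class Registry:
    def __init__(self):
        self._modules = {}

    def register_module(self, name=None):
        """Decorator that stores a class under its name (or a given name)."""
        def decorator(cls):
            self._modules[name or cls.__name__] = cls
            return cls
        return decorator

    def get(self, name):
        return self._modules[name]


LOSSES = Registry()

@LOSSES.register_module()
class CustomLoss:
    def __init__(self, k_steps):
        self.k_steps = k_steps


# A config entry with `name: CustomLoss` and `params: {k_steps: 12}`
# then resolves to a lookup plus instantiation:
loss = LOSSES.get("CustomLoss")(k_steps=12)
```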
Credentials
Local use is supported by default. In addition, we also integrate some cloud services which can be primarily used for logging and storing. When these are used, you need to load environment variables to set up the correct credentials.
You have these options for setting up the environment variables:
- Using standard environment variables
- Specifying the variables in a `.env` file. If a variable is present both in the environment and in the `.env` file, the exported environment variable takes precedence.
- Specifying the variables in the ENVIRON section of the config file. Note that this is not a recommended way. Variables defined in the config take precedence over environment and `.env` variables.
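The precedence described above (config ENVIRON section over exported environment variables over `.env` file) amounts to a simple layered merge. A sketch of that rule, not luxonis-train's actual loader:

```python
# Illustrative sketch of the credential precedence described above:
# config ENVIRON section > exported environment variables > .env file.
# Not luxonis-train's actual loading logic.

def resolve_credentials(dotenv_vars, environ_vars, config_environ):
    resolved = {}
    # Apply lowest precedence first; later updates overwrite earlier ones.
    resolved.update(dotenv_vars)
    resolved.update(environ_vars)
    resolved.update(config_environ)
    return resolved


creds = resolve_credentials(
    dotenv_vars={"AWS_ACCESS_KEY_ID": "from-dotenv",
                 "AWS_S3_ENDPOINT_URL": "from-dotenv"},
    environ_vars={"AWS_ACCESS_KEY_ID": "from-env"},
    config_environ={"AWS_S3_ENDPOINT_URL": "from-config"},
)
```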
S3
If you are working with a LuxonisDataset that is hosted on S3, you need to specify these env variables:
AWS_ACCESS_KEY_ID=**********
AWS_SECRET_ACCESS_KEY=**********
AWS_S3_ENDPOINT_URL=**********
MLflow
If you want to use MLflow for logging and storing artifacts, you also need to specify MLflow-related env variables like this:
MLFLOW_S3_BUCKET=**********
MLFLOW_S3_ENDPOINT_URL=**********
MLFLOW_TRACKING_URI=**********
WandB
If you are using WandB for logging, you have to sign in first in your environment.
POSTGRES
There is an option to use remote storage for tuning. We use PostgreSQL, and to connect to the database you need to specify the following env variables:
POSTGRES_USER=**********
POSTGRES_PASSWORD=**********
POSTGRES_HOST=**********
POSTGRES_PORT=**********
POSTGRES_DB=**********
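Before launching a run that depends on any of these services, a quick sanity check for unset variables can save a failed job. This is a generic helper, not part of luxonis-train:

```python
import os

# Generic helper (not part of luxonis-train) that reports which of the
# environment variables required for a service are missing or empty.

POSTGRES_VARS = [
    "POSTGRES_USER",
    "POSTGRES_PASSWORD",
    "POSTGRES_HOST",
    "POSTGRES_PORT",
    "POSTGRES_DB",
]

def missing_env_vars(required):
    """Return the names from `required` that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


missing = missing_env_vars(POSTGRES_VARS)
if missing:
    print("Missing credentials:", ", ".join(missing))
```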
Contributing
If you want to contribute to the development, install the dev version of the package:
pip install luxonis-train[dev]
Consult the Contribution guide for further instructions.