
fulmo

Template to start your deep learning project based on PyTorchLightning for rapid prototyping.

Why Lightning + Hydra + Albumentations?

  • PyTorch Lightning provides great abstractions for well-structured ML code and advanced features such as checkpointing, gradient accumulation, distributed training, etc.
  • Hydra provides a convenient way to manage experiment configurations, plus advanced features like overriding any config parameter from the command line, scheduling execution of many runs, etc.
  • Albumentations (optional) provides many image augmentations. It supports all common computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation.
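Because augmentations live in config files, an Albumentations pipeline can be declared rather than coded. A minimal sketch of what such a config might look like, using Hydra's `_target_` instantiation convention (the file path and exact keys here are illustrative, not the template's actual schema):

```yaml
# configs/augmentation/basic.yaml (hypothetical path)
# Each entry is instantiated by Hydra via `_target_`.
train:
  _target_: albumentations.Compose
  transforms:
    - _target_: albumentations.HorizontalFlip
      p: 0.5
    - _target_: albumentations.RandomBrightnessContrast
      p: 0.2
    - _target_: albumentations.Normalize
```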

Features

Pipelines based on hydra-core configs and PyTorch Lightning modules

  • Predefined folder structure. Modularity: all abstractions are split into separate submodules
  • Rapid Experimentation. The pipeline is automated with config files and Hydra's command-line superpowers
  • Little Boilerplate. The PyTorch Lightning training pipeline is automated with little boilerplate, so it can be easily modified
  • Main Configuration. The main config file specifies the default training configuration
  • Experiment Configurations. Stored in a separate folder, they can be composed out of smaller configs, override chosen parameters, or define everything from scratch
  • Experiment Tracking. Many logging frameworks can be easily integrated
  • Logs. All logs (checkpoints, data from loggers, chosen hparams, etc.) are stored in a convenient folder structure imposed by Hydra
  • Augmentations with albumentations, described in a YAML config
  • Support for timm models, pytorch-optimizer, and TorchMetrics
  • Exponential Moving Average for more stable training, and Stochastic Weight Averaging for better generalization and overall performance

Project structure

The directory structure of a new project looks like this:

├── src
│   ├── fulmo
│   │   ├── callbacks               <- PyTorch Lightning callbacks
│   │   ├── core                    <- PyTorch Lightning models
│   │   ├── datasets                <- PyTorch datasets
│   │   ├── losses                  <- PyTorch losses
│   │   ├── metrics                 <- PyTorch metrics  
│   │   ├── models                  <- PyTorch model architectures
│   │   ├── optimizers              <- PyTorch optimizers
│   │   ├── readers                 <- Data readers
│   │   ├── samples                 <- PyTorch samplers
│   │   ├── schedulers              <- PyTorch schedulers
│   │   └── utils
├── tests
│   ├── test_fulmo                  <- Tests
│
├── .bumpversion.cfg
├── .darglint
├── .gitignore
├── .pre-commit-config.yaml <- Configuration of hooks for automatic code formatting
├── CHANGELOG.md
├── mypy.ini
├── noxfile.py
├── poetry.lock             <- Locked versions of Python dependencies
├── pyproject.toml          <- Project metadata and dependency specification
├── README.md
└── tasks.py

Workflow

  1. Write your PyTorch model
  2. Write your PyTorch Lightning datamodule
  3. Write your experiment config, containing paths to your model and datamodule
  4. Run training with chosen experiment config:
python train.py +experiment=experiment_name
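Steps 1–3 typically come together in a single experiment config. A sketch of what such a file might look like (all group names, paths, and parameters here are illustrative assumptions, not the template's actual schema):

```yaml
# configs/experiment/experiment_name.yaml (illustrative)
# @package _global_
defaults:
  - override /model: my_model        # hypothetical model config
  - override /datamodule: my_data    # hypothetical datamodule config

trainer:
  max_epochs: 20
  gradient_clip_val: 0.5

model:
  lr: 1e-3
```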

Experiment Tracking

PyTorch Lightning provides built-in loggers for Weights & Biases, Neptune, Comet, MLflow, TensorBoard, and CSV. To use one of them, simply add its config to configs/logger and run:

python train.py logger=logger_name
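For example, a TensorBoard logger config could look roughly like this (the `_target_` class is PyTorch Lightning's `TensorBoardLogger`; the file location and chosen parameter values are assumptions):

```yaml
# configs/logger/tensorboard.yaml (assumed location)
tensorboard:
  _target_: pytorch_lightning.loggers.TensorBoardLogger
  save_dir: "tensorboard/"
  name: null
  default_hp_metric: true
```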

Quickstart

First, install dependencies
# via pip
pip install fulmo
# or via poetry
poetry add fulmo
Second, create your project

See examples folder.

Next, you can train a model with the default configuration, without logging
python train.py
Or train a model with a chosen experiment config
python train.py +experiment=experiment_name
Resume from a checkpoint
# checkpoint can be either path or URL
# path should be either absolute or prefixed with `${work_dir}/`
# use single quotes '' around the argument, otherwise the $ symbol breaks it
python train.py '+trainer.resume_from_checkpoint=${work_dir}/logs/runs/2021-06-23/16-50-49/checkpoints/last.ckpt'

TODO


Credits
