
A lightweight library to help with training neural networks in PyTorch.

Project description


TL;DR

Ignite is a high-level library to help with training neural networks in PyTorch:

  • ignite helps you write compact but full-featured training loops in a few lines of code
  • you get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate

Below we show a side-by-side comparison of using pure PyTorch and using Ignite to create a training loop that trains and validates your model, with occasional checkpointing:

[Image: side-by-side comparison of a training loop written in pure PyTorch vs. with Ignite]
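For reference, the Ignite side of that comparison looks roughly like the following minimal sketch, assuming a model, optimizer, criterion, train_loader and val_loader are already defined:

from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from ignite.handlers import ModelCheckpoint

# build a trainer and an evaluator around the user-defined model and loss
trainer = create_supervised_trainer(model, optimizer, criterion)
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy(), "loss": Loss(criterion)})

# run validation at the end of every epoch
@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    evaluator.run(val_loader)
    print(engine.state.epoch, evaluator.state.metrics)

# keep the last 2 checkpoints; require_empty=False lets the directory be reused
checkpoint = ModelCheckpoint("checkpoints", "ckpt", n_saved=2, create_dir=True, require_empty=False)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint, {"model": model})

trainer.run(train_loader, max_epochs=10)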

As you can see, the code is more concise and readable with Ignite. Furthermore, adding metrics or features like early stopping is a breeze in Ignite, but can rapidly increase the complexity of your code when "rolling your own" training loop.


Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

From source:

pip install git+https://github.com/pytorch/ignite

Nightly releases

From pip:

pip install --pre pytorch-ignite

From conda (this installs the nightly release of pytorch as a dependency, instead of the stable version):

conda install ignite -c pytorch-nightly

Why Ignite?

Ignite's high level of abstraction assumes less about the type of network (or networks) you are training; instead, the user defines the closure to be run in the training and validation loop. This level of abstraction allows for a great deal more flexibility, such as co-training multiple models (e.g. GANs) and computing/tracking multiple losses and metrics in your training loop.
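For example, the training closure is just a function of the engine and the current batch. A minimal sketch of a custom update step (model, optimizer and criterion are assumed to be defined elsewhere):

from ignite.engine import Engine

def train_step(engine, batch):
    # any logic fits here: several models, several losses, gradient tricks, ...
    model.train()
    optimizer.zero_grad()
    x, y = batch
    y_pred = model(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

Because Ignite only calls this closure once per batch, co-training several models in one loop is just a matter of writing a richer train_step.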

Power of Events & Handlers

The cool thing about handlers is that they offer unparalleled flexibility (compared to, say, callbacks). A handler can be any function: a lambda, a plain function, a class method, etc. The first argument can optionally be the engine, but it is not required. Thus, you do not need to inherit from an interface and override its abstract methods, which could unnecessarily bulk up your code and increase its complexity.

Execute any number of functions whenever you wish

Examples
trainer.add_event_handler(Events.STARTED, lambda _: print("Start training"))

# attach handler with args, kwargs
mydata = [1, 2, 3, 4]
logger = ...

def on_training_ended(data):
    print("Training ended. mydata={}".format(data))
    # handlers can use variables from an outer scope
    logger.info("Training ended")


trainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)
# call any number of functions on a single event
trainer.add_event_handler(Events.COMPLETED, lambda engine: print("OK"))

@trainer.on(Events.ITERATION_COMPLETED)
def log_something(engine):
    print(engine.state.output)

Built-in events filtering

Examples
# run the validation every 5 epochs
@trainer.on(Events.EPOCH_COMPLETED(every=5))
def run_validation():
    ...  # run validation

# change some training variable once, on the 20th epoch
@trainer.on(Events.EPOCH_STARTED(once=20))
def change_training_variable():
    ...

# an event filter receives the engine and the event number and returns a bool;
# for example, fire only during the first 10 iterations
def first_x_iters(engine, event):
    return event <= 10

# trigger a handler with a custom, user-defined frequency
@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))
def log_gradients():
    ...

Stack events to share some actions

Examples

Events can be stacked together to enable multiple calls:

@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))
def run_validation():
    ...

Custom events to go beyond standard events

Examples

Custom events related to backward and optimizer step calls:

from ignite.engine import Engine
from ignite.engine.events import EventEnum

class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'

def update(engine, batch):
    # ...
    loss = criterion(y_pred, y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)
    # ...

trainer = Engine(update)
trainer.register_events(*BackpropEvents)

@trainer.on(BackpropEvents.BACKWARD_STARTED)
def function_before_backprop(engine):
    ...

Out-of-the-box metrics

from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
F1_per_class = (precision * recall * 2 / (precision + recall))  # element-wise metric arithmetic
F1_mean = F1_per_class.mean()  # torch mean method applied to the per-class tensor
F1_mean.attach(engine, "F1")   # here `engine` is assumed to be an evaluator
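A composed metric attaches and computes like any built-in one. As a minimal usage sketch, assuming the engine above is an evaluator and a val_loader exists:

state = engine.run(val_loader)
print(state.metrics["F1"])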

Documentation

Additional Materials

Structure

  • ignite: Core of the library, contains an engine for training and evaluating, all of the classic machine learning metrics and a variety of handlers to ease the pain of training and validation of neural networks!
  • ignite.contrib: The contrib directory contains additional modules that may require extra dependencies. These range from a TBPTT engine and various optimisation parameter schedulers to logging handlers and a metrics module containing many regression metrics (ignite.contrib.metrics.regression)!

The code in ignite.contrib is not as fully maintained as the core part of the library.
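To illustrate the layout, here is a sample of imports from each part (a sketch only; module paths reflect the 0.4-era layout and the chosen names are just examples):

from ignite.engine import Engine, Events, create_supervised_trainer  # core engine
from ignite.metrics import Accuracy, Precision, Recall               # classic ML metrics
from ignite.handlers import EarlyStopping, ModelCheckpoint, Timer    # training handlers
from ignite.contrib.handlers import ProgressBar                      # extra-dependency modules
from ignite.contrib.metrics.regression import MedianAbsoluteError    # regression metrics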

Examples

We provide several examples ported from pytorch/examples that use Ignite to show how it helps you write compact and full-featured training loops in a few lines of code:

MNIST Example

Basic neural network training on MNIST dataset with/without ignite.contrib module:

Tutorials

Distributed CIFAR10 Example

Training a small variant of ResNet on CIFAR10 in various configurations: 1) single GPU, 2) single node, multiple GPUs, 3) multiple nodes and multiple GPUs.

Other Examples

Reproducible Training Examples

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:


Communication

User feedback

We have created a form for "user feedback". We appreciate any type of feedback, and this is how we would like to see our community:

  • If you like the project and want to say thanks, this is the right place.
  • If you do not like something, please share it with us, and we can see how to improve it.

Thank you !

Contributing

Please see the contribution guidelines for more information.

As always, PRs are welcome :)

Projects using Ignite

Research papers

Blog articles, tutorials, books

Toolkits

Others

See other projects at "Used by"

If your project implements a paper, represents other use-cases not covered in our official tutorials, is code for a Kaggle competition, or simply presents interesting results and uses Ignite, we would like to add it to this list, so please send a PR with a brief description of the project.

About the team

The project is currently maintained by a team of volunteers. See the "About us" page for a list of core contributors.

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytorch-ignite-0.4.0.dev20200526.tar.gz (103.8 kB)

Uploaded: Source

Built Distribution

pytorch_ignite-0.4.0.dev20200526-py2.py3-none-any.whl (144.8 kB)

Uploaded: Python 2, Python 3

File details

Details for the file pytorch-ignite-0.4.0.dev20200526.tar.gz.

File metadata

  • Download URL: pytorch-ignite-0.4.0.dev20200526.tar.gz
  • Upload date:
  • Size: 103.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.7.7

File hashes

Hashes for pytorch-ignite-0.4.0.dev20200526.tar.gz
  • SHA256: 0c2bc6417c6204fe53e0e688fc3c6aee4a2716d8782a0d5432add9a9fd4b2388
  • MD5: 6c2f27509cbd0ce55bbc655210176938
  • BLAKE2b-256: a462b12e8d0a668dcca8488b2b96218a41b6076f4df150bc3cf589e21f71b3eb

See more details on using hashes here.

File details

Details for the file pytorch_ignite-0.4.0.dev20200526-py2.py3-none-any.whl.

File metadata

  • Download URL: pytorch_ignite-0.4.0.dev20200526-py2.py3-none-any.whl
  • Upload date:
  • Size: 144.8 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.7.7

File hashes

Hashes for pytorch_ignite-0.4.0.dev20200526-py2.py3-none-any.whl
  • SHA256: bb7d5382c8ea18159cefc6ee10274762b97d5b4a2f9b2443988c6464ec28018f
  • MD5: b5d041ac335f6c124b4b7ca882c998a9
  • BLAKE2b-256: 73f7cfd26f8a59fb13e5137a99a43849a62c97b36bfa5f013cff283f70456c28

See more details on using hashes here.
