
Lava DL

lava-dl is a library of deep learning tools within Lava that supports offline training, online training, and inference methods for various Deep Event-Based Networks.

There are two main strategies for training Deep Event-Based Networks: direct training and ANN-to-SNN conversion.

Direct training exploits the precise timing of events. It is highly accurate and yields efficient networks, but it demands substantial training time and resources.

On the other hand, ANN-to-SNN conversion is especially suitable for rate-coded SNNs, where the fast training of ANNs can be leveraged. These converted SNNs, however, typically incur higher inference latency than directly trained SNNs.

Lava-DL provides an improved version of SLAYER for direct training of Deep Event-Based Networks, and a new accelerated ANN-SNN training approach called Bootstrap that mitigates the high-latency issue of conventional ANN-SNN conversion methods.

The lava-dl training libraries are independent of the core lava library because Lava Processes cannot yet be trained directly. Instead, lava-dl is first used to train the model, which can then be converted to a network of Lava Processes with the netx library via a platform-independent hdf5 network description.

The library presently consists of

  1. lava.lib.dl.slayer for natively training Deep Event-Based Networks.
  2. lava.lib.dl.bootstrap for training rate coded SNNs.
  3. lava.lib.dl.netx for deployment of event-based deep neural networks on traditional as well as neuromorphic backends.

Lava-DL also has the following external, fully compatible plugin:

  1. lava.lib.dl.decolle for training Deep SNNs with local learning and surrogate gradients. This extension is an implementation of the DECOLLE learning repo made fully compatible with the lava-dl training tools. Refer here for a detailed description of the extension, examples, and tutorials.

    J. Kaiser, H. Mostafa, and E. Neftci, "Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)," Frontiers in Neuroscience, vol. 14, p. 424, 2020.

More tools will be added in the future.

Lava-DL Workflow

Typical Lava-DL workflow:

  • Training: using lava.lib.dl.{slayer/bootstrap/decolle}, which produces an hdf5 network description. Training usually follows an iterative cycle of architecture design, hyperparameter tuning, and backpropagation training.
  • Inference: using lava.lib.dl.netx, which generates Lava processes from the hdf5 network description of the trained network and enables inference on different backends.

Installation

Cloning Lava-DL and Running from Source

Note: The following instructions will set up a virtual environment and install lava-dl and all of its dependencies into it. Please set up git-lfs before cloning to ensure large files are pulled during the git clone.

[Linux/MacOS]

cd $HOME
git clone git@github.com:lava-nc/lava-dl.git
cd lava-dl
curl -sSL https://install.python-poetry.org | python3 -
poetry config virtualenvs.in-project true
poetry install
source .venv/bin/activate
pytest

[Windows]

# Commands using PowerShell
cd $HOME
git clone git@github.com:lava-nc/lava-dl.git
cd lava-dl
python3 -m venv .venv
.venv\Scripts\activate
curl -sSL https://install.python-poetry.org | python3 -
pip install -U pip
poetry config virtualenvs.in-project true
poetry install
pytest

You should expect the following output after running the unit tests:

$ pytest
============================= test session starts ==============================
platform linux -- Python 3.9.10, pytest-7.0.1, pluggy-1.0.0
rootdir: /home/user/lava-dl, configfile: pyproject.toml, testpaths: tests
plugins: cov-3.0.0
collected 86 items

tests/lava/lib/dl/netx/test_blocks.py ...                                [  3%]
tests/lava/lib/dl/netx/test_hdf5.py ...                                  [  6%]
tests/lava/lib/dl/slayer/neuron/test_adrf.py .......                     [ 15%]

...... pytest output ...

tests/lava/lib/dl/slayer/neuron/dynamics/test_adaptive_threshold.py .... [ 80%]
.                                                                        [ 81%]
tests/lava/lib/dl/slayer/neuron/dynamics/test_leaky_integrator.py .....  [ 87%]
tests/lava/lib/dl/slayer/neuron/dynamics/test_resonator.py .....         [ 93%]
tests/lava/lib/dl/slayer/utils/filter/test_conv_filter.py ..             [ 95%]
tests/lava/lib/dl/slayer/utils/time/test_replicate.py .                  [ 96%]
tests/lava/lib/dl/slayer/utils/time/test_shift.py ...                    [100%]

=============================== warnings summary ===============================

...... pytest output ...

src/lava/lib/dl/slayer/utils/time/__init__.py                      4      0   100%
src/lava/lib/dl/slayer/utils/time/replicate.py                     6      0   100%
src/lava/lib/dl/slayer/utils/time/shift.py                        59     16    73%   22-43, 50, 55, 75, 121, 128, 135, 139
src/lava/lib/dl/slayer/utils/utils.py                             13      8    38%   14, 35-45
--------------------------------------------------------------------------------------------
TOTAL                                                           4782   2535    47%

Required test coverage of 45.0% reached. Total coverage: 46.99%
======================= 86 passed, 3 warnings in 46.56s ========================

Note: If you see errors regarding *.npy files, or errors similar to "ValueError: Cannot load file containing pickled data when allow_pickle=False", please ensure git-lfs is installed. If you installed git-lfs after cloning the repository, fetch and pull the large files (see the snippet below) and try the tests again.
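
The recovery commands from the note above, as a shell snippet (git lfs install is only needed if git-lfs was never initialized for your user):

git lfs install      # one-time git-lfs setup, if not already done
git lfs fetch --all  # download all large files tracked by git-lfs
git lfs pull         # check the fetched files out into the working tree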

[Alternative] Installing Lava-DL via Conda

If you use the Conda package manager, you can simply install the lava-dl package via:

conda install lava-dl -c conda-forge

Alternatively, with Intel-optimized numpy and scipy:

conda create -n lava-dl python=3.9 -c intel
conda activate lava-dl
conda install -n lava-dl -c intel numpy scipy
conda install -n lava-dl -c conda-forge lava-dl --freeze-installed

[Alternative] Installing Lava-DL from Binaries

If you only need the lava-dl package in your python environment, we will publish lava-dl releases via GitHub Releases. Please download the package and install it.

Open a terminal and run:

[Windows/MacOS/Linux]

$ python3 -m venv python3_venv
$ source python3_venv/bin/activate  # on Windows: python3_venv\Scripts\activate
$ pip install -U pip
$ pip install lava-dl-0.2.0.tar.gz

Getting Started

  • End to end training tutorials
  • Deep dive training tutorials
  • Inference tutorials

lava.lib.dl.slayer

lava.lib.dl.slayer is an enhanced version of SLAYER. The most noteworthy enhancements are support for recurrent network structures and a wider variety of neuron models and synaptic connections (a complete list of features is here). Like its predecessor, this version of SLAYER is built on top of the PyTorch deep learning framework. For smooth integration with Lava, lava.lib.dl.slayer supports exporting trained models using the platform-independent hdf5 network exchange format.

In future versions, SLAYER will be fully integrated into Lava to train Lava Processes directly, which will eliminate the need to explicitly export and import trained networks.

Example Code

Import modules

import h5py
import torch

import lava.lib.dl.slayer as slayer

Network Description

# like any standard PyTorch network
class Network(torch.nn.Module):
    def __init__(self):
        ...
        self.blocks = torch.nn.ModuleList([# sequential network blocks
                slayer.block.sigma_delta.Input(sdnn_params),
                slayer.block.sigma_delta.Conv(sdnn_params,  3, 24, 3),
                slayer.block.sigma_delta.Conv(sdnn_params, 24, 36, 3),
                slayer.block.rf_iz.Conv(rf_params, 36, 64, 3, delay=True),
                slayer.block.rf_iz.Conv(sdnn_cnn_params, 64, 64, 3, delay=True),
                slayer.block.rf_iz.Flatten(),
                slayer.block.alif.Dense(alif_params, 64*40, 100, delay=True),
                slayer.block.cuba.Recurrent(cuba_params, 100, 50),
                slayer.block.cuba.KWTA(cuba_params, 50, 50, num_winners=5)
            ])

    def forward(self, x):
        for block in self.blocks:
            # forward computation is as simple as calling the blocks in a loop
            x = block(x)
        return x

    def export_hdf5(self, filename):
        # network export to hdf5 format
        h = h5py.File(filename, 'w')
        layer = h.create_group('layer')
        for i, b in enumerate(self.blocks):
            b.export_hdf5(layer.create_group(f'{i}'))

Training

net = Network()
assistant = slayer.utils.Assistant(net, error, optimizer, stats)
...
for epoch in range(epochs):
    for i, (input, ground_truth) in enumerate(train_loader):
        output = assistant.train(input, ground_truth)
        ...
    for i, (input, ground_truth) in enumerate(test_loader):
        output = assistant.test(input, ground_truth)
        ...
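
The error, optimizer, and stats objects above are elided. Below is a minimal sketch of one plausible setup, modeled on the slayer tutorials; the specific loss choice and learning rate are assumptions, not part of the original example:

# hypothetical setup for the elided objects above
error = slayer.loss.SpikeRate(true_rate=0.2, false_rate=0.03, reduction='sum')  # target spike-rate loss
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)  # any PyTorch optimizer works
stats = slayer.utils.LearningStats()  # accumulates training/testing loss and accuracy
assistant = slayer.utils.Assistant(net, error, optimizer, stats)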

Export the network

net.export_hdf5('network.net')

lava.lib.dl.bootstrap

In general, ANN-to-SNN conversion methods for rate-coded SNNs result in high network latency during inference. This is because the rate interpretation of a spiking neuron as a ReLU activation unit breaks down for short inference times. As a result, the network requires many time steps per sample to achieve adequate inference results.
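
A toy illustration (not lava-dl code) of the latency problem: with T inference time steps, an output neuron can express only T + 1 distinct firing rates, so the rate resolution is 1/T and fine-grained activations are quantized away for small T.

# toy example: achievable rate resolution vs. number of inference time steps
for T in [4, 16, 64, 256]:
    print(f'T={T:3d}: distinct rates = {T + 1}, resolution = {1 / T:.4f}')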

lava.lib.dl.bootstrap enables rapid training of rate-coded SNNs by translating them to an equivalent dynamic ANN representation, which leads to SNN performance close to that of the equivalent ANN and to low-latency inference. More details here. It also supports hybrid training with a mixed ANN-SNN network to minimize the ANN-to-SNN performance gap. The method is independent of the SNN model being used.

It has an API similar to lava.lib.dl.slayer and supports exporting trained models using the platform-independent hdf5 network exchange format.

Example Code

Import modules

import h5py
import torch

import lava.lib.dl.bootstrap as bootstrap

Network Description

# like any standard PyTorch network
class Network(torch.nn.Module):
    def __init__(self):
        ...
        self.blocks = torch.nn.ModuleList([# sequential network blocks
                bootstrap.block.cuba.Input(sdnn_params),
                bootstrap.block.cuba.Conv(sdnn_params,  3, 24, 3),
                bootstrap.block.cuba.Conv(sdnn_params, 24, 36, 3),
                bootstrap.block.cuba.Conv(rf_params, 36, 64, 3),
                bootstrap.block.cuba.Conv(sdnn_cnn_params, 64, 64, 3),
                bootstrap.block.cuba.Flatten(),
                bootstrap.block.cuba.Dense(alif_params, 64*40, 100),
                bootstrap.block.cuba.Dense(cuba_params, 100, 10),
            ])

    def forward(self, x, mode):
        ...
        for block, m in zip(self.blocks, mode):
            x = block(x, mode=m)

        return x

    def export_hdf5(self, filename):
        # network export to hdf5 format
        h = h5py.File(filename, 'w')
        layer = h.create_group('layer')
        for i, b in enumerate(self.blocks):
            b.export_hdf5(layer.create_group(f'{i}'))

Training

net = Network()
scheduler = bootstrap.routine.Scheduler()
...
for epoch in range(epochs):
    for i, (input, ground_truth) in enumerate(train_loader):
        mode = scheduler.mode(epoch, i, net.training)
        output = net.forward(input, mode)
        ...
        loss.backward()
    for i, (input, ground_truth) in enumerate(test_loader):
        mode = scheduler.mode(epoch, i, net.training)
        output = net.forward(input, mode)
        ...
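
The loss and optimizer in the loop above are elided. Below is a minimal sketch of a plausible rate-decoded training step; the time-averaging and the cross-entropy choice follow the bootstrap tutorials, but the exact names are assumptions:

import torch.nn.functional as F

optimizer = torch.optim.Adam(net.parameters(), lr=0.01)  # assumed optimizer

# inside the training loop, after output = net.forward(input, mode):
rate = torch.mean(output, dim=-1)           # average the spike train over time
loss = F.cross_entropy(rate, ground_truth)  # classification loss on firing rates
optimizer.zero_grad()
loss.backward()
optimizer.step()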

Export the network

net.export_hdf5('network.net')

lava.lib.dl.netx

For inference using Lava, lava.lib.dl.netx provides an automated API for loading SLAYER-trained models as Lava Processes, which can be run directly on a desired backend. lava.lib.dl.netx imports models saved via SLAYER using the hdf5 network exchange format. The details of the hdf5 network description specification can be found here.

Example Code

Import modules

from lava.lib.dl.netx import hdf5

Load the trained network

# Import the model as a Lava Process
net = hdf5.Network(net_config='network.net')

Attach Processes for Input-Output interaction

from lava.proc import io

# Instantiate the processes
dataloader = io.dataloader.SpikeDataloader(dataset=test_set)
output_logger = io.sink.RingBuffer(shape=net.out_layer.shape, buffer=num_steps)
gt_logger = io.sink.RingBuffer(shape=(1,), buffer=num_steps)

# Connect the dataloader to the ground-truth logger and to the network input
dataloader.ground_truth.connect(gt_logger.a_in)
dataloader.s_out.connect(net.in_layer.neuron.a_in)

# Connect network-output to the output process
net.out_layer.out.connect(output_logger.a_in)

Run the network

from lava.magma import run_configs as rcfg
from lava.magma import run_conditions as rcnd

net.run(condition=rcnd.RunSteps(total_run_time), run_cfg=rcfg.Loihi1SimCfg())
output = output_logger.data.get()
gts = gt_logger.data.get()
net.stop()
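
A hypothetical post-processing step, not part of the original example: for a classification network, the logged output spikes can be rate-decoded by summing over time and taking the argmax. The shapes below assume a single sample and a flat output layer.

import numpy as np

# output has shape (num_output_neurons, num_steps)
spike_count = output.sum(axis=-1)    # total spikes per output neuron
prediction = np.argmax(spike_count)  # winning class for a single-sample run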
