A package for gradient-free neural network training using LFP
Gradient-free Neural Network Training based on Layer-wise Relevance Propagation (LRP)
:octopus: Flexibility
LFP is highly flexible w.r.t. the models and objective functions it can be used with, as it does not require differentiability. Consequently, it can be applied to non-differentiable architectures (e.g., Spiking Neural Networks) without further adaptation, and it naturally handles discrete objectives, such as feedback obtained directly from humans.
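To illustrate what such a discrete objective looks like, here is a hypothetical example (not part of the package): a reward that returns +1 for a correct prediction and -1 otherwise. This signal is piecewise constant, so it has no useful gradient, yet LFP only needs the reward values themselves at the output.

```python
# Hypothetical discrete reward: +1 for a correct prediction, -1 otherwise.
# Piecewise-constant signals like this have zero gradient almost everywhere,
# so gradient descent cannot use them directly, but LFP can decompose the
# reward values themselves.
def discrete_reward(predictions, labels):
    return [1.0 if p == y else -1.0 for p, y in zip(predictions, labels)]

rewards = discrete_reward([0, 2, 1], [0, 1, 1])
# rewards == [1.0, -1.0, 1.0]
```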
:gear: Efficiency
LFP applies an implicit weight-scaling of updates and only propagates feedback through nonzero connections and activations. This leads to sparse updates and a sparse final model, without meaningfully sacrificing performance or convergence speed compared to gradient descent. The resulting models can be pruned more easily, since they represent information more efficiently.
:page_with_curl: Paper
For more details, refer to our paper.
If you use this package in your research, please cite
@article{weber2025efficient,
author={Leander Weber and Jim Berend and Moritz Weckbecker and Alexander Binder and Thomas Wiegand and Wojciech Samek and Sebastian Lapuschkin},
title={Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation},
journal={CoRR},
volume={abs/2308.12053},
year={2025},
url={https://arxiv.org/abs/2308.12053},
eprinttype={arXiv},
eprint={2308.12053},
archivePrefix={arXiv},
}
:scroll: License
This project is licensed under the BSD 3-Clause License, since LRP (which LFP is based on) is a patented technology that can only be used free of charge for personal and scientific purposes.
:rocket: Getting Started
:fire: Installation
Using PyPI (Recommended)
LFP is available from PyPI, and we recommend this installation if you simply want to use LFP or run any of the notebooks or experiments in this repository.
pip install lfprop
If you would like to check out the minimal_example.ipynb notebook, first clone the repository, and then install the necessary dependencies:
git clone https://github.com/leanderweber/layerwise-feedback-propagation
cd layerwise-feedback-propagation
pip install lfprop[quickstart]
Similarly, if you would like to run the scripts and notebooks for reproducing the paper experiments, you can run
git clone https://github.com/leanderweber/layerwise-feedback-propagation
cd layerwise-feedback-propagation
pip install lfprop[full]
instead to install the full dependencies.
Using Poetry
If you would like to contribute to the repository, or extend the code in some way, we recommend the installation via Poetry:
git clone https://github.com/leanderweber/layerwise-feedback-propagation
cd layerwise-feedback-propagation
poetry install
This requires poetry-core>=2.0.0,<3.0.0.
:brain: How it works
Our implementation of LFP is based on the LRP implementations of the zennit and LXT libraries. Both of these libraries are based on PyTorch and modify the backward pass to return relevances instead of gradients.
lfprop extends these libraries to return relevances not only w.r.t. activations, but also w.r.t. parameters. Similar to LXT and zennit, this requires registering a composite to modify the backward pass.
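To make the underlying mechanism concrete, here is a minimal, dependency-free sketch of epsilon-LRP for a single linear layer (a simplified illustration, not the library's actual autograd-based implementation): the relevance R_k at each output is redistributed to the inputs in proportion to each input's contribution a_j * w_jk to the pre-activation z_k. With no bias and a small epsilon, the total relevance is approximately conserved.

```python
# Simplified epsilon-LRP for one linear layer (illustration only; the
# actual lfprop/zennit/LXT implementations hook into PyTorch's backward pass).
def eps_lrp_linear(a, w, relevance_out, eps=1e-6):
    """Redistribute output relevances to the inputs of a linear layer.

    a: input activations, length J
    w: weights, w[j][k] connects input j to output k
    relevance_out: relevances at the K outputs
    """
    K = len(relevance_out)
    # Pre-activations z_k = sum_j a_j * w_jk
    z = [sum(a[j] * w[j][k] for j in range(len(a))) for k in range(K)]
    # Stabilize the denominator with eps * sign(z_k)
    denom = [zk + eps * (1.0 if zk >= 0 else -1.0) for zk in z]
    # R_j = sum_k (a_j * w_jk / denom_k) * R_k
    return [
        sum(a[j] * w[j][k] / denom[k] * relevance_out[k] for k in range(K))
        for j in range(len(a))
    ]

a = [1.0, 2.0]
w = [[0.5, -1.0], [0.25, 1.0]]
r_in = eps_lrp_linear(a, w, [1.0, 2.0])
# Conservation: sum(r_in) is approximately sum of the output relevances (3.0)
```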
LXT Backend
from lfprop.propagation import propagator_lxt
propagation_composite = propagator_lxt.LFPEpsilonComposite()
Zennit Backend
from lfprop.propagation import propagator_zennit
propagation_composite = propagator_zennit.LFPEpsilonComposite()
SNNs (Also Zennit Backend)
from lfprop.propagation import propagator_snn
propagation_composite = propagator_snn.LRPRewardPropagator()
Instead of an initial relevance, LFP requires an initial reward at the output, which is then decomposed throughout the model. We implement several reward functions with a signature similar to the loss functions in torch.nn (e.g., torch.nn.CrossEntropyLoss).
from lfprop.rewards import reward_functions as rewards
reward_func = rewards.SoftmaxLossReward(device)
To apply the modified backward pass, the composite simply needs to be registered.
After the backward pass is finished, the computed LFP-feedback can then be accessed via the (newly added) .feedback attribute of each parameter.
The model can then be optimized using any torch.optim.Optimizer, by first overwriting each parameter's .grad attribute with the corresponding (negative) feedback.
This results in the following training step:
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
optimizer.zero_grad()

with propagation_composite.context(model) as modified:
    inputs = inputs.detach().requires_grad_(True)
    outputs = modified(inputs)

    # Calculate reward at the output
    reward = torch.from_numpy(
        reward_func(outputs, labels).detach().cpu().numpy()
    ).to(device)

    # Calculate LFP via the modified backward pass
    input_reward = torch.autograd.grad(
        (outputs,), (inputs,), grad_outputs=(reward,), retain_graph=False
    )[0]

# Write LFP values into the .grad attributes
for name, param in model.named_parameters():
    param.grad = -param.feedback

# Optimization step
optimizer.step()
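Note the sign convention in the step above: since torch optimizers minimize, writing the negative feedback into .grad makes the optimizer ascend the reward. A dependency-free sketch of that update (hypothetical names, plain Python in place of torch):

```python
# Plain-Python sketch of the update rule used above: an SGD-style
# minimizing step applied to grad = -feedback moves the parameter
# *along* the feedback direction, i.e., it ascends the reward.
def sgd_step(param, grad, lr):
    return param - lr * grad  # standard minimizing update

feedback = 0.4       # positive feedback for this parameter
grad = -feedback     # mirrors `param.grad = -param.feedback`
new_param = sgd_step(1.0, grad, lr=0.1)
# new_param == 1.04 > 1.0: the parameter moved in the feedback direction
```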
:mag: Examples
A simple, full example of how to train a LeNet model on MNIST can be found under minimal_example.ipynb. An example using SNNs can be found under minimal_example_spiking_nets.ipynb. Note that to run these notebooks, you need to install the necessary dependencies using lfprop[quickstart], as described under Installation.
:test_tube: Reproducing Experiments
To reproduce experiments from the paper, you first need to install the necessary dependencies with lfprop[full], as described under Installation.
Most toy data experiments can then be reproduced by simply running the corresponding notebooks under nbs/. You can find the used hyperparameters for the notebooks within the first two cells.
For reproducing the experiments that require training on more complex data and models (LFP for Non-ReLU and Pruning experiments), the training script is implemented in run_experiment.py.
Hyperparameters for these experiments can be generated using the scripts under configs/.
For reproducing the Non-SNN experiments first run
# 1. generate the config files
python configs/<experimentname>/config_generator<somesuffix>.py
# 2. run training script
python run_experiment.py --config_file=configs/<experimentname>/cluster/<selected-config-name>
For the pruning experiments, you can then run the nbs/*eval-clusterresults-pruning.ipynb notebooks using the obtained models.
For reproducing the SNN experiments run
# 1. generate the config files
python configs/spiking_neural_networks/config_generator_mnist_training.py
# 2. run training script
python run_snn_experiment.py --config_file=configs/spiking_neural_networks/cluster/<selected-config-name>
:bell: Roadmap
This is a first release of LFP, which does not yet work with all types of data or models, but we are actively working on extending the package. You can check this roadmap for an overview of the features planned for the future.
- LFP for CNNs and Fully-Connected Models
- LFP for SNNs
- LFP for Transformers
- LFP for Classification Tasks
- LFP for Non-Classification Tasks
:pencil2: Contributing
Feel free to contribute to the code, experiment with different models and datasets, and raise any suggestions or encountered problems as Issues or create a Pull Request.
For contributing, we recommend installing via Poetry, as described under Installation.
Note that we use Ruff for formatting and linting.