Structured Pruning Adapters for PyTorch
```bash
pip install structured-pruning-adapters
```
A happy marriage 👰♀️🤵♂️
Pruning is an effective method for reducing the size of neural networks. Besides lowering the parameter count, it can accelerate inference as well. While CPUs can handle unstructured sparse weights reasonably well, GPUs need structure in the sparsity pattern to realise a speedup. Structured pruning, i.e., removing whole network channels [paper] or blocks of weights [paper], therefore tends to yield actual accelerations.
+
Adapters [paper] have emerged as an alternative to fine-tuning: the prior network weights are left unaltered, and a new set of adapter weights is added to the network to learn a specific task. Some types of adapters add new layers; others are fusible with existing weights and incur no run-time overhead. When a single base model is deployed alongside many specialised models, these structures can save a great number of parameters compared with full fine-tuning.
=
Structured Pruning Adapters are the offspring of Structured Pruning and Fusible Adapters (a conceptual sketch follows the list below). They can be used for transfer learning and offer:
- ✅ Extremely few learned parameters (binary pruning mask + masked adapter weights) 👌
- ✅ Accelerated network inference 🏎💨
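To make the idea concrete, here is a minimal conceptual sketch of a masked low-rank adapter being fused into a frozen dense weight. This is plain PyTorch for illustration only, not the library's internals; the variable names, shapes, and random masks are made up:

```python
import torch

# Frozen pre-trained weight of a 256 -> 512 linear layer
W = torch.randn(512, 256)

# Low-rank adapter factors (rank 32): the only learned dense parameters
B = torch.randn(512, 32) * 0.01
A = torch.randn(32, 256) * 0.01

# Binary channel masks, learned via some structured pruning method
in_mask = torch.rand(256) > 0.5
out_mask = torch.rand(512) > 0.5

# Masked low-rank update, fused into the base weight once, offline
delta = (B @ A) * out_mask[:, None].float() * in_mask[None, :].float()
W_fused = W + delta

# Because the masks are structured (whole channels), pruned rows and
# columns can be physically removed for accelerated inference
W_pruned = W_fused[out_mask][:, in_mask]
```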
How to use this library
Use in conjunction with any Structured Pruning technique.
1. Install the library:

   ```bash
   pip install structured-pruning-adapters
   ```
2. Replace Linear and Conv layers with an SP Adapter:

   ```python
   from torch.nn import Linear
   from sp_adapters import SPLoRA

   reg_lin = Linear(256, 512, bias=True)
   spa_lin = SPLoRA(reg_lin, rank=32)

   # Or replace all applicable layers in a network
   spa_net = SPLoRA(reg_net, rank=32)
   ```
3. Get pruned SP Adapter weights:

   ```python
   import torch

   # Specify mask
   spa_lin.configure_parameter_read(
       adapter_weights_only=True,
       in_features_mask=torch.tensor([1, 0, ..., 1], dtype=torch.bool),
       out_features_mask=torch.tensor([0, 1, ..., 1], dtype=torch.bool),
   )
   # 👆 masks are learned via your choice of Structured Pruning
   #    method; one possible criterion is sketched below

   # Read parameters as usual
   spa_lin.parameters()
   spa_lin.named_parameters()
   spa_lin.state_dict()
   ```
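For illustration, one simple way to obtain such masks is magnitude-based channel pruning. This is just one possible criterion (the snippet below is not part of the library), and any structured pruning method can be substituted:

```python
import torch

# Hypothetical mask derivation via L1-norm channel magnitudes on the
# original layer's weight; keep the half of the channels with the
# largest norms.
with torch.no_grad():
    w = reg_lin.weight                      # shape: (out_features, in_features)
    in_scores = w.abs().sum(dim=0)          # importance per input channel
    out_scores = w.abs().sum(dim=1)         # importance per output channel
    in_features_mask = in_scores >= in_scores.median()
    out_features_mask = out_scores >= out_scores.median()
```

These boolean tensors can then be passed to `configure_parameter_read` as shown above.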
Demo
See also notebooks/demo.ipynb for a hands-on demo.
Structured Pruning Low-Rank Adapter (SPLoRA) for Channel Pruning
```python
from sp_adapters import SPLoRA
```
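As a rough illustration of the parameter economy (illustrative arithmetic only, ignoring biases and the pruning mask): a rank-32 SPLoRA on a 256 → 512 linear layer learns far fewer dense parameters than full fine-tuning:

```python
in_f, out_f, rank = 256, 512, 32
full_ft = in_f * out_f            # full fine-tuning: 131072 weights
adapter = rank * (in_f + out_f)   # low-rank adapter: 24576 weights
print(f"adapter / full = {adapter / full_ft:.2%}")  # 18.75%, before pruning shrinks it further
```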
Structured Pruning Low-rank PHM Adapter (SPLoPA) for Block Pruning
```python
from sp_adapters import SPLoPA
```
Uses a variation on the Parameterized Hypercomplex Multiplication (PHM) layer [paper] with shared low-rank prototypes for block-sparse adaptation.
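A rough sketch of this construction (illustrative only, not the library's implementation; the shapes and names below are assumptions): the adapter delta is a sum of Kronecker products between per-block position weights and shared low-rank prototype blocks, which yields the block structure that block pruning can exploit.

```python
import torch

n, rank = 4, 8            # number of Kronecker terms, prototype rank
p, q = 16, 16             # prototype block size
out_f, in_f = 64, 64      # adapted weight shape (multiples of p and q)

# Shared low-rank prototypes: each is a product u_i @ v_i of size (p, q)
u = torch.randn(n, p, rank)
v = torch.randn(n, rank, q)
prototypes = u @ v                              # (n, p, q)

# Per-block position weights; block pruning zeroes entries here,
# removing whole (p, q) blocks from the resulting delta
pos = torch.randn(n, out_f // p, in_f // q)     # (n, 4, 4)

delta = sum(torch.kron(pos[i], prototypes[i]) for i in range(n))
assert delta.shape == (out_f, in_f)
```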
Citation
If you enjoy this work, please consider citing it:

```bibtex
@article{hedegaard2022cotrans,
  title   = {Structured Pruning Adapters},
  author  = {Lukas Hedegaard and Aman Alok and Juby Jose and Alexandros Iosifidis},
  journal = {preprint, arXiv:TBD},
  year    = {2022}
}
```
Acknowledgement
This work was done in conjunction with a research exchange at Cactus Communications 🌵.
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR) 🇪🇺.