Synaptic OPerations (SyOPs) counter for spiking neural networks
This script is designed to compute the theoretical number of synaptic operations in spiking neural networks, including accumulate (AC) and multiply-accumulate (MAC) operations. It can also compute the number of parameters and print the per-layer computational cost of a given network. This tool is still under construction; comments, issues, contributions, and collaborations are all welcome!
Supported layers:
- Conv1d/2d/3d (including grouping)
- ConvTranspose1d/2d/3d (including grouping)
- BatchNorm1d/2d/3d, GroupNorm, InstanceNorm1d/2d/3d
- Activations (ReLU, PReLU, ELU, ReLU6, LeakyReLU, GELU)
- Linear
- Upsample
- Poolings (AvgPool1d/2d/3d, MaxPool1d/2d/3d and adaptive ones)
- IF/LIF/PLIF (spikingjelly)
Experimental support:
- RNN, LSTM, GRU (NLH layout is assumed)
- RNNCell, LSTMCell, GRUCell
- MultiheadAttention
Requirements: PyTorch >= 1.1, torchvision >= 0.3, spikingjelly <= 0.0.0.0.12
Usage
- This script doesn't take `torch.nn.functional.*` operations into account. For instance, if one has a semantic segmentation model and uses `torch.nn.functional.interpolate` to upscale features, these operations won't contribute to the overall operation count. To avoid that, one can use `torch.nn.Upsample` instead of `torch.nn.functional.interpolate`.
- `syops` launches a given model on a random tensor or a `DataLoader` and estimates the amount of computation during inference. Complicated models can have several inputs, some of them optional.
- To construct a non-trivial input one can use the `input_constructor` argument of `get_model_complexity_info`. `input_constructor` is a function that takes the input spatial resolution as a tuple and returns a dict with named input arguments of the model. This dict is then passed to the model as keyword arguments (see the sketch after this list).
- To construct a `DataLoader` input one can use the `dataLoader` argument of `get_model_complexity_info`, based on `torch.utils.data.DataLoader`. The number of computations is then estimated from the input fire rate of the spike signals.
- The `verbose` parameter allows one to get information about modules that don't contribute to the final numbers.
- The `ignore_modules` option forces `syops` to ignore the listed modules. This can be useful for research purposes. For instance, one can drop all batch normalization layers from the counting process by specifying `ignore_modules=[torch.nn.BatchNorm2d]`.
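As an illustration of the `input_constructor` mechanism, here is a minimal sketch. The model, its argument names, and the assumption that the third (DataLoader) argument can be passed as `None` when a random input is sufficient are hypothetical; only `get_model_complexity_info`, `input_constructor`, and `ignore_modules` come from the notes above.

```python
import torch
from syops import get_model_complexity_info

class TwoInputNet(torch.nn.Module):
    """Hypothetical model whose forward() expects named arguments."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, image, scale=1.0):
        return self.conv(image) * scale

def make_inputs(input_res):
    # input_res is the spatial resolution tuple, e.g. (3, 224, 224).
    # The returned dict is passed to the model as keyword arguments.
    return {'image': torch.randn(1, *input_res), 'scale': 0.5}

ops, params = get_model_complexity_info(
    TwoInputNet(), (3, 224, 224), None,  # None: no DataLoader, use a random input (assumed)
    input_constructor=make_inputs,
    as_strings=True, print_per_layer_stat=False,
    # ignore_modules=[torch.nn.BatchNorm2d],  # optionally drop modules from the count
)
```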
Install the latest version
From PyPI:
pip install syops
From this repository:
pip install --upgrade git+https://github.com/iCGY96/syops-counter
Example
```python
import torch
from spikingjelly.activation_based import surrogate, neuron, functional
from spikingjelly.activation_based.model import spiking_resnet
from syops import get_model_complexity_info

dataloader = ...

with torch.cuda.device(0):
    net = spiking_resnet.spiking_resnet18(pretrained=True, spiking_neuron=neuron.IFNode,
                                          surrogate_function=surrogate.ATan(), detach_reset=True)
    ops, params = get_model_complexity_info(net, (3, 224, 224), dataloader, as_strings=True,
                                            print_per_layer_stat=True, verbose=True)
    # ops is assumed to hold the total, AC and MAC operation counts, in that order
    print('{:<30} {:<8}'.format('Computational complexity ACs:', ops[1]))
    print('{:<30} {:<8}'.format('Computational complexity MACs:', ops[2]))
    print('{:<30} {:<8}'.format('Number of parameters: ', params))
```
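Passing a real `DataLoader` instead of a random tensor lets `syops` estimate the AC count from the actual firing rates of the spiking layers, as described in the usage notes above. The `functional` import is shown because spikingjelly networks typically need their neuron states reset between inference passes, e.g. with `functional.reset_net(net)`.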
Benchmark
| Model | Input Resolution | Params(M) | ACs(G) | MACs(G) | Energy (mJ) | Acc@1 | Acc@5 |
|---|---|---|---|---|---|---|---|
| spiking_resnet18 | 224x224 | 11.69 | 0.10 | 0.14 | 0.734 | 62.32 | 84.05 |
| sew_resnet18 | 224x224 | 11.69 | 0.50 | 2.75 | 13.10 | 63.18 | 84.53 |
| DSNN18 (AAP) | 224x224 | 11.69 | 1.69 | 0.20 | 2.44 | 63.46 | 85.14 |
| resnet18 | 224x224 | 11.69 | 0.00 | 1.82 | 8.372 | 69.76 | 89.08 |
- ACs(G) - The theoretical number of accumulate (AC) operations, in billions, driven by spike signals.
- MACs(G) - The theoretical number of multiply-accumulate (MAC) operations, in billions, driven by non-spike signals.
- Energy(mJ) - Estimated energy consumption for 45 nm technology, where one AC costs 0.9 pJ and one MAC costs 4.6 pJ (a small sketch of this calculation follows this list).
- Acc@1 - ImageNet single-crop top-1 accuracy on validation images of the same size as used during training.
- Acc@5 - ImageNet single-crop top-5 accuracy on validation images of the same size as used during training.
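The Energy column can be reproduced directly from the AC and MAC counts with the per-operation costs quoted above; a small sketch (the helper name is illustrative, not part of syops):

```python
def estimate_energy_mj(acs_g, macs_g, ac_pj=0.9, mac_pj=4.6):
    # Counts are in units of G (1e9 operations) and costs in pJ (1e-12 J),
    # so the product comes out directly in mJ: 1e9 * 1e-12 J = 1e-3 J.
    return acs_g * ac_pj + macs_g * mac_pj

print(estimate_energy_mj(0.10, 0.14))  # spiking_resnet18 row: ~0.734 mJ
print(estimate_energy_mj(0.00, 1.82))  # resnet18 row: ~8.372 mJ
```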
Acknowledgements
This repository is developed based on ptflops.
Download files
Source Distribution: syops-0.0.5.tar.gz
Built Distribution: syops-0.0.5-py3-none-any.whl
File details
Details for the file syops-0.0.5.tar.gz.
File metadata
- Download URL: syops-0.0.5.tar.gz
- Upload date:
- Size: 12.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 00b5530d0e009a35df0668b760d390168b588e469a5a1032aa718058bbbd2b4b |
| MD5 | 5bf8e3f31eba2825b735eee691b7c699 |
| BLAKE2b-256 | a61358a87929cfd773af41befd3e2eca327b14d80b1e78730bc4c505b9e640c3 |
File details
Details for the file syops-0.0.5-py3-none-any.whl.
File metadata
- Download URL: syops-0.0.5-py3-none-any.whl
- Upload date:
- Size: 11.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d3d1e0a2bf303907cf2983adbc73f87948f6c39ec4a6e470fa630550473e40d2 |
| MD5 | bef5058c8a98a864630b91988e51aae3 |
| BLAKE2b-256 | 30d7ff58b08e299db0df20987dc77ff14e1b4dfc1fe24b754abff3b5e21f3469 |