A Python library that analyzes the layer sustainability of neural networks

Layers Sustainability Analysis framework (LSA)

[Presentation] [Pytorch code] [Paper]

LSA stands for Layer Sustainability Analysis, a framework for analyzing layer vulnerability in a given neural network. LSA is a helpful toolkit for assessing deep neural networks and for extending adversarial training approaches to improve the sustainability of model layers via layer monitoring and analysis. The LSA framework identifies a list of Most Vulnerable Layers (MVL list) of a given network. The relative error is used as a comparison measure to evaluate the representation sustainability of each layer against adversarial inputs.

License: MIT

Overview

Sustainability and vulnerability have many definitions across domains. In our case, the focus is on vulnerabilities that fool deep learning models during feed-forward propagation, and in particular on analyzing the forward vulnerability effects of deep neural networks in the adversarial domain. Analyzing the vulnerabilities of deep neural networks helps us better understand how models behave under input perturbations, in order to attain more robust and sustainable models.

A fundamental mathematical concept in this sustainability analysis approach is Lipschitz continuity, which grants deeper insight into the sustainability of neural network models by approaching the proposed layer regularization (LR) term from the Lipschitz continuity perspective.

Table of Contents

  1. Requirements and Installation
  2. Getting Started
  3. Usage
  4. Citation
  5. Contribution

Requirements and Installation

:clipboard: Requirements

  • PyTorch version >=1.6.0
  • Python version >=3.6

:hammer: Installation

pip install layersSustainabilityAnalysis

Getting Started

:warning: Precautions

  • The LSA framework can be applied to any neural network architecture without restriction.
  • Set random_seed = 313 to reproduce the same training procedures. Note that some operations are non-deterministic with float tensors on GPU [discuss].
  • Also set torch.backends.cudnn.deterministic = True to obtain the same adversarial examples with a fixed random seed.
  • LSA uses hooks to capture the representation of each layer of the neural network, so you can change its probes (checker positions). Activation functions such as ReLU and ELU are the default probes.
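The reproducibility settings above can be combined into one snippet (a minimal sketch, assuming PyTorch; only random_seed = 313 and the cuDNN flag come from this README, the rest is standard seeding practice):

```python
import random

import numpy as np
import torch

# Seed every RNG the training/attack pipeline may touch.
random_seed = 313
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)

# Make cuDNN deterministic so adversarial examples are reproducible
# for a fixed seed (at some cost in speed).
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```

Even with these settings, some GPU float operations remain non-deterministic, as noted above.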

:rocket: Demos

The demo below uses a selected clean sample (selected_clean_sample), its perturbed counterpart (selected_perturbed_sample), and a comparison measure:

from LayerSustainabilityAnalysis import LayerSustainabilityAnalysis as LSA

# Wrap a pretrained model, then compare layer representations
# for a clean sample and its perturbed counterpart.
lsa = LSA(pretrained_model=model)

lst_comparison_measures = lsa.representation_comparisons(img_clean=selected_clean_sample,
                                                         img_perturbed=selected_perturbed_sample,
                                                         measure='relative-error')
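Conceptually, the 'relative-error' measure is the norm of the difference between a layer's clean and perturbed representations, normalized by the norm of the clean representation. A minimal NumPy sketch (an illustrative re-implementation, not the library's internal code):

```python
import numpy as np

def relative_error(clean_repr, perturbed_repr):
    """Relative error between a layer's clean and perturbed representations."""
    clean = np.asarray(clean_repr, dtype=float).ravel()
    perturbed = np.asarray(perturbed_repr, dtype=float).ravel()
    return np.linalg.norm(perturbed - clean) / np.linalg.norm(clean)

# Layers whose representations shift the most under perturbation are the
# most vulnerable; ranking layers by this value yields an MVL-style list.
clean = np.array([1.0, 2.0, 2.0])      # norm = 3
perturbed = np.array([1.0, 2.0, 5.0])  # differs by 3 in one coordinate
print(relative_error(clean, perturbed))  # → 1.0
```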

Usage

:white_check_mark: Neural network behavior analysis through feed-forward monitoring approach

The figure illustrates comparison measure values for the representation tensors of each layer when a trained model is fed clean samples and their adversarially or statistically perturbed counterparts. The fluctuation patterns of the comparison measure values demonstrate how layer behavior differs between clean and perturbed input. Across different model architectures, adversarial perturbations are more potent and yield higher comparison measure values than statistical ones; indeed, the literature shows that adversarial attacks are near worst-case perturbations. Moreover, the relative error of PGD-based adversarial attacks is much higher than that of FGSM attacks in all experiments. Among the statistical perturbations (noise), salt and Gaussian noise also yield much higher relative error values than the others.
Note that some layers are more vulnerable than others. In other words, some layers can sustain disruptions and focus on vital features, while others cannot. Each layer in the figure corresponds to one of the learnable convolutional or fully connected layers.

[To be completed ...]


:white_check_mark: Adversarial training using layer sustainability analysis

One incentive for introducing regularization terms into the loss function of a deep neural network is to restrict certain effective parameters. Researchers have attempted to discover effective parameters in several ways, but most approaches are not applicable to all networks. The layer sustainability analysis framework provides a new way to perform an effective sensitivity analysis of the middle layers of a neural network and to address their vulnerability in the loss function. The loss function can then be improved by including such regularization terms, reducing the vulnerability of the middle layers.

In the paper's formulation, the proposed LR term is added to extend base adversarial training, posed as an inner-maximization and outer-minimization optimization problem.
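As a rough sketch of that idea (a hypothetical PyTorch implementation, not the paper's exact formulation: the probe list, gamma, and the attack that produces x_adv are all placeholders):

```python
import torch
import torch.nn.functional as F

def lsa_regularized_loss(model, probes, x_clean, x_adv, y, gamma=0.01):
    """Adversarial training loss plus an LR-style term over vulnerable layers.

    `probes` is a list of modules (e.g. the MVL list) whose representations
    are captured with forward hooks; `x_adv` comes from the inner
    maximization (e.g. a PGD attack on x_clean).
    """
    captured = {}

    def make_hook(name):
        def hook(module, inputs, output):
            captured[name] = output
        return hook

    handles = [m.register_forward_hook(make_hook(i)) for i, m in enumerate(probes)]
    try:
        model(x_clean)
        clean_reprs = {k: v.detach() for k, v in captured.items()}
        logits_adv = model(x_adv)
        # Relative error between clean and adversarial representations.
        lr_term = sum(
            torch.norm(captured[k] - clean_reprs[k]) / torch.norm(clean_reprs[k])
            for k in clean_reprs
        )
    finally:
        for h in handles:
            h.remove()

    # Outer minimization: cross-entropy on adversarial examples + LR term.
    return F.cross_entropy(logits_adv, y) + gamma * lr_term
```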

[To be completed ...]

Citation

If you use this package, please cite the following BibTeX (SemanticScholar, GoogleScholar):

@article{Khalooei2022LayerwiseRA,
  title={Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework},
  author={Mohammad Khalooei and Mohammad Mehdi Homayounpour and Maryam Amirmazlaghani},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.02626}
}

Contribution

All kinds of contributions are always welcome!

Please let me know if you are interested in adding a new comparison measure or feature map visualization to this repository or if you would like to fix some issues.
