
Neural network visualization toolkit for tf.keras


tf-keras-vis


Note!

We've released v0.7.0! In this release, the gradient calculation of ActivationMaximization was changed in order to fix a critical problem. Although the calculation results are now a bit different from those of past versions, you can keep the previous behavior by using the legacy implementation as follows:

# from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.legacy import ActivationMaximization

In addition to the above, we've also fixed some problems related to Regularizers. We now provide the tf_keras_vis.activation_maximization.regularizers module, which contains the fixed regularizers. As with ActivationMaximization, you can still use the legacy implementation as follows:

# from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D 
from tf_keras_vis.utils.regularizers import Norm, TotalVariation2D

Please see the release notes for details. If you run into any problem related to this release, please feel free to ask us on the Issues page!

Web documents

https://keisen.github.io/tf-keras-vis-docs/

Overview

tf-keras-vis is a visualization toolkit for debugging tf.keras.Model in TensorFlow 2.0+. Currently supported visualization methods include ActivationMaximization, class activation maps (e.g. GradCAM, GradCAM++, ScoreCAM), and saliency maps (e.g. SmoothGrad).

tf-keras-vis is designed to be light-weight, flexible, and easy to use. All visualizations share the following features:

  • Support for N-dim image inputs; that is, not only 2D pictures but also, for example, 3D images.
  • Support for batch-wise processing, so multiple input images can be processed efficiently in a single call (see the sketch after this list).
  • Support for models that have multiple inputs, multiple outputs, or both.
  • Support for mixed-precision models.

And, in ActivationMaximization,

  • Support for Optimizers that are built into tf.keras.
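
A minimal sketch of batch-wise processing, assuming a VGG16 classifier and a placeholder batch X of three preprocessed 224x224 RGB images (random data is used here only to keep the sketch self-contained):

import numpy as np
from tensorflow.keras.applications import VGG16
from tf_keras_vis.gradcam import Gradcam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Placeholder batch of three preprocessed images, shape (3, 224, 224, 3).
X = np.random.rand(3, 224, 224, 3).astype(np.float32)

gradcam = Gradcam(VGG16(), model_modifier=ReplaceToLinear(), clone=True)

# One target class index per image; the cams for all three images are computed in a single call.
cams = gradcam(CategoricalScore([1, 294, 413]), X)
print(cams.shape)  # (3, 224, 224)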

Visualizations

Dense Unit

Convolutional Filter

Class Activation Map

The images above are generated by GradCAM++.

Saliency Map

The images above are generated by SmoothGrad.
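
For reference, a saliency map like the ones above can be generated with the Saliency class. The minimal sketch below uses the same kind of placeholders (YOUR_MODEL_INSTANCE, CATEGORICAL_INDEX, SEED_INPUT) as the Usage section; setting smooth_samples enables SmoothGrad:

from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create the Saliency object.
saliency = Saliency(YOUR_MODEL_INSTANCE,
                    model_modifier=ReplaceToLinear(),
                    clone=True)

# smooth_samples > 0 turns Vanilla Saliency into SmoothGrad;
# smooth_noise controls the noise level added to the sampled inputs.
saliency_map = saliency(CategoricalScore(CATEGORICAL_INDEX),
                        SEED_INPUT,
                        smooth_samples=20,
                        smooth_noise=0.20)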

Usage

ActivationMaximization (Visualizing Convolutional Filter)

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from matplotlib import pyplot as plt
from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.callbacks import Progress
from tf_keras_vis.activation_maximization.input_modifiers import Jitter, Rotate2D
from tf_keras_vis.activation_maximization.regularizers import TotalVariation2D, Norm
from tf_keras_vis.utils.model_modifiers import ExtractIntermediateLayer, ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create the visualization instance.
# All visualization classes accept a model and a model-modifier in their constructor;
#     a model-modifier, for example, replaces the activation function of the last layer
#     with a linear function.
activation_maximization = \
   ActivationMaximization(VGG16(),
                          model_modifier=[ExtractIntermediateLayer('block5_conv3'),
                                          ReplaceToLinear()],
                          clone=False)

# You can use a Score class to specify the visualization target you want,
# and add regularizers or input modifiers as needed.
activations = \
   activation_maximization(CategoricalScore(FILTER_INDEX),
                           steps=200,
                           input_modifiers=[Jitter(jitter=16), Rotate2D(degree=1)],
                           regularizers=[TotalVariation2D(weight=1.0),
                                         Norm(weight=0.3, p=1)],
                           optimizer=tf.keras.optimizers.RMSprop(1.0, 0.999),
                           callbacks=[Progress()])

## Since v0.6.0, calling `astype()` is NOT necessary.
# activations = activations[0].astype(np.uint8)

# Render
plt.imshow(activations[0])

GradCAM++

import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from tf_keras_vis.gradcam_plus_plus import GradcamPlusPlus
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create GradCAM++ object
gradcam = GradcamPlusPlus(YOUR_MODEL_INSTANCE,
                          model_modifier=ReplaceToLinear(),
                          clone=True)

# Generate cam with GradCAM++
cam = gradcam(CategoricalScore(CATEGORICAL_INDEX),
              SEED_INPUT)

## Since v0.6.0, calling `normalize()` is NOT necessary.
# cam = normalize(cam)

plt.imshow(SEED_INPUT_IMAGE)
heatmap = np.uint8(cm.jet(cam[0])[..., :3] * 255)
plt.imshow(heatmap, cmap='jet', alpha=0.5) # overlay

Please see the guides below for more details:

Getting Started Guides

[NOTES] If you have ever used keras-vis, you may find tf-keras-vis similar to it. In fact, tf-keras-vis is derived from keras-vis, and the visualization methods provided by both are almost the same. Please note, however, that the tf-keras-vis APIs are NOT compatible with keras-vis.

Requirements

  • Python 3.7-3.10
  • tensorflow>=2.0.4

Installation

  • PyPI
$ pip install tf-keras-vis tensorflow
  • Source (for development)
$ git clone https://github.com/keisen/tf-keras-vis.git
$ cd tf-keras-vis
$ pip install -e .[develop] tensorflow

Use Cases

  • chitra
    • A Deep Learning Computer Vision library for easy data loading, model building and model interpretation with GradCAM/GradCAM++.

Known Issues

  • With InceptionV3, ActivationMaximization doesn't work well; that is, it may generate a meaninglessly blurred image.
  • With cascading models, Gradcam and Gradcam++ don't work well; that is, they may raise errors. We recommend using FasterScoreCAM in this case (see the sketch after this list).
  • Channels-first models and data are not supported.
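
For reference, a minimal Faster-ScoreCAM sketch follows, using the same placeholders (YOUR_MODEL_INSTANCE, CATEGORICAL_INDEX, SEED_INPUT) as the Usage section. Faster-ScoreCAM is the Scorecam class with max_N set to a small positive value, which limits how many activation maps are evaluated:

from tf_keras_vis.scorecam import Scorecam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create the ScoreCAM object.
scorecam = Scorecam(YOUR_MODEL_INSTANCE,
                    model_modifier=ReplaceToLinear(),
                    clone=True)

# Setting max_N to a small positive integer (e.g. 10) selects Faster-ScoreCAM;
# omit it to run plain ScoreCAM.
cam = scorecam(CategoricalScore(CATEGORICAL_INDEX),
               SEED_INPUT,
               max_N=10)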

ToDo

  • Guides
    • Visualizing multiple attention or activation images at once utilizing batch-system of model
    • Define various score functions
    • Visualizing attentions with multiple inputs models
    • Visualizing attentions with multiple outputs models
    • Advanced score functions
    • Tuning Activation Maximization
    • Visualizing attentions for N-dim image inputs
  • We plan to add methods such as the following:
    • Deep Dream
    • Style transfer

Download files

Download the file for your platform.

Source Distribution

tf-keras-vis-0.8.2.tar.gz (29.4 kB)

Built Distribution

tf_keras_vis-0.8.2-py3-none-any.whl (53.5 kB)

File details

Details for the file tf-keras-vis-0.8.2.tar.gz.

File metadata

  • Download URL: tf-keras-vis-0.8.2.tar.gz
  • Upload date:
  • Size: 29.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for tf-keras-vis-0.8.2.tar.gz

  • SHA256: 3bce92e413fb699eb473da6e2b456a37e16ce0bb86462d94fc20857b35e842ba
  • MD5: bf28c2ca64f26d237584290af62094fa
  • BLAKE2b-256: 8af64dabe3d4282c4aa95ec6fc220ceacba80e40675e18cf3da92b65f9f3938d


File details

Details for the file tf_keras_vis-0.8.2-py3-none-any.whl.

File metadata

  • Download URL: tf_keras_vis-0.8.2-py3-none-any.whl
  • Upload date:
  • Size: 53.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for tf_keras_vis-0.8.2-py3-none-any.whl

  • SHA256: 5ab5bdb079a95a30cfccc6a916df7031e4cdb81d6e17b515f8f32b06371db3f5
  • MD5: 447c207eee27c55b91c374866ec546a7
  • BLAKE2b-256: d0222af12bfb77b21fac0fade5e314c018f01a17912542ccc4afdb3d3ba063f5

