Neural network visualization toolkit for tf.keras
tf-keras-vis
tf-keras-vis is a visualization toolkit for debugging tf.keras models in TensorFlow 2.0+.
Currently supported methods for visualization include:
- Activation Maximization
- Class Activation Maps
- Saliency Maps
tf-keras-vis is designed to be lightweight, flexible, and easy to use. All visualizations share the following features:
- Support for N-dim image inputs; that is, not only 2D pictures but also, for example, 3D images.
- Support for batch-wise processing, so multiple input images can be visualized efficiently at once.
- Support for models that have multiple inputs, multiple outputs, or both.
- Support for the Optimizers embedded in tf.keras when running Activation Maximization.
Visualizations
Visualizing Dense Layer
Visualizing Convolutional Filters
GradCAM
The images above are generated by GradCAM++.
Saliency Map
The images above are generated by SmoothGrad.
Requirements
- Python 3.6-3.9
- tensorflow>=2.0.2
Installation
- PyPI
$ pip install tf-keras-vis tensorflow
- Docker (a container that runs Jupyter Notebook)
$ cd tf-keras-vis
$ docker build -t <TAG> -f dockerfiles/gpu.Dockerfile .
$ docker run --gpus all --privileged -itd -p 8888:8888 <TAG>
Or
$ docker run --gpus all --privileged -itd -p 8888:8888 keisen/tf-keras-vis:0.5.0-gpu
You can find other images at Docker Hub.
Usage
Please see below for details:
Getting Started Guides
[NOTE] If you have ever used keras-vis, tf-keras-vis will feel familiar. tf-keras-vis is in fact derived from keras-vis, and the visualization methods the two provide are almost the same. Please note, however, that the tf-keras-vis APIs are NOT compatible with keras-vis.
Guides (ToDo)
- Visualizing multiple attention or activation images at once using the model's batch processing
- Defining various score functions
- Visualizing attentions with multiple-input models
- Visualizing attentions with multiple-output models
- Advanced score functions
- Tuning Activation Maximization
- Visualizing attentions for N-dim image inputs
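On score functions: in tf-keras-vis a score function is simply a callable that maps the model's output tensor to one scalar per sample. The class index and shapes below are illustrative:

```python
import tensorflow as tf

# For a classifier: score class 20 for every image in the batch.
def categorical_score(output):
    return output[:, 20]

# For a regression model: e.g. maximize the raw output value(s).
def regression_score(output):
    return tf.reduce_sum(output, axis=-1)

dummy_output = tf.random.uniform((2, 1000))  # batch of 2, 1000 classes
print(categorical_score(dummy_output).shape)  # (2,)
```

The same callable can be passed to Gradcam, Saliency, or ActivationMaximization.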
ToDo
- Guide documentations
- API documentations
- We plan to add methods such as the following:
- Deep Dream
- Style transfer
Known Issues
- With InceptionV3, ActivationMaximization doesn't work well; it may generate meaninglessly blurred images.
- With cascading models, Gradcam and Gradcam++ don't work well and may raise errors. In that case, we recommend using FasterScoreCAM instead.
- channels-first models and data are unsupported.
- With a mixed-precision model that has a layer whose dtype is explicitly set to float32, ActivationMaximization may raise an error.
- With a mixed-precision model, the regularization values calculated by ActivationMaximization may be NaN.
Use Cases
- chitra
- A Deep Learning Computer Vision library for easy data loading, model building and model interpretation with GradCAM/GradCAM++.