A CNN visualization library for pytorch
Introduction
CNNs do a great job at recognizing images (when appropriately trained). Problems arise when it comes to interpreting the network: even if you coded the network yourself and know all the tips and tricks needed to train it efficiently, you may not know how it produces its output for a given image.
Torchlurk aims at helping the user in that sense: it provides an interface to visualize a Pytorch network in an efficient yet simple manner, similarly to Microscope.
All you need is the trained Pytorch network and its training set. That's it.
Installation ☕
Torchlurk is available on pip! Just run:

```shell
pip install torchlurk
```
Overview ☝
Documentation 📚
Torchlurk has online documentation which gets regularly updated.
Quick Start ⌛
Your training set should follow the structure below so that the lurker can properly load your data:
```
.
├── src
│   ├── name_class1
│   │   ├── class1id_1.jpg
│   │   ├── class1id_2.jpg
│   │   ├── ...
│   ├── name_class2
│   │   ├── class2id_1.jpg
│   │   ├── class2id_2.jpg
│   │   ├── ...
│   ├── ...
```
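As a sketch, the expected layout can be created programmatically. The class names and image counts below are placeholders, not part of torchlurk:

```python
import os

# hypothetical class names and image counts, for illustration only
CLASSES = {"cat": 2, "dog": 2}

def make_dataset_skeleton(root):
    """Create the src/<class_name>/<image>.jpg layout torchlurk expects."""
    for name, n_imgs in CLASSES.items():
        class_dir = os.path.join(root, "src", name)
        os.makedirs(class_dir, exist_ok=True)
        for i in range(1, n_imgs + 1):
            # empty placeholder files standing in for real training images
            open(os.path.join(class_dir, f"{name}_{i}.jpg"), "w").close()
    return sorted(os.listdir(os.path.join(root, "src")))
```

With real images in place of the placeholders, `imgs_src_dir` would point at the `src` directory.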
1. Instantiation
```python
import torch
from torchvision import transforms
from torchlurk import Lurk

# load the trained model
your_model = ModelClass()
your_model.load_state_dict(torch.load(PATH))

# the preprocessing used for training
preprocess = transforms.Compose(...)

# and instantiate a lurker
lurker = Lurk(your_model,
              preprocess,
              save_gen_imgs_dir='save/dir',
              save_json_path='save/dir',
              imgs_src_dir=".source/dir",
              side_size=224)
```
2. Layer Visualization
The layer visualization is an artificial image generated by gradient descent which aims at maximizing the activation of a given filter: this gives useful insights into the type of texture/colors the filter in question is looking for in input images.
```python
# compute the layer visualisation for a given set of layers/filters
lurker.compute_layer_viz(layer_indx=12, filter_indexes=[7])
# OR compute it for the whole network
lurker.compute_viz()
# plot the filters
lurker.plot_filter_viz(layer_indx=12, filt_indx=7)
```
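The principle behind this generation (not torchlurk's actual implementation) can be sketched in pure Python: start from a random input and repeatedly step uphill along the gradient of the filter's activation. Here a toy quadratic "activation" and finite-difference gradients stand in for the network:

```python
import random

def activation(x):
    """Toy stand-in for a filter's mean activation: peaks at x = (3, -1)."""
    return -((x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2)

def maximize_activation(steps=500, lr=0.05, eps=1e-5):
    """Gradient ascent on the input, the idea behind layer visualization."""
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random "image"
    for _ in range(steps):
        # finite-difference gradient of the activation w.r.t. the input
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((activation(xp) - activation(x)) / eps)
        # step the input uphill, increasing the activation
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x
```

In the real setting, `x` is a full-size image tensor and `activation` is the mean response of the chosen filter, with gradients supplied by autograd.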
3. Max Activation
The max activation view shows the top N training images that activate a given filter the most, ranked by their average or max activation score.
```python
# compute the top activating images
lurker.compute_top_imgs(compute_max=True, compute_avg=True)
# plot them
lurker.plot_top("avg", layer_indx=12, filt_indx=7)
```
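The ranking itself is simple to sketch. The image names and precomputed activation maps below are hypothetical, not torchlurk's API:

```python
# hypothetical flattened activation maps of one filter, per image
activations = {
    "cat_1.jpg": [0.1, 0.9, 0.2],
    "dog_1.jpg": [0.5, 0.5, 0.5],
    "cat_2.jpg": [0.0, 0.1, 0.05],
}

def top_imgs(acts, n=2, mode="avg"):
    """Return the n image names with the highest avg (or max) activation."""
    score = (lambda v: sum(v) / len(v)) if mode == "avg" else max
    return sorted(acts, key=lambda k: score(acts[k]), reverse=True)[:n]
```

Note that the two rankings can differ: an image with one very strong local response wins under `"max"`, while a uniformly responding image wins under `"avg"`.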
3.1 Deconvolution
```python
# plot the max activating images along with their cropped areas
lurker.plot_crop(layer_indx=2, filt_indx=15)
```
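The cropped area corresponds to the filter's receptive field in the input image. As a sketch, using standard receptive-field arithmetic over a hypothetical list of (kernel, stride) layers, its size can be computed like this:

```python
def receptive_field(layers):
    """Receptive field size of one output unit, given (kernel, stride) pairs
    listed from input to output."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= stride             # input pixels per step in this layer's output
    return rf
```

For example, a 3x3 conv (stride 1), a 2x2 pool (stride 2), then another 3x3 conv sees an 8x8 patch of the input.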
4. Gradients
Guided Grad-CAM is another way to check what a given filter is looking at: it isolates the specific locations in the image that excite a given filter. For more information, check this article.
```python
# compute the gradients
lurker.compute_grads()
# plot them
lurker.plot_top("avg", layer_indx=12, filt_indx=7, plot_imgs=False, plot_grads=True)
```
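A minimal sketch of the class-activation-map computation underlying Grad-CAM (toy 2x2 activation maps and gradients, not torchlurk's code): each channel is weighted by the spatial average of its gradients, and the map is the ReLU of the weighted sum:

```python
def grad_cam(activ, grads):
    """activ, grads: lists of per-channel 2D maps with matching shapes."""
    rows, cols = len(activ[0]), len(activ[0][0])
    cam = [[0.0] * cols for _ in range(rows)]
    for a_map, g_map in zip(activ, grads):
        # channel weight = global average pooling of its gradients
        n = sum(len(r) for r in g_map)
        w = sum(sum(r) for r in g_map) / n
        for i in range(rows):
            for j in range(cols):
                cam[i][j] += w * a_map[i][j]
    # ReLU: keep only locations with positive influence
    return [[max(v, 0.0) for v in row] for row in cam]
```

Channels whose gradients are negative on average are suppressed by the ReLU, so only positively contributing locations remain highlighted.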
5. Histograms
Torchlurk allows you to visualize the most activating classes of the training set using histograms: a very peaked distribution is often associated with a specialized filter.
```python
# display the class histogram for a given filter
lurker.plot_hist(layer_indx=12, filt_indx=7, hist_type="max", num_classes=12)
```
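That "peakedness" can be quantified, for instance with the normalized entropy of the per-class counts (toy counts below, for illustration only): a specialized filter has low entropy, a generic one high:

```python
import math

def normalized_entropy(class_counts):
    """Entropy of the class distribution, scaled to [0, 1]."""
    total = sum(class_counts)
    probs = [c / total for c in class_counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(class_counts))

# a specialized filter: almost all top images come from one class
specialized = [97, 1, 1, 1]
# a generic filter: top images spread evenly across classes
generic = [25, 25, 25, 25]
```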
6. Serving
Torchlurk is equipped with a live update tool which allows you to visualize your computed results while coding.
```python
# serve the application on port 5001
lurker.serve(port=5001)
# stop serving
lurker.end_serve()
```
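The serve/end_serve pattern can be sketched with the standard library (this is an illustration of the idea, not torchlurk's internals): a background thread runs an HTTP server over the results directory and is shut down on demand:

```python
import threading
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def start_server(directory, port=0):
    """Serve `directory` in a background thread; port=0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("127.0.0.1", port), handler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server  # server.server_address[1] is the chosen port

def stop_server(server):
    server.shutdown()      # stop the serve_forever loop
    server.server_close()  # release the socket
```

Because the server runs in a daemon thread, the main process can keep computing visualizations while results stay browsable.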
Happy Lurking! 🕵