
Computer-vision-focused utilities for the fastai2 library

FastAI2 Extensions

This library is a collection of utility functions for a variety of purposes that fit right into the fastai2 ecosystem. It's broadly divided into three modules: interpret, augment, and inference.

Install

pip install fastai2_extensions

Interpretation

ClassificationInterpretationEx

Extends fastai2's ClassificationInterpretation to plot model confidence and per-label accuracy bar graphs. It also adds convenience methods to grab filenames based on these confidence levels.

This part of the library is currently suitable for softmax classifiers only. Multi-label support will be added soon.
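For intuition, per-label accuracy is simply the fraction of validation samples of each class that the model predicted correctly. A minimal sketch of that computation in plain Python (the function and data here are illustrative, not the library's internals):

```python
from collections import defaultdict

def per_label_accuracy(preds, targets):
    """Fraction of correct predictions per class label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, t in zip(preds, targets):
        total[t] += 1
        if p == t:
            correct[t] += 1
    return {label: correct[label] / total[label] for label in total}

# Toy binary lighting classifier
preds   = ['hard', 'soft', 'soft', 'hard', 'soft']
targets = ['hard', 'soft', 'hard', 'hard', 'soft']
print(per_label_accuracy(preds, targets))
# → {'hard': 0.6666666666666666, 'soft': 1.0}
```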

from fastai2.vision.all import *
from fastai2_extensions.interpret.all import *
learn = load_learner('/Users/rahulsomani/Desktop/shot-lighting-cast/fastai2-110-epoch-model.pkl')
interp = ClassificationInterpretationEx.from_learner(learn)
plt.style.use('ggplot')
interp.plot_accuracy()

(figure: per-label accuracy bar plot)

interp.plot_label_confidence()

(figure: per-label confidence plot)

GradCam

The GradCam object takes three arguments:

  • learn: a fastai Learner
  • fname: path to the image file to draw the heatcam over
  • labels: list of labels to draw the heatmap for. If None, draws for the highest predicted class

There are quite a few plotting options; for more, see the docs.
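Under the hood, Grad-CAM weights each channel of a convolutional feature map by the spatially averaged gradient of the class score with respect to that channel, then takes a ReLU of the weighted sum. A toy sketch of that arithmetic with NumPy, independent of this library's actual implementation:

```python
import numpy as np

def gradcam_heatmap(activations, gradients):
    """activations, gradients: (channels, H, W) arrays from the target conv layer."""
    # Channel weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                             # (channels,)
    # Weighted combination of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalise to [0, 1] for display
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

acts  = np.random.rand(8, 7, 7)   # fake feature maps
grads = np.random.rand(8, 7, 7)   # fake gradients of the class score
heat  = gradcam_heatmap(acts, grads)
print(heat.shape)  # (7, 7)
```

The resulting low-resolution map is then upsampled and overlaid on the input image.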

import PIL
fname = '../assets/imgs/alice-in-wonderland.jpg'
PIL.Image.open(fname).resize((550,270))

(figure: still from Alice in Wonderland)

gcam = GradCam(learn, fname, None)
gcam.plot(full_size=True, plot_original=True, figsize=(12,6))

(figure: Grad-CAM heatmap alongside the original image)

gcam = GradCam(learn, fname, ['shot_lighting_cast_hard', 'shot_lighting_cast_soft'])
gcam.plot(full_size=False, plot_original=False, figsize=(12,4))

(figure: Grad-CAM heatmaps for both lighting labels)

Comparing Multiple Models

compare_venn lets you compare two or more models evaluated on the same dataset to inspect model agreement. If you input only two or three models, you can also see Venn diagrams of the same.

For simplicity, I'm using the same model here with smaller versions of the validation set to display this functionality.
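The agreement computation itself boils down to set operations on the filenames each model classifies correctly at the chosen confidence level; the Venn diagram just visualises those sets. A rough sketch with plain Python sets (the filenames are toy data):

```python
# Filenames each model got right at the chosen confidence level (toy data)
model_a = {'img1.jpg', 'img2.jpg', 'img3.jpg'}
model_b = {'img2.jpg', 'img3.jpg', 'img4.jpg'}
model_c = {'img3.jpg', 'img4.jpg', 'img5.jpg'}

common = model_a & model_b & model_c        # correct under all three models
only_a = model_a - (model_b | model_c)      # correct only under model A
print(sorted(common))  # ['img3.jpg']
print(sorted(only_a))  # ['img1.jpg']
```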

interp1 = ClassificationInterpretationEx.from_learner(learn1)
interp2 = ClassificationInterpretationEx.from_learner(learn2)
interp3 = ClassificationInterpretationEx.from_learner(learn3)
interp1.compute_label_confidence()
interp2.compute_label_confidence()
interp3.compute_label_confidence()
%%capture
fig,common_labels = compare_venn(
    conf_level=(0,99),  interps=[interp1,interp2],
    mode='accurate',
    return_common=True, return_fig=True,
    set_color='tomato'
)
fig

(figure: two-model Venn diagram)

%%capture
fig,common_labels = compare_venn(
    conf_level=(0,99),  interps=[interp1,interp2,interp3],
    mode='accurate',
    return_common=True, return_fig=True,
    set_color='tomato'
)
fig

(figure: three-model Venn diagram)

Augmentation

ApplyPILFilter, not surprisingly, lets you apply one or more PIL.ImageFilters as a data augmentation.

There's also a convenience function read_lut which lets you read in a LUT file (commonly found with .cube extensions), and construct a PIL.ImageFilter.Color3dLUT to apply as a transform.

The ideal place for this in a fastai2 pipeline is in the item_tfms, as it's a lossless transform that can be applied right after reading the image from disk. A full example is shown in the docs.
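For reference, a .cube file is mostly a plain-text header (a LUT_3D_SIZE N line, plus optional keywords) followed by N³ whitespace-separated RGB triplets. A minimal parser sketch, assuming that standard layout (this is not the library's read_lut, just an illustration of the format):

```python
def parse_cube(text):
    """Parse a minimal .cube LUT: returns (size, [(r, g, b), ...])."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if line.startswith('LUT_3D_SIZE'):
            size = int(line.split()[1])
            continue
        parts = line.split()
        if len(parts) == 3:
            try:
                table.append(tuple(float(p) for p in parts))
            except ValueError:
                continue  # skip non-numeric header lines
    return size, table

sample = """LUT_3D_SIZE 2
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
0.0 0.0 1.0
1.0 0.1 1.0
0.0 1.0 1.0
1.0 1.0 1.0
"""
size, table = parse_cube(sample)
print(size, len(table))  # 2 8
```

The parsed table is exactly what PIL.ImageFilter.Color3DLUT expects to be constructed from.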

from fastai2_extensions.augment.pil_filters import *
lut   = read_lut('../assets/luts/2strip.cube')
fname = '../assets/imgs/office-standoff.png'

img_raw  = PILImage.create(fname)
img_filt = ApplyPILFilter(lut,p=1.0)(fname, split_idx=0)
%%capture
fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(16,6))
show_tensor = lambda x,ax: ToTensor()(x).show(ctx=ax)

show_tensor(img_raw,ax[0])
show_tensor(img_filt,ax[1])

ax[0].set_title('Original')
ax[1].set_title('LUT Transformed')
fig

(figure: original vs. LUT-transformed image)

Export

Convenience wrappers to export to ONNX.
Other frameworks will be added soon.

ONNX
#hide_output
from fastai2_extensions.inference.export import *
torch_to_onnx(learn.model,
              activation   = nn.Softmax(-1),
              save_path    = Path.home()/'Desktop',
              model_fname  = 'onnx-model',
              input_shape  = (1,3,224,224),
              input_name   = 'input_image',
              output_names = 'output')
Loading, polishing, and optimising exported model from /Users/rahulsomani/Desktop/onnx-model.onnx
Exported successfully
path_onnx_model = '/Users/rahulsomani/Desktop/onnx-model.onnx'
fname = '../assets/imgs/odyssey-ape.png'
from onnxruntime import InferenceSession

session = InferenceSession(path_onnx_model)
x = {session.get_inputs()[0].name:
     torch_to_numpy(preprocess_one(fname))} # preprocessing - varies based on your training
session.run(None, x)
[array([[0.6942669 , 0.30573303]], dtype=float32)]
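Since Softmax was baked in as the activation at export time, the session output is already a probability distribution, so picking the predicted class is just an argmax over it. A small sketch (the label names and vocab order here are illustrative):

```python
probs = [0.6942669, 0.30573303]   # session.run output for one image
labels = ['shot_lighting_cast_hard', 'shot_lighting_cast_soft']  # assumed vocab order

pred_idx = max(range(len(probs)), key=probs.__getitem__)  # argmax without numpy
print(labels[pred_idx], probs[pred_idx])
# → shot_lighting_cast_hard 0.6942669
```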
