focusedconv
Official library for the Focused Convolution, developed for public use by the High Efficiency, Low-Power Systems (HELPS) lab at Purdue University.
Use this library at your own risk. HELPS and Purdue University are not liable for any damages or consequences resulting from the use or modification of the library.
What's a focused convolution?
Standard CNNs apply convolutions across the entire input image. However, many images contain large regions of uninteresting pixels (e.g., background pixels), meaning standard CNNs waste time and energy on pixels that do not matter.
The Focused Convolution is designed to ignore any pixels outside the Area of Interest (AoI), spending computation only on the interesting ones. The weights and biases from the original CNN can be kept, allowing you to achieve the same accuracy while saving energy and inference time.
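Behaviorally, a focused convolution produces the same values as a standard convolution inside the AoI and skips everything outside it. Here is a minimal sketch of that behavior (illustration only, not the library's optimized implementation):
import torch
import torch.nn as nn
# Illustration only: inside the AoI, a focused convolution matches a
# standard convolution; outside it, no work is done. Masking the output
# mimics that behavior (the real library skips the computation itself).
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.rand(1, 3, 8, 8)
aoi_mask = torch.zeros(1, 1, 8, 8)
aoi_mask[..., :4, :4] = 1          # only the top-left quadrant is interesting
out = conv(x) * aoi_mask           # identical inside the AoI, zero outside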
What's an fCNN?
By supplying an activation brightness threshold, you can filter the features produced by the early layers of your CNN and use the result as an AoI for the later layers. You can then replace those later layers with focused convolutions while keeping the weights and biases the same. This avoids retraining and picks up energy and speed improvements. The modified model is an "fCNN": it maintains accuracy on par with the original CNN but is faster on images with uninteresting pixels.
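The thresholding step can be pictured as follows (a minimal sketch of the idea only; the exact brightness rule used by focusedconv is described in the paper and may differ):
import torch
import torch.nn as nn
# Illustration only: derive an AoI by thresholding the "brightness"
# (activation magnitude) of an early feature map. The threshold value
# and the per-pixel reduction here are chosen for illustration.
early_layers = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
x = torch.rand(1, 3, 32, 32)
features = early_layers(x)
brightness = features.sum(dim=1)          # per-pixel activation brightness
aoi_mask = (brightness > 5.0).float()     # pixels above threshold form the AoI
# The later layers, rebuilt as focused convolutions, would then only
# compute inside aoi_mask.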
Usage
Focused Convolutions are designed as drop-in replacements for the standard torch.nn.Conv2d module from PyTorch.
You can use convenience functions from focusedconv to quickly convert CNNs into "fCNNs" - models that share the same weights and biases but can ignore pixels outside an Area of Interest (AoI). Examples are shown below.
Basic Example (fCNN)
First, choose an activation brightness threshold (described in our paper) as BRIGHTNESS_THRESHOLD.
Next, choose the layers from your CNN that will be filtered by your threshold as EARLY_LAYERS.
Finally, use the focusedconv.build_focused_model() function to generate an fCNN that applies the threshold to the output of your early layers and then uses the resulting Area of Interest (AoI) with compute-efficient focused convolutions later on.
import copy
import focusedconv
from torchvision.models import vgg16, VGG16_Weights
original_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()
print("ORIGINAL VGG:", original_vgg)
NUM_TOP_LAYERS = 4 # Let's say you want to use the first 4 layers of VGG as EARLY_LAYERS
BRIGHTNESS_THRESHOLD = 90 # Let's say your activation brightness threshold is 90
EARLY_LAYERS = copy.deepcopy(original_vgg.features[0: NUM_TOP_LAYERS])
vgg16_remaining_layers = copy.deepcopy(original_vgg.features[NUM_TOP_LAYERS: ])
focused_vgg = copy.deepcopy(original_vgg)
# Produces fVGG with focused convolutions, allowing you to get similar accuracy as the original VGG,
# but with efficient focused convolutions for faster inference speed and reduced energy use
# Also generates an AoI object for you to use
focused_vgg, aoi = focusedconv.build_focused_model(focused_vgg, EARLY_LAYERS, BRIGHTNESS_THRESHOLD, vgg16_remaining_layers)
print("fVGG with FOCUSED CONVOLUTIONS:", focused_vgg)
Advanced Example (manual focused convolution usage)
If you want more fine-grained control over focused convolutions, you can specify your own AoI mask before each inference.
The focusedconv.focusify_all_conv2d() function is designed to run recursively in place on a PyTorch nn.Module, replacing all Conv2d layers with FocusedConv2d layers.
Each layer points to your specified AoI, allowing you to change the AoI whenever you like; the focused convolutions will adjust their behavior accordingly.
import focusedconv
import copy
import torch
import torch.nn as nn
# Let's say you have this simple CNN that does 1x1 convolutions on
# a 3-channel RGB image and produces a 1-channel grayscale image
cnn = nn.Sequential(
nn.Conv2d(3, 1, (1,1)),
nn.ReLU(),
)
print("Original CNN:", cnn)
# Make an area of interest (AoI) mask that only keeps the top left 25% of the image:
aoi_mask = torch.as_tensor([
[1, 0],
[0, 0]
])
# Initialize an AoI object with this mask
aoi = focusedconv.areaofinterest.AoI(aoi_mask)
# Produce a model that shares the weights and biases of the CNN
# but uses Focused Convolutions instead with the specified AoI mask
focused_cnn = copy.deepcopy(cnn)
focusedconv.focusify_all_conv2d(focused_cnn, aoi)
print("Focused CNN:", focused_cnn)
# Your input image
input_img = torch.rand([1, 3, 4, 4])
# AoI size is calculated as 25% of the image
print("AoI size is:", aoi.calculate_aoi_size())
aoi.imshow() # Can be used to display the AoI
# Observe that the focused CNN only provides output in the
# top left corner of the output
print("Output of original CNN:\n", cnn(input_img))
print("Output of focused CNN:\n", focused_cnn(input_img))
# Change the AoI so that it now includes the entire top half of the image:
aoi.mask_tensor[0, 1] = 1
# Observe that the focused CNN's output now provides the entire top half
print("New AoI size is:", aoi.calculate_aoi_size())
print("Now, output of focused CNN:\n", focused_cnn(input_img))