CNN attention layers for TensorFlow / tf.keras
Project description
Visual_attention_tf
A set of image attention layers implemented as custom Keras layers that can be imported directly into Keras.
Currently implemented layers:
- Pixel Attention: Efficient Image Super-Resolution Using Pixel Attention (Hengyuan Zhao et al.)
- Channel Attention: CBAM: Convolutional Block Attention Module (Sanghyun Woo et al.)
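As a rough intuition for the first of these: pixel attention (Zhao et al.) computes a full 3D attention map, one weight per spatial position and channel, by passing the feature map through a 1x1 convolution and a sigmoid, then multiplying the result elementwise with the input. The sketch below is not the package's implementation, just the core math in NumPy, with a 1x1 convolution written as a per-pixel matrix multiply over channels:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(features, weights, bias):
    """Pixel-attention sketch: a 1x1 conv (here a per-pixel matmul over
    the channel axis) produces an (H, W, C) attention map, which is
    squashed by a sigmoid and multiplied elementwise with the input.

    features: (H, W, C) feature map
    weights:  (C, C) 1x1-conv kernel
    bias:     (C,) bias
    """
    attn = sigmoid(features @ weights + bias)  # (H, W, C) attention map
    return features * attn

# Toy example with zero weights: sigmoid(0) = 0.5,
# so every feature is simply halved.
feats = np.ones((4, 4, 8))
out = pixel_attention(feats, np.zeros((8, 8)), np.zeros(8))
```

The key property is that the attention map has the same shape as the feature map, so each pixel-channel entry is rescaled independently.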
Installation
You can find the project's official PyPI page at https://pypi.org/project/visual-attention-tf/

```
pip install visual-attention-tf
```

Use `--no-deps` if you already have tensorflow-gpu installed.
Usage:

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D
from visual_attention import PixelAttention2D, ChannelAttention2D

inp = Input(shape=(1920, 1080, 3))
cnn_layer = Conv2D(32, 3, activation='relu', padding='same')(inp)

# Using .shape[-1] to simplify network modifications;
# the number of channels can also be passed directly.
pixel_attention_cnn = PixelAttention2D(cnn_layer.shape[-1])(cnn_layer)
channel_attention_cnn = ChannelAttention2D(cnn_layer.shape[-1])(cnn_layer)
```
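Whereas pixel attention rescales every spatial position independently, CBAM-style channel attention (Woo et al.) produces a single weight per channel: the feature map is average-pooled and max-pooled over the spatial dimensions, both pooled vectors pass through a shared two-layer MLP, and the summed result goes through a sigmoid. The following is a hedged NumPy sketch of that mechanism, not the package's actual layer; the reduction ratio implied by `w1`/`w2` is illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def channel_attention(features, w1, w2):
    """CBAM-style channel-attention sketch: average- and max-pool over
    the spatial dims, pass both (C,) vectors through a shared two-layer
    MLP, sum, apply a sigmoid, and rescale each channel of the input.

    features: (H, W, C) feature map
    w1: (C, C // r) first MLP layer (channel reduction, ratio r)
    w2: (C // r, C) second MLP layer (channel expansion)
    """
    avg = features.mean(axis=(0, 1))  # (C,) average-pooled descriptor
    mx = features.max(axis=(0, 1))    # (C,) max-pooled descriptor
    attn = sigmoid(relu(avg @ w1) @ w2 + relu(mx @ w1) @ w2)  # (C,)
    return features * attn            # broadcasts over H and W

# Toy example with zero weights: sigmoid(0) = 0.5,
# so every channel is scaled by 0.5.
feats = np.ones((4, 4, 8))
out = channel_attention(feats, np.zeros((8, 2)), np.zeros((2, 8)))
```

Because the attention vector has shape (C,), all pixels in a channel share one scale factor, in contrast to pixel attention's per-position weights.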
Project details
Hashes for visual-attention-tf-1.1.0.tar.gz (source distribution)

Algorithm | Hash digest
---|---
SHA256 | 8fb0ccdb793f22d24ce06f844aa84d89a25e10ce1f11963f3dd0ee43493e96d8
MD5 | 80c2d517b348a940fca06315671220d5
BLAKE2b-256 | feccf698555ed263470c497711168be17568916ec9c274e6df3496e27addd7a0
Hashes for visual_attention_tf-1.1.0-py3-none-any.whl (built distribution)

Algorithm | Hash digest
---|---
SHA256 | 783a90a1c07255a4f843cd717bbbcce59899364254a442d91e48dc1b57e8f2ad
MD5 | 09bab381f7f704510be2bbe278084d7b
BLAKE2b-256 | 79daeddf4391344a368d7158164901eacd33dc7b2a58dc8bd36349fa78f4fa9d