CNN attention layers for use with TensorFlow or tf.keras
Project description
Visual_attention_tf
A set of image attention layers implemented as custom Keras layers that can be imported directly into Keras.
Currently implemented layers:
- Pixel Attention: Efficient Image Super-Resolution Using Pixel Attention (Hengyuan Zhao et al.)
- Channel Attention: CBAM: Convolutional Block Attention Module (Sanghyun Woo et al.)
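Conceptually, pixel attention produces a per-pixel, per-channel sigmoid mask from the feature map (via a 1x1 convolution in the Zhao et al. paper) and multiplies it back onto the features. The NumPy sketch below only illustrates that idea; the `w` and `b` weights stand in for a hypothetical 1x1-conv kernel and bias, and this is not the library's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(features, w, b):
    """Conceptual pixel attention: a 1x1 convolution followed by a
    sigmoid yields a per-pixel, per-channel attention map that
    rescales the features element-wise.
    features: (H, W, C); w: (C, C) 1x1-conv kernel; b: (C,) bias."""
    mask = sigmoid(features @ w + b)  # (H, W, C) attention map in (0, 1)
    return features * mask            # element-wise reweighting

# Tiny demo with random weights (purely illustrative)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 4, 8))
w = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)
out = pixel_attention(feats, w, b)  # shape (4, 4, 8)
```

Because the mask lies in (0, 1), the output is always an attenuated copy of the input features, with the attenuation learned per spatial location and channel.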
Installation
You can see the project's official PyPI page: https://pypi.org/project/visual-attention-tf/
pip install visual-attention-tf
Usage:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D
from visual_attention import PixelAttention2D, ChannelAttention2D

inp = Input(shape=(1920, 1080, 3))
cnn_layer = Conv2D(32, 3, activation='relu', padding='same')(inp)

# Using .shape[-1] to simplify network modifications; the number of channels can be passed in directly as well
Pixel_attention_cnn = PixelAttention2D(cnn_layer.shape[-1])(cnn_layer)
Channel_attention_cnn = ChannelAttention2D(cnn_layer.shape[-1])(cnn_layer)
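ChannelAttention2D follows CBAM's channel module: global average- and max-pooled channel descriptors pass through a shared two-layer MLP, and the sigmoid of their sum rescales each channel. The NumPy sketch below is a conceptual illustration only; `w1` and `w2` are hypothetical MLP weights (with reduction ratio r), not the layer's real internals.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Conceptual CBAM channel attention: average- and max-pool over
    the spatial dimensions, feed both descriptors through a shared
    MLP, and use the sigmoid of their sum as per-channel weights.
    features: (H, W, C); w1: (C, C//r); w2: (C//r, C)."""
    avg = features.mean(axis=(0, 1))             # (C,) avg-pooled descriptor
    mx = features.max(axis=(0, 1))               # (C,) max-pooled descriptor
    mlp = lambda v: np.maximum(v @ w1, 0) @ w2   # shared MLP with ReLU
    weights = sigmoid(mlp(avg) + mlp(mx))        # (C,) channel weights in (0, 1)
    return features * weights                    # broadcast over H and W

# Tiny demo with random weights (purely illustrative), r = 2
rng = np.random.default_rng(1)
feats = rng.standard_normal((4, 4, 8))
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
out = channel_attention(feats, w1, w2)  # shape (4, 4, 8)
```

Unlike pixel attention, the resulting weights vary only by channel, so every spatial location in a given channel is scaled by the same factor.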
Download files
Source Distribution
Built Distribution
Hashes for visual-attention-tf-1.0.4.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 09ca099e8f9541ae5cd0dd0c49aa01e2f7e5343e31a077dad73aae99850448ad |
| MD5 | 8d67ce9cac2e692418230e79633d7ac6 |
| BLAKE2b-256 | f6adf7366017603e0b5514c6b20f3ae8c82912a455b2ecd7c9f052d6874585e1 |
Hashes for visual_attention_tf-1.0.4-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 90573eb69e2c289065d601ba1d4259032a4205d6f8d752e62a7e68a47aef7033 |
| MD5 | fdb49aada7860a12d1261dea3514b000 |
| BLAKE2b-256 | 1559f8f759e214b6b0dc50a52ee49bd41220243bcade809d72c75ce2f734710e |