Package implementing different attention mechanisms as tf.keras layers
Project description
Kattention
This package implements various attention mechanisms as tf.keras layers.
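As a refresher on what such layers compute, here is a minimal sketch of scaled dot-product attention, the core operation Transformer-style layers build on. This is illustrative only, not kattention's internal implementation:

import tensorflow as tf

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, depth)
    scores = tf.matmul(q, k, transpose_b=True)                # (batch, seq_len, seq_len)
    scores /= tf.sqrt(tf.cast(tf.shape(k)[-1], tf.float32))  # scale by sqrt(depth)
    weights = tf.nn.softmax(scores, axis=-1)                  # attention weights per query
    return tf.matmul(weights, v)                              # weighted sum of values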
Setup
pip install kattention
Usage
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Softmax
from kattention.layers import Transformer
SEQUENCE_LENGTH = 4
EMBEDDING_SIZE = 300
CLASSES_TO_PREDICT = 5
ATT_HEADS = 2
# Two stacked Transformer blocks; only the first layer needs an input_shape.
model = Sequential()
model.add(Transformer(attention_heads=ATT_HEADS, input_shape=(SEQUENCE_LENGTH, EMBEDDING_SIZE)))
model.add(Transformer(attention_heads=ATT_HEADS))
# Flatten the sequence and map it to class probabilities.
model.add(Flatten())
model.add(Dense(CLASSES_TO_PREDICT))
model.add(Softmax())
model.summary()  # summary() prints the architecture itself; wrapping it in print() just adds "None"
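To train the model end to end, here is a hedged continuation of the example above. It assumes the Transformer layer preserves its (sequence, embedding) input shape, as the two stacked layers imply; the dummy data and hyperparameters are illustrative only:

import numpy as np

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Illustrative dummy data: 32 sequences of SEQUENCE_LENGTH pre-computed
# EMBEDDING_SIZE-dimensional embeddings, with one integer class label each.
x = np.random.rand(32, SEQUENCE_LENGTH, EMBEDDING_SIZE).astype("float32")
y = np.random.randint(CLASSES_TO_PREDICT, size=(32,))
model.fit(x, y, epochs=2, batch_size=8)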
Download files
Source Distribution
kattention-0.1.1.tar.gz (2.5 kB)
Built Distribution
kattention-0.1.1-py3-none-any.whl
Hashes for kattention-0.1.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 7e9399721cbce6fbd531c47231c7fb8aad6f27eea9b9bd7fda81980d9cf9436e
MD5 | 355cec8e78bd250e64c178b73a93f7f0
BLAKE2b-256 | deea9179674edac078787d556fb619757c998250396e308d981ee177a2904719