A Graph Attention Framework for extracting Graph Attention embeddings and implementing Multihead Graph Attention Networks
Project description
This package extracts Graph Attention (GAT) embeddings and provides a TensorFlow Graph Attention layer that can be used for knowledge-graph and node-based semantic tasks. It determines the pairwise embedding matrix for a higher-order node representation and concatenates the pairs with an attention weight. The result is passed through a LeakyReLU activation for importance sampling, which damps out the negative effect of a node, and a softmax layer then normalizes the attention results to produce the final output scores. The GraphAttentionBase.py script implements a TensorFlow/Keras layer for the GAT, and GraphMultiheadAttention.py is used to extract GAT embeddings.
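The attention computation described above (pairwise concatenation, LeakyReLU, softmax normalization) can be sketched in plain NumPy. This is an illustrative single-head example, not the package's actual API; the function and variable names (`gat_attention_scores`, `h`, `W`, `a`) are assumptions for the sake of the sketch.

```python
import numpy as np

def gat_attention_scores(h, W, a, alpha=0.2):
    """Illustrative single-head GAT attention over a fully connected graph.

    h: (N, F) node features; W: (F, F') learned weight matrix;
    a: (2*F',) learned attention vector. Names are hypothetical.
    """
    z = h @ W                                     # (N, F') transformed node features
    N = z.shape[0]
    # Pairwise concatenation [z_i || z_j] for every node pair (i, j)
    pairs = np.concatenate(
        [np.repeat(z, N, axis=0), np.tile(z, (N, 1))], axis=1
    ).reshape(N, N, -1)                           # (N, N, 2*F')
    e = pairs @ a                                 # raw attention logits, shape (N, N)
    e = np.where(e > 0, e, alpha * e)             # LeakyReLU damps negative scores
    # Softmax over each node's neighbours normalizes the attention coefficients
    e = e - e.max(axis=1, keepdims=True)          # subtract row max for stability
    exp_e = np.exp(e)
    return exp_e / exp_e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))   # 4 nodes, 3 input features
W = rng.normal(size=(3, 2))   # project to F' = 2
a = rng.normal(size=(4,))     # attention vector of length 2 * F'
att = gat_attention_scores(h, W, a)
print(att.shape)              # (4, 4): one coefficient per node pair
```

Each row of the resulting matrix sums to 1, so row i gives the normalized attention node i pays to every other node.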
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Hashes for GraphAttentionNetworks-0.1.tar.gz
Algorithm | Hash digest
---|---
SHA256 | fc349b7e036c984523a3dc77c4222c0287825e9aadf5b8dce5279cfac336dc7b
MD5 | 868dd3850c5a846acc5c0e5ff49515c7
BLAKE2b-256 | eceb1fa7998ac6d0cc21c555771a9acf1ea2f1db06fcb6b351d8b9be2585f404