
Attention to Key Area, a plug-and-play interpretable network.

Project description

A2KA

A2KA is a novel network architecture designed to identify crucial areas by extracting biological information from the embedding space of large language models. Make sure PyTorch is installed first. The GitHub repository is: https://github.com/Dsadd4/NLSExplorer_1.0

Installation

You can install A2KA via pip:

pip install A2KA
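
A quick way to verify the installation is to import the two entry points used in the examples below (a minimal sketch; it assumes nothing beyond the names shown in the Usage section):

from A2KA import A2KA, SCNLS  # the two entry points demonstrated below
print(A2KA, SCNLS)            # should print the imported objects without raising ImportError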

Usage

A2KA

from A2KA import A2KA
import torch
hidden_dimension = 512
# Configure your A2KA structure: each entry is the number of basic attention units in a layer.
config = [8, 8, 32]
# If your dataset is significantly large, extending the scale of the network may be a good choice.
# For example, config = 18 * [64] gives 18 layers with 64 basic attention units each.
model = A2KA(hidden_dimension, config)
# Input tensor of shape (batch_size, sequence_length, embedding_dimension).
example_tensor = torch.randn(5, 100, 512)
prediction, layer_attention = model(example_tensor)
print(prediction)
print(layer_attention)
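
Continuing the example above, the per-layer attention can be aggregated to rank sequence positions by how much attention they receive. This is only a hedged sketch: it assumes layer_attention can be stacked into a tensor of shape (num_layers, batch_size, sequence_length), which the usage example does not specify.

import torch

# Hedged sketch: assume layer_attention stacks into (num_layers, batch_size, sequence_length).
att = torch.stack(list(layer_attention)) if isinstance(layer_attention, (list, tuple)) else layer_attention
per_position = att.mean(dim=0)                        # average over layers -> (batch_size, sequence_length)
top_positions = per_position.topk(5, dim=-1).indices  # the 5 most-attended positions in each sequence
print(top_positions)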

SCNLS (on Linux systems)

from A2KA import SCNLS
# Example
sequence_for_analysis = ['MSSAKRRKK', 'LSSSSKVR', 'MTNLP']
kth_set = 3
max_gap = 3
processors_number = 2  # number of worker processes to use
result = SCNLS(sequence_for_analysis, kth_set, max_gap, processors_number)
print(result)

