A text generation model combining multiple neural network architectures
Project description
SENTIA
SENTIA is a PyTorch implementation of a text generation model that combines multiple neural network architectures, including GRUs, Transformers, multi-head attention (MHA), and MEPA (Mutation Enhanced Plasticity Architecture).
Installation
pip install sentia
Usage
import torch
from sentia import SENTIA
# Create model
model = SENTIA(vocab_size=10000, embedding_dim=512, num_heads=8, num_layers=6, hidden_dim=512)
# Forward pass
input_ids = torch.randint(0, 10000, (1, 32))  # batch of 1, sequence length 32
outputs = model(input_ids)
# Generate text
generated = model.generate(input_ids, max_length=128)
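The shape and meaning of outputs are not documented here; assuming the forward pass returns vocabulary logits of shape (batch, seq_len, vocab_size), a greedy next-token prediction could be read off as follows:
# Assumption: outputs are logits of shape (batch, seq_len, vocab_size).
# This is not documented behaviour of SENTIA, only an illustration.
next_token_id = outputs[:, -1, :].argmax(dim=-1)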
Model Architecture
The SENTIA model consists of the following components:
- Embedding layer
- GRU layer
- MEPA (Mutation Enhanced Plasticity Architecture) layers
- Transformer decoder layers
- Multi-head attention layer
- Output head layers
These components are combined to leverage the strengths of multiple architectures for improved text generation capabilities, roughly along the lines of the sketch below.
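The exact composition of these layers is internal to the package. The following is only a rough sketch of the data flow using standard PyTorch modules as stand-ins; in particular, the MEPA layers are replaced by a plain feed-forward block purely for illustration, and none of this reflects the actual SENTIA implementation:
import torch
import torch.nn as nn

class SketchSENTIA(nn.Module):
    # Illustrative stand-in only; not the actual SENTIA implementation.
    def __init__(self, vocab_size=10000, embedding_dim=512,
                 num_heads=8, num_layers=6, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)        # Embedding layer
        self.gru = nn.GRU(embedding_dim, hidden_dim, batch_first=True)  # GRU layer
        self.mepa_stub = nn.Sequential(                                  # stand-in for the MEPA layers
            nn.Linear(hidden_dim, hidden_dim), nn.GELU())
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers)  # Transformer decoder layers
        self.attention = nn.MultiheadAttention(hidden_dim, num_heads,    # Multi-head attention layer
                                               batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)                    # Output head

    def forward(self, input_ids):
        x = self.embedding(input_ids)
        x, _ = self.gru(x)
        x = self.mepa_stub(x)
        x = self.decoder(x, x)       # self-conditioning, for illustration only
        x, _ = self.attention(x, x, x)
        return self.head(x)          # logits over the vocabulary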
Training
The fit() method can be used to train the model on a dataset. It handles the training loop, gradient accumulation, and RL calculations. Currently, the scheduler parameter only supports StepLR.
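The exact signature of fit() is not documented here; the argument names below (train_loader, optimizer, scheduler, epochs) are assumptions used only to illustrate wiring up a StepLR scheduler with a toy dataset:
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.optim.lr_scheduler import StepLR

# Toy dataset of random token IDs, purely for illustration.
data = torch.randint(0, 10000, (64, 32))
train_loader = DataLoader(TensorDataset(data), batch_size=8)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = StepLR(optimizer, step_size=1, gamma=0.9)  # only StepLR is currently supported

# Hypothetical call; argument names are assumptions, check the package source
# for the actual fit() signature.
model.fit(train_loader, optimizer=optimizer, scheduler=scheduler, epochs=3)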
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
sentia-1.17.tar.gz (40.9 kB)
Built Distribution
sentia-1.17-py3-none-any.whl (44.3 kB)