Transformers at zeta scales
Project description
Build high-performance, agile, and scalable AI models with modular and reusable building blocks!
Benefits
- Write less code
- Prototype faster
- Bleeding-edge performance
- Reusable building blocks
- Reduce errors
- Scalability
- Build models faster
- Full-stack error handling
🤝 Schedule a 1-on-1 Session
Book a 1-on-1 session with Kye, the creator, to discuss any issues, provide feedback, or explore how we can improve Zeta for you.
Installation
pip install zetascale
Initiating Your Journey
Creating a model with these building blocks is a breeze. Here's how to quickly instantiate the well-known Flash Attention module:
import torch
from zeta.nn.attention import FlashAttention

# Query, key, and value tensors: (batch, heads, sequence length, head dim).
# The key/value sequence length (10) may differ from the query's (6).
q = torch.randn(2, 4, 6, 8)
k = torch.randn(2, 4, 10, 8)
v = torch.randn(2, 4, 10, 8)

attention = FlashAttention(causal=False, dropout=0.1, flash=True)
output = attention(q, k, v)

print(output.shape)  # torch.Size([2, 4, 6, 8]), same shape as q
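Because Zeta's blocks are ordinary torch modules, they compose directly into larger models. Below is a minimal sketch of that idea; the SelfAttentionBlock wrapper and its dim/heads parameters are illustrative, not part of Zeta's API:

import torch
from torch import nn
from zeta.nn.attention import FlashAttention

class SelfAttentionBlock(nn.Module):
    # Hypothetical wrapper: projects the input to q/k/v heads and applies
    # the FlashAttention block shown above.
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.attn = FlashAttention(causal=False, dropout=0.1, flash=True)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # Reshape to the (batch, heads, seq, head dim) layout the attention expects.
        q, k, v = (t.view(b, n, h, d // h).transpose(1, 2) for t in (q, k, v))
        out = self.attn(q, k, v)                    # (b, h, n, d // h)
        out = out.transpose(1, 2).reshape(b, n, d)  # merge heads back together
        return self.to_out(out)

block = SelfAttentionBlock(dim=64, heads=8)
x = torch.randn(2, 10, 64)
print(block(x).shape)  # torch.Size([2, 10, 64])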
Documentation
The documentation is available at zeta.apac.ai.
Contributing
We depend on you for contributions: Kye is the sole maintainer of this repository, so any contribution is infinitely appreciated, not just by me but by Zeta's users, who depend on this repository to build the world's best AI models. Head over to the project board to find open features to implement or bugs to tackle!
Project Board
Download files
Hashes for zetascale-0.8.4-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | ca24a2232f131eda2361c30b34044f73f7dab8ea799769c31303f22bde516a6e
MD5 | 254f221e2544b7748cead123155364e4
BLAKE2b-256 | 9913c27c2d407e346354b497547bc9ee6d34b65193219de9de0c150ab3f79436
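If you download the wheel manually, you can check its digest against the table above. A minimal sketch using only the standard library (the filename assumes the wheel listed above sits in the current directory):

import hashlib

expected = "ca24a2232f131eda2361c30b34044f73f7dab8ea799769c31303f22bde516a6e"

# Read the downloaded wheel and compare its SHA256 digest to the published one.
with open("zetascale-0.8.4-py3-none-any.whl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("OK" if actual == expected else "hash mismatch")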