Transformers at zeta scales
Project description
Build high-performance, agile, and scalable AI models with modular and reusable building blocks!
Benefits
- Write less code
- Prototype faster
- Bleeding-edge performance
- Reusable building blocks
- Reduce errors
- Scalability
- Build models faster
- Full-stack error handling
🤝 Schedule a 1-on-1 Session
Book a 1-on-1 session with Kye, the creator, to discuss any issues, provide feedback, or explore how we can improve Zeta for you.
Installation
pip install zetascale
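A quick sanity check after installing is to import one of the core modules (a minimal sketch; it assumes only the import path used in the quick-start example below):

# verify the install by importing a core Zeta module
from zeta.nn.attention import FlashAttention
print("zetascale is ready:", FlashAttention.__name__)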
Initiating Your Journey
Creating a model from Zeta's building blocks is a breeze. Here's how to quickly instantiate Flash Attention:
import torch
from zeta.nn.attention import FlashAttention

# Queries, keys, and values shaped (batch, heads, seq_len, head_dim);
# the key/value sequence length may differ from the query's.
q = torch.randn(2, 4, 6, 8)
k = torch.randn(2, 4, 10, 8)
v = torch.randn(2, 4, 10, 8)

attention = FlashAttention(causal=False, dropout=0.1, flash=True)
output = attention(q, k, v)

print(output.shape)  # torch.Size([2, 4, 6, 8]) -- the query's shape is preserved
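Because FlashAttention is an ordinary PyTorch module, it composes naturally with your own layers. Below is a hypothetical sketch of wrapping it in a self-attention block with learned q/k/v projections; the SelfAttentionBlock class, its dimensions, and the projection layout are illustrative assumptions, not part of Zeta's API:

import torch
from torch import nn
from zeta.nn.attention import FlashAttention

class SelfAttentionBlock(nn.Module):
    # Hypothetical wrapper, not part of Zeta's API: projects an input
    # sequence to q/k/v, runs FlashAttention, and projects back.
    def __init__(self, dim, heads, dropout=0.1):
        super().__init__()
        self.heads = heads
        self.head_dim = dim // heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # flash=True mirrors the example above; set flash=False if no
        # supported attention kernel is available on your hardware
        self.attn = FlashAttention(causal=True, dropout=dropout, flash=True)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, seq_len, head_dim) for FlashAttention
        q, k, v = (t.view(b, n, self.heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        out = self.attn(q, k, v)                    # (batch, heads, seq_len, head_dim)
        out = out.transpose(1, 2).reshape(b, n, d)  # back to (batch, seq_len, dim)
        return self.to_out(out)

x = torch.randn(2, 16, 64)  # (batch, seq_len, dim)
block = SelfAttentionBlock(dim=64, heads=8)
print(block(x).shape)       # torch.Size([2, 16, 64])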
Documentation
The documentation is available at zeta.apac.ai.
Contributing
- We need you to help us build the most reusable, reliable, and high-performance ML framework ever.
- We need help writing tests and documentation! See the test sketch below for a starting point.
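As a starting point, here is a minimal pytest-style sketch of a shape test for the FlashAttention example above; the test name is illustrative, and flash=False is assumed so it can run without a GPU:

import torch
from zeta.nn.attention import FlashAttention

def test_flash_attention_preserves_query_shape():
    # same tensor shapes as the quick-start example above
    q = torch.randn(2, 4, 6, 8)
    k = torch.randn(2, 4, 10, 8)
    v = torch.randn(2, 4, 10, 8)
    attention = FlashAttention(causal=False, dropout=0.0, flash=False)
    output = attention(q, k, v)
    # the output keeps the query's batch, head, length, and head-dim sizes
    assert output.shape == (2, 4, 6, 8)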
License
- MIT
Hashes for zetascale-0.8.5-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | d550d83cbbd58fa08bfe0477d6081eac157492028c26b0565e2bd184b7a9f403
MD5 | 1116c713ccf35fc0eaf53b4044f5efd4
BLAKE2b-256 | e75e963c12303297102a4f2cdcd0a32cb189a7beb6c7413e0f48fd3e33109a3c