Transformers at zeta scales
Project description
Build high-performance, agile, and scalable AI models with modular and reusable building blocks!
Vision
Zeta aims to be the leading framework and library for effortlessly creating the most capable and reliable foundation models, with infinite scalability, in as little code as possible.
🤝 Schedule a 1-on-1 Session
Book a 1-on-1 Session with Kye, the Creator, to discuss any issues, provide feedback, or explore how we can improve Zeta for you.
Installation
pip install zetascale
Initiating Your Journey
Creating a model empowered with breakthrough research features is a breeze. Here's how to quickly instantiate the renowned Flash Attention:
import torch
from zeta.nn.attention import FlashAttention

# Queries attend over a key/value sequence of length 10
q = torch.randn(2, 4, 6, 8)   # (batch, heads, query_len, head_dim)
k = torch.randn(2, 4, 10, 8)  # (batch, heads, kv_len, head_dim)
v = torch.randn(2, 4, 10, 8)

attention = FlashAttention(causal=False, dropout=0.1, flash=True)
output = attention(q, k, v)

print(output.shape)  # torch.Size([2, 4, 6, 8])
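Flash Attention produces the same result as standard scaled dot-product attention; it only changes how the computation is tiled for memory efficiency. As a reference point, here is a minimal NumPy sketch of the attention math the module computes (non-causal, no dropout; the function name is ours, not part of Zeta's API):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Reference (non-flash) attention: softmax(q @ k^T / sqrt(d)) @ v."""
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)   # (..., query_len, kv_len)
    scores -= scores.max(axis=-1, keepdims=True)       # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the key axis
    return weights @ v                                 # (..., query_len, head_dim)

q = np.random.randn(2, 4, 6, 8)
k = np.random.randn(2, 4, 10, 8)
v = np.random.randn(2, 4, 10, 8)

out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4, 6, 8) -- output keeps the query sequence length
```

Note that the output shape follows the query tensor: 6 query positions each attend over 10 key/value positions, yielding one 8-dimensional vector per query.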
Documentation
The documentation is available at zeta.apac.ai.
Contributing
We depend on your contributions. Kye is the sole maintainer of this repository, which is a heavy lift, so any contribution is deeply appreciated, not just by me but by Zeta's users, who depend on this repository to build the world's best AI models. Head over to the project board to find open features to implement or bugs to tackle!
Project Board
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Hashes for zetascale-0.8.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | fdc9837439a71e7245bdd99ba33ec6277772d49f8423d965c9a39b986ddf8448
MD5 | 38a0d6ecb680cfdbdfff6a950f6039de
BLAKE2b-256 | b326f44161d18091024d666c135a69108cc205245c828a2848e15238a67ad0a7