🤖 Curated Transformers
State-of-the-art transformers, brick by brick
Curated Transformers is a transformer library for PyTorch. It provides state-of-the-art models that are composed from a set of reusable components. The stand-out features of Curated Transformers are:
- ⚡️ Supports state-of-the-art transformer models, including LLMs such as Falcon, LLaMA, and Dolly v2.
- 👩‍🎨 Each model is composed from a set of reusable building blocks, providing many benefits:
  - Implementing a feature or bugfix benefits all models. For example, all models support 4/8-bit inference through the bitsandbytes library, and each model can use the PyTorch meta device to avoid unnecessary allocations and initialization.
  - Adding new models to the library is low-effort.
  - Do you want to try a new transformer architecture? A BERT encoder with rotary embeddings? You can make it in a pinch.
- 💎 Consistent type annotations of all public APIs:
  - Get great coding support from your IDE.
  - Integrates well with your existing type-checked code.
- 🎓 Great for education, because the building blocks are easy to study.
- 📦 Minimal dependencies.
Curated Transformers has been production-tested by Explosion and will be used as the default transformer implementation in spaCy 3.7.
🧰 Supported Model Architectures
Supported encoder-only models:
- ALBERT
- BERT
- CamemBERT
- RoBERTa
- XLM-RoBERTa
Supported decoder-only models:
- GPT-NeoX
- LLaMA
- Falcon
Generator wrappers:
- Dolly v2
- Falcon
All of these models can be loaded from the Hugging Face Hub.
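For instance, an encoder can be pulled from the Hub with the corresponding auto class. The following is a minimal sketch; it assumes that `AutoEncoder` in `curated_transformers.models` follows the same `from_hf_hub` convention as the generator shown in the usage example below:

```python
from curated_transformers.models import AutoEncoder

# Download the XLM-RoBERTa base checkpoint from the Hugging Face Hub (or
# load it from the local cache) and construct the curated implementation.
encoder = AutoEncoder.from_hf_hub(name="xlm-roberta-base")
```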
spaCy integration for Curated Transformers is provided by the spacy-curated-transformers package.
⚠️ Warning: Tech Preview
Curated Transformers 0.9.x is a tech preview; we will release Curated Transformers 1.0.0 with a stable API and semver guarantees over the coming weeks.
⏳ Install
```bash
pip install curated-transformers
```
🏃‍♀️ Usage Example

```python
>>> from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
>>> generator = AutoGenerator.from_hf_hub(name="tiiuae/falcon-7b-instruct", device="cuda:0")
>>> generator(["What is Python in one sentence?", "What is Rust in one sentence?"], GreedyGeneratorConfig())
['Python is a high-level programming language that is easy to learn and widely used for web development, data analysis, and automation.',
 'Rust is a programming language that is designed to be a safe, concurrent, and efficient replacement for C++.']
```
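Greedy decoding always picks the most probable token, so its output is deterministic. For more varied output, a sampling configuration can be used instead; this sketch assumes a `SampleGeneratorConfig` counterpart with `temperature` and `top_k` parameters, as found in later releases:

```python
>>> from curated_transformers.generation import AutoGenerator, SampleGeneratorConfig
>>> generator = AutoGenerator.from_hf_hub(name="tiiuae/falcon-7b-instruct", device="cuda:0")
>>> # Sample from the 20 most probable tokens at each step instead of always
>>> # taking the argmax; a temperature below 1.0 sharpens the distribution.
>>> generator(["What is Python in one sentence?"], SampleGeneratorConfig(temperature=0.8, top_k=20))
```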
You can find more usage examples in the documentation. You can also find example programs that use Curated Transformers in the examples directory.
📚 Documentation
You can read more about how to use Curated Transformers in the documentation.
🗜️ Quantization
curated-transformers supports dynamic 8-bit and 4-bit quantization of models by leveraging the bitsandbytes library.
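As an illustration, the sketch below loads Falcon with 8-bit quantized linear layers. The `BitsAndBytesConfig` import path and its `for_8bit()` constructor are assumptions based on the library's later API; bitsandbytes must be installed separately, and quantized inference requires a CUDA GPU:

```python
from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
from curated_transformers.quantization import BitsAndBytesConfig  # assumed import path

# Load Falcon with its linear layers quantized to 8 bits, roughly halving
# memory use compared to float16 weights.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device="cuda:0",
    quantization_config=BitsAndBytesConfig.for_8bit(),
)
print(generator(["What is Python in one sentence?"], GreedyGeneratorConfig()))
```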