A PyTorch library of transformer models and components
Curated Transformers
State-of-the-art transformers, brick by brick
Curated Transformers is a transformer library for PyTorch. It provides state-of-the-art models that are composed from a set of reusable components. The stand-out features of Curated Transformers are:
- ⚡️ Supports state-of-the-art transformer models, including LLMs such as Falcon, LLaMA, and Dolly v2.
- 👩‍🎨 Each model is composed from a set of reusable building blocks, providing many benefits:
  - Implementing a feature or bugfix benefits all models. For example, all models support 4/8-bit inference through the bitsandbytes library, and each model can use the PyTorch meta device to avoid unnecessary allocations and initialization (see the sketch after this list).
  - Adding new models to the library is low-effort.
  - Do you want to try a new transformer architecture? A BERT encoder with rotary embeddings? You can make it in a pinch.
- 💎 Consistent type annotations of all public APIs:
  - Get great coding support from your IDE.
  - Integrates well with your existing type-checked code.
- 🎓 Great for education, because the building blocks are easy to study.
- 📦 Minimal dependencies.
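As an illustration of how these shared features surface in the API, here is a minimal sketch of loading a model with 8-bit quantization. It assumes the `BitsAndBytesConfig` helper in `curated_transformers.quantization` and the `quantization_config` argument to `from_hf_hub`; treat the exact names as illustrative rather than definitive.

```python
import torch

from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
from curated_transformers.quantization import BitsAndBytesConfig

# Load Falcon with 8-bit bitsandbytes quantization. Because quantized
# loading is implemented once for the shared building blocks, the same
# argument works for every supported model.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
    quantization_config=BitsAndBytesConfig.for_8bit(),
)
print(generator(["What is quantization in one sentence?"], GreedyGeneratorConfig()))
```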
Curated Transformers has been production-tested by Explosion and will be used as the default transformer implementation in spaCy 3.7.
🧰 Supported Model Architectures
Supported encoder-only models:
- ALBERT
- BERT
- CamemBERT
- RoBERTa
- XLM-RoBERTa
Supported decoder-only models:
- GPT-NeoX
- LLaMA
- Falcon
Generator wrappers:
- Dolly v2
- Falcon
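The generator wrappers are meant to handle the prompt formatting of their respective instruction-tuned models. A minimal sketch using the Dolly v2 wrapper (the checkpoint name is illustrative):

```python
from curated_transformers.generation import DollyV2Generator, GreedyGeneratorConfig

# The wrapper formats the prompt for Dolly v2 before generating.
generator = DollyV2Generator.from_hf_hub(
    name="databricks/dolly-v2-3b",  # illustrative checkpoint
    device="cuda:0",
)
print(generator(["How do transformers work?"], GreedyGeneratorConfig()))
```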
All types of models can be loaded from the Hugging Face Hub.
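For example, a pretrained encoder can be pulled straight from the Hub through the corresponding Auto class; a minimal sketch (checkpoint name illustrative):

```python
from curated_transformers.models import AutoEncoder

# Downloads the checkpoint from the Hugging Face Hub and loads it into
# the matching Curated Transformers encoder implementation.
encoder = AutoEncoder.from_hf_hub(name="bert-base-uncased")
```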
spaCy integration for Curated Transformers is provided by the spacy-curated-transformers package.
⚠️ Warning: Tech Preview
Curated Transformers 0.9.x is a tech preview; we will release Curated Transformers 1.0.0 with a stable API and semver guarantees in the coming weeks.
⏳ Install
pip install curated-transformers
🏃‍♀️ Usage Example
```python
>>> from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
>>> generator = AutoGenerator.from_hf_hub(name="tiiuae/falcon-7b-instruct", device="cuda:0")
>>> generator(["What is Python in one sentence?", "What is Rust in one sentence?"], GreedyGeneratorConfig())
['Python is a high-level programming language that is easy to learn and widely used for web development, data analysis, and automation.',
 'Rust is a programming language that is designed to be a safe, concurrent, and efficient replacement for C++.']
```
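Greedy decoding can be swapped for sampling by passing a different generator config. A sketch assuming a `SampleGeneratorConfig` counterpart with `temperature` and `top_k` fields (treat these names as illustrative):

```python
>>> from curated_transformers.generation import SampleGeneratorConfig
>>> generator(["What is Python in one sentence?"], SampleGeneratorConfig(temperature=1.0, top_k=40))
```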
You can find more usage examples in the documentation. You can also find example programs that use Curated Transformers in the examples directory.
📚 Documentation
You can read more about how to use Curated Transformers in the documentation.
🗜️ Quantization
curated-transformers supports dynamic 8-bit and 4-bit quantization of models by leveraging the bitsandbytes library.
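A sketch of 4-bit loading, again assuming the `BitsAndBytesConfig` helper and the `quantization_config` argument (see the 8-bit example in the feature list above):

```python
import torch

from curated_transformers.generation import AutoGenerator
from curated_transformers.quantization import BitsAndBytesConfig

# 4-bit dynamic quantization; weights are quantized on the fly as the
# checkpoint is loaded onto the GPU.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
    quantization_config=BitsAndBytesConfig.for_4bit(),
)
```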
Hashes for curated-transformers-0.9.1.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | e95fb500f190261ded0a95cda6bbc7c295be5e6f86a1726578b2c83992ad6d33 |
| MD5 | 662ea183cfd6343fa8cc9e89f2560dab |
| BLAKE2b-256 | 3b3edead5bbc3e2cdb5d042ff4a837d4c53abaa6d5268d59d711212f4b92a74c |
Hashes for curated_transformers-0.9.1-py2.py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9d33821178ff7243e34f14cfc977611e9f3f71066a237753cddb4e52a1685463 |
| MD5 | dfc1ce9896ef1fd331da6b0cb9998033 |
| BLAKE2b-256 | ee3a0f8d7bc82d08ff54019ce14f97480eb25c7ff80d70cf33718dc709884d16 |