Note: This repository holds the codebase of the Adapters library, which has replaced adapter-transformers. For the legacy codebase, go to: https://github.com/adapter-hub/adapter-transformers-legacy.
Adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
adapters is an add-on to HuggingFace's Transformers library, integrating adapters into state-of-the-art language models and incorporating AdapterHub, a central repository for pre-trained adapter modules.
Installation
adapters currently supports Python 3.8+ and PyTorch 1.10+.
After installing PyTorch, you can install adapters from PyPI ...
pip install -U adapters
... or from source by cloning the repository:
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
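To check that the installation worked, you can print the installed package version. This is a minimal sketch assuming the package exposes the standard __version__ attribute; the exact version string depends on the release you installed:
import adapters
print(adapters.__version__)  # exact value depends on your installed release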
Quick Tour
Load pre-trained adapters:
from adapters import AutoAdapterModel
from transformers import AutoTokenizer
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Download a pre-trained sentiment adapter from the Hugging Face Hub and activate it
model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
Adapt existing model setups:
import adapters
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("t5-base")
# Enable adapter support in a vanilla Transformers model
adapters.init(model)
# Add a LoRA adapter and freeze all weights except the adapter's
model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")
# Your regular training loop...
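The "# Your regular training loop..." placeholder can be any standard PyTorch or Trainer-based loop; after train_adapter(), only the adapter weights require gradients. A purely illustrative sketch with a hypothetical toy batch (tokenizer and labels are assumptions, not part of the original example) might look like this:
import torch
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-base")  # assumed to match the model above
# Toy batch with made-up labels, for illustration only
batch = tokenizer(["great movie", "terrible movie"], return_tensors="pt", padding=True)
batch["labels"] = torch.tensor([1, 0])
# Only the parameters left trainable by train_adapter() are optimized
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
model.train()
outputs = model(**batch)  # the model returns a loss when labels are provided
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()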
Flexibly configure adapters:
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")
# Combine prefix tuning and a parallel bottleneck adapter into a single config
adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
Easily compose adapters in a single model:
from adapters import AdapterSetup, AutoAdapterModel
import adapters.composition as ac
from transformers import AutoTokenizer
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Load two task adapters: question classification (TREC) and sentiment (IMDB)
qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")
# Run both adapters in parallel on the same input
with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
Useful Resources
HuggingFace's great documentation on getting started with Transformers can be found here. adapters is fully compatible with Transformers.
To get started with adapters, refer to these locations:
- Colab notebook tutorials, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- https://docs.adapterhub.ml, our documentation on training and using adapter modules with the adapters library
- https://adapterhub.ml to explore available pre-trained adapter modules and share your own adapters
- Examples folder of this repository containing HuggingFace's example training scripts, many adapted for training adapters
Implemented Methods
Currently, adapters integrates all architectures and methods listed below:
Method | Paper(s) | Quick Links
---|---|---
Bottleneck adapters | Houlsby et al. (2019), Bapna and Firat (2019) | Quickstart, Notebook
AdapterFusion | Pfeiffer et al. (2021) | Docs: Training, Notebook
MAD-X, Invertible adapters | Pfeiffer et al. (2020) | Notebook
AdapterDrop | Rücklé et al. (2021) | Notebook
MAD-X 2.0, Embedding training | Pfeiffer et al. (2021) | Docs: Embeddings, Notebook
Prefix Tuning | Li and Liang (2021) | Docs
Parallel adapters, Mix-and-Match adapters | He et al. (2021) | Docs
Compacter | Mahabadi et al. (2021) | Docs
LoRA | Hu et al. (2021) | Docs
(IA)^3 | Liu et al. (2022) | Docs
UniPELT | Mao et al. (2022) | Docs
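Most of these methods can also be selected with a short configuration string when calling add_adapter(). The identifiers below follow the adapters documentation but should be treated as assumed examples; check the docs for the identifiers in your installed version:
# Each call adds a differently configured adapter to the model (illustrative adapter names)
model.add_adapter("bottleneck_adapter", config="seq_bn")     # bottleneck adapter
model.add_adapter("prefix_adapter", config="prefix_tuning")  # prefix tuning
model.add_adapter("compacter_adapter", config="compacter")   # Compacter
model.add_adapter("ia3_adapter", config="ia3")               # (IA)^3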
Supported Models
We currently support the PyTorch versions of all models listed on the Model Overview page in our documentation.
Developing & Contributing
To get started with developing on Adapters yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.
Citation
If you use this library for your work, please consider citing our paper AdapterHub: A Framework for Adapting Transformers:
@inproceedings{pfeiffer2020AdapterHub,
title={AdapterHub: A Framework for Adapting Transformers},
author={Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Poth, Clifton and
Kamath, Aishwarya and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Cho, Kyunghyun and
Gurevych, Iryna},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
pages={46--54},
year={2020}
}