Ask2Transformers is a library for zero-shot classification based on Transformers.
Project description
Ask2Transformers
A Framework for Textual Entailment based Zero Shot text classification
This repository contains the code for out-of-the-box, ready-to-use zero-shot classifiers for different tasks, such as Topic Labelling or Relation Extraction. It is built on top of the 🤗 HuggingFace Transformers library, so you are free to choose among hundreds of models. You can either use a dataset-specific classifier or define one yourself with just label descriptions or templates! The repository contains the code for the following publications:
- 📄 Ask2Transformers - Zero Shot Domain Labelling with Pretrained Transformers, accepted at GWC 2021.
- 📄 Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction, accepted at EMNLP 2021.
- 📄 Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning, accepted in Findings of NAACL 2022.
To get started with the repository, consider reading the new documentation!
Installation
By using pip (check the latest release):
pip install a2t
Or by cloning the repository:
git clone https://github.com/osainz59/Ask2Transformers.git
cd Ask2Transformers
python -m pip install .
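Once installed, the core idea (casting classification as textual entailment against verbalized labels) can be tried directly with the HuggingFace Transformers zero-shot-classification pipeline. The sketch below uses Transformers directly rather than the a2t API; the example sentence and candidate labels are made up:

# Illustration of entailment-based zero-shot classification with the default
# roberta-large-mnli checkpoint, using the HuggingFace pipeline (not the a2t API).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")
result = classifier(
    "The mitochondria is the powerhouse of the cell.",   # made-up example sentence
    candidate_labels=["Biology", "Computing", "Sport and recreation"],
    hypothesis_template="The domain of the sentence is about {}.",
)
print(result["labels"][0])  # highest-scoring label, e.g. "Biology"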
Demo 🕹️
We have released a demo on Zero-Shot Information Extraction using Textual Entailment (ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations) accepted in the Demo Track of NAACL 2022. The code is publicly available on its own GitHub repository: ZS4IE.
Models
Available models
By default, the roberta-large-mnli checkpoint is used to perform inference. You can try different models for zero-shot classification, but they need to be fine-tuned on an NLI task and be compatible with the AutoModelForSequenceClassification class from Transformers. For example:
- roberta-large-mnli
- joeddav/xlm-roberta-large-xnli
- facebook/bart-large-mnli
- microsoft/deberta-v2-xlarge-mnli
Coming soon: support for generative models like t5-large.
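As a quick compatibility check, any of these checkpoints can be loaded with AutoModelForSequenceClassification and used to score a premise/hypothesis pair. This is a minimal sketch, not part of the a2t API; the sentences are made up:

# Score a single premise/hypothesis pair with an NLI checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "Two dogs are running through the park.",   # premise (made up)
    "The sentence is about animals.",           # hypothesis (made up)
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Read the entailment index from the model config instead of hard-coding it.
entailment_id = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
print(f"P(entailment) = {probs[entailment_id]:.3f}")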
Pre-trained models 🆕
We now provide (task-specific) pre-trained entailment models to: (1) reproduce the results of the papers and (2) reuse them for new schemas of the same tasks. The models are publicly available on the 🤗 HuggingFace Models Hub.
The model name describes the configuration used for training as follows:
HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]
- pretrained_model: The checkpoint used for initialization. For example: RoBERTa-large.
- NLI_datasets: The NLI datasets used for pivot training.
  - S: Stanford Natural Language Inference (SNLI) dataset.
  - M: Multi Natural Language Inference (MNLI) dataset.
  - F: FEVER-NLI dataset.
  - A: Adversarial Natural Language Inference (ANLI) dataset.
- finetune_datasets: The datasets used for fine-tuning the entailment model. Note that for more than one dataset the training was performed sequentially. For example: ACE-arg.
Some models like HiTZ/A2T_RoBERTa_SMFA_ACE-arg have been trained marking some information, such as the event trigger span, between square brackets ('[[' and ']]'). Make sure you follow the same preprocessing in order to obtain the best results.
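For illustration, marking a trigger span could look like the following sketch; the helper function, sentence, and offsets are hypothetical and not part of the a2t preprocessing code:

# Hypothetical helper that wraps a character span in '[[' ... ']]' markers.
def mark_span(text: str, start: int, end: int) -> str:
    return f"{text[:start]}[[ {text[start:end]} ]]{text[end:]}"

print(mark_span("The forces attacked the city at dawn.", 11, 19))
# -> The forces [[ attacked ]] the city at dawn.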
Training your own models
There is no special script for fine-tuning your own entailment-based models. In our experiments, we have used the publicly available run_glue.py Python script from HuggingFace Transformers. To train your own model, you will first need to convert your dataset into some sort of NLI data; we recommend taking a look at the tacred2mnli.py script, which serves as an example.
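The conversion itself boils down to pairing each input sentence (premise) with verbalized hypotheses and an entailment label. The following is a simplified sketch of that idea, not the actual tacred2mnli.py script; the template, relation name, and example are only illustrative:

# Simplified sketch of turning a relation-extraction example into an NLI example.
def to_nli_example(context, subj, obj, relation,
                   template="{subj} was born in {obj}.",
                   verbalized_relation="per:city_of_birth"):
    hypothesis = template.format(subj=subj, obj=obj)
    # The pair is 'entailment' if the template verbalizes the gold relation;
    # a full converter would also emit neutral/contradiction pairs.
    label = "entailment" if relation == verbalized_relation else "neutral"
    return {"premise": context, "hypothesis": hypothesis, "label": label}

print(to_nli_example(
    "Billy Mays, the bearded pitchman, was born in McKees Rocks.",
    subj="Billy Mays", obj="McKees Rocks", relation="per:city_of_birth",
))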
Tutorials (Notebooks)
Coming soon!
Results and evaluation
To obtain the results reported in the papers, run the evaluation.py script with the corresponding configuration files. A configuration file containing the task and evaluation information should look like this:
{
    "name": "BabelDomains",
    "task_name": "topic-classification",
    "features_class": "a2t.tasks.text_classification.TopicClassificationFeatures",
    "hypothesis_template": "The domain of the sentence is about {label}.",
    "nli_models": [
        "roberta-large-mnli"
    ],
    "labels": [
        "Animals",
        "Art, architecture, and archaeology",
        "Biology",
        "Business, economics, and finance",
        "Chemistry and mineralogy",
        "Computing",
        "Culture and society",
        ...
        "Royalty and nobility",
        "Sport and recreation",
        "Textile and clothing",
        "Transport and travel",
        "Warfare and defense"
    ],
    "preprocess_labels": true,
    "dataset": "babeldomains",
    "test_path": "data/babeldomains.domain.gloss.tsv",
    "use_cuda": true,
    "half": true
}
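The hypothesis_template above is combined with every entry in labels to build one NLI hypothesis per label, and the label whose hypothesis receives the highest entailment probability is predicted. A sketch of the expansion (not the a2t implementation itself):

# How the hypothesis_template expands into one hypothesis per label.
template = "The domain of the sentence is about {label}."
labels = ["Animals", "Biology", "Computing"]  # subset of the labels above

hypotheses = [template.format(label=label) for label in labels]
# ['The domain of the sentence is about Animals.',
#  'The domain of the sentence is about Biology.',
#  'The domain of the sentence is about Computing.']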
Consider reading the papers to see the full results.
About legacy code
The old code of this repository has been moved to the a2t.legacy module and is only intended to be used for experimental reproducibility. Please consider moving to the new code. If you need help, read the new documentation or post an issue on GitHub.
Citation
Cite this paper if you want to cite anything related to Relation Extraction:
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
pages = "1199--1212",
abstract = "Relation extraction systems require large amounts of labeled examples which are costly to annotate. In this work we reformulate relation extraction as an entailment task, with simple, hand-made, verbalizations of relations produced in less than 15 min per relation. The system relies on a pretrained textual entailment engine which is run as-is (no training examples, zero-shot) or further fine-tuned on labeled examples (few-shot or fully trained). In our experiments on TACRED we attain 63{\%} F1 zero-shot, 69{\%} with 16 examples per relation (17{\%} points better than the best supervised system on the same conditions), and only 4 points short to the state-of-the-art (which uses 20 times more training data). We also show that the performance can be improved significantly with larger entailment models, up to 12 points in zero-shot, allowing to report the best results to date on TACRED when fully trained. The analysis shows that our few-shot systems are specially effective when discriminating between relations, and that the performance difference in low data regimes comes mainly from identifying no-relation cases.",
}
Cite this paper if you want to cite anything related to topic labelling (A2TDomains or our paper results):
@inproceedings{sainz-rigau-2021-ask2transformers,
title = "{A}sk2{T}ransformers: Zero-Shot Domain labelling with Pretrained Language Models",
author = "Sainz, Oscar and
Rigau, German",
booktitle = "Proceedings of the 11th Global Wordnet Conference",
month = jan,
year = "2021",
address = "University of South Africa (UNISA)",
publisher = "Global Wordnet Association",
url = "https://www.aclweb.org/anthology/2021.gwc-1.6",
pages = "44--52",
abstract = "In this paper we present a system that exploits different pre-trained Language Models for assigning domain labels to WordNet synsets without any kind of supervision. Furthermore, the system is not restricted to use a particular set of domain labels. We exploit the knowledge encoded within different off-the-shelf pre-trained Language Models and task formulations to infer the domain label of a particular WordNet definition. The proposed zero-shot system achieves a new state-of-the-art on the English dataset used in the evaluation.",
}
File details
Details for the file a2t-0.4.0.tar.gz.
File metadata
- Download URL: a2t-0.4.0.tar.gz
- Upload date:
- Size: 59.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.0 CPython/3.10.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6defc7963758f03e2aa2417df3feb3797a77d1261cff6c55e78c8a9a5342ef26
MD5 | 68167ecc6be7e603527b4fe3e1d8eb99
BLAKE2b-256 | 95bdfacc586439bc8d7a7c9474ca1f278710e6fa3bde9c69db13455ee28089db
File details
Details for the file a2t-0.4.0-py3-none-any.whl.
File metadata
- Download URL: a2t-0.4.0-py3-none-any.whl
- Upload date:
- Size: 71.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.0 CPython/3.10.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 12c81303f8dc9da2c992ac71a75142f733525bf0f6a7f00182fd9f1a22b88b55
MD5 | 7ea22a7cf2ca9012a7bd80eab5106569
BLAKE2b-256 | f4796203da145e187f3bba284063dd8fe56ed5f2cab20a2ca39c38ebf7e09663