Direct Attentive Dependency Parser
DiaParser is a state-of-the-art dependency parser that extends the architecture of the Biaffine Parser (Dozat and Manning, 2017) by exploiting both the embeddings and the attentions provided by transformers.
By exploiting the rich hidden linguistic information in contextual embeddings from transformers, DiaParser can avoid intermediate annotations such as POS tags, lemmas and morphology that traditional parsers rely on. The only stages in the parsing pipeline are therefore tokenization and parsing.
The parser can also work directly on plain text: it automatically downloads pretrained models as well as tokenizers and produces dependency parse trees, as detailed in Usage.
Exploiting attentions from transformer heads provides improvements in accuracy without resorting to fine-tuning or training its own attention. Overall, this simplifies the architecture, lowers the cost of resources needed during training (especially memory), and allows the parser to improve as new versions of transformers become available. The parser uses the HuggingFace Transformers API, in particular the generic AutoClasses interface, to access the available transformer models.
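As an illustration of that interface (not of DiaParser internals, which perform the equivalent loading automatically), a transformer and its tokenizer can be obtained through the AutoClasses roughly as follows; the model name is just an example:
>>> # Illustration of the AutoClasses interface; DiaParser handles this loading
>>> # internally, so none of this is needed to use the parser.
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained('google/electra-base-discriminator')
>>> model = AutoModel.from_pretrained('google/electra-base-discriminator', output_attentions=True)
>>> inputs = tokenizer('She enjoys playing tennis.', return_tensors='pt')
>>> outputs = model(**inputs)
>>> # outputs.last_hidden_state holds the contextual embeddings,
>>> # outputs.attentions the per-head attention matrices.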
We plan to track improvements in transformer technology and to release parser models that incorporate them. Currently we provide pretrained models for 18 languages.
We encourage anyone to contribute trained models for other language/transformer combinations, which we will publish. We will soon provide a web form for uploading new models. You can also train your own models and contribute them to the repository to share with others.
DiaParser uses pretrained contextual embeddings to represent its input, taken from models in the HuggingFace Transformers library.
Pretrained tokenizers are provided by Stanza.
As an alternative to contextual embeddings, DiaParser can also use CharLSTM layers to produce character/subword-level features. Both BERT and CharLSTM avoid the need to generate POS tags.
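For reference, the Stanza tokenizer mentioned above can also be used on its own; the snippet below is only an illustration, since DiaParser invokes it internally when parsing plain text:
>>> # Standalone illustration of the Stanza tokenizer; DiaParser calls it
>>> # internally when given plain text, so this is not required.
>>> import stanza
>>> stanza.download('en', processors='tokenize')
>>> nlp = stanza.Pipeline('en', processors='tokenize')
>>> doc = nlp('She enjoys playing tennis.')
>>> [token.text for token in doc.sentences[0].tokens]
['She', 'enjoys', 'playing', 'tennis', '.']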
DiaParser is derived from SuPar, which provides additional variants of dependency and constituency parsers.
Installation
DiaParser can be installed via pip:
$ pip install -U diaparser
Installing from sources is also possible:
$ git clone https://github.com/Unipisa/diaparser && cd diaparser
$ python setup.py install
The package has the following requirements:
- python >= 3.6
- pytorch >= 1.4
- transformers >= 3.1
- stanza >= 1.1.1 (optional, for tokenization)
Performance
DiaParser provides pretrained models for English, Chinese and 21 other languages from the Universal Dependencies treebanks v2.6.
English models are trained on the Penn Treebank (PTB) with Stanford Dependencies, with 39,832 training sentences, while Chinese models are trained on Penn Chinese Treebank version 7 (CTB7) with 46,572 training sentences.
The accuracy and parsing speed of these models are listed in the following tables. The first table reports results when parsing starts from gold tokenized text. Note that punctuation is ignored in the evaluation metrics for PTB, but included in all the others. The numbers in bold represent state-of-the-art values.
Language | Corpus | Name | UAS | LAS | Speed (Sents/s)
---|---|---|---|---|---
English | PTB | en_ptb.electra | 96.03 | 94.37 | 352
Chinese | CTB | zh_ptb.hfl | 92.14 | 85.74 | 319
Catalan | AnCora | ca_ancora.mbert | 95.55 | 93.78 | 249
German | HDT | de_hdt.dbmdz-bert-base | 97.97 | 96.97 | 184
Japanese | GSD | ja_gsd.mbert | 95.41 | 93.98 | 397
Latin | ITTB, LLCT | la_ittb_llct.mbert | 94.03 | 91.70 | 139
Norwegian | Nynorsk | no_nynorsk.mbert | 92.50 | 90.13 | 185
Romanian | RRT | ro_rrt.mbert | 93.03 | 87.18 | 286
Spanish | AnCora | es_ancora.mbert | 96.03 | 94.37 | 352
Turkish | Boun | tr_boun.electra-base | 83.53 | 75.87 | 1198
Below are the results on the dataset of the IWPT 2020 Shared Task on Enhanced Dependencies, where the tokenization was done by the parser itself:
Language | Corpus | Name | UAS | LAS | Speed (Sents/s)
---|---|---|---|---|---
Arabic | PADT | ar_padt.bert | 87.75 | 83.25 | 99
Bulgarian | BTB | bg_btb.DeepPavlov | 95.02 | 92.20 | 479
Czech | PDT, CAC, FicTree | cs_pdt_cac_fictree.DeepPavlov | 94.02 | 92.06 | 403
English | EWT | en_ewt.electra | 91.66 | 89.51 | 397
Estonian | EDT, EWT | et_edt_ewt.mbert | 86.39 | 82.44 | 247
Finnish | TDT | fi_tdt.turkunlp | 94.28 | 92.56 | 364
French | Sequoia | fr_sequoia.camembert | 92.81 | 89.55 | 200
German | HDT | de_hdt.dbmdz-bert-base | 97.97 | 96.97 | 381
Italian | ISDT | it_isdt.dbmdz-electra-xxl | 95.48 | 94.16 | 379
Latvian | LVTB | lv_lvtb.mbert | 87.46 | 83.51 | 290
Lithuanian | ALKSNIS | lt_alksnis.mbert | 80.09 | 75.14 | 290
Dutch | Alpino, LassySmall | nl_alpino_lassysmall.wietsedv | 90.80 | 88.34 | 367
Polish | PDB, LFG | pl_pdb_lfg.dkleczek | 94.38 | 91.70 | 563
Russian | SynTagRus | ru_syntagrus.DeepPavlov | 94.97 | 93.72 | 445
Slovak | SNK | sk_snk.mbert | 93.11 | 90.44 | 381
Swedish | Talbanken | sv_talbanken.KB | 90.79 | 88.08 | 491
Tamil | TTB | ta_ttb.mbert | 74.20 | 66.49 | 175
Ukrainian | IU | uk_iu.TurkuNLP | 90.39 | 87.61 | 301
These results were obtained on a server with an Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz and an Nvidia T4 GPU.
Usage
DiaParser is simple to use: you can just download a pretrained model and run syntactic parsing over sentences with a few lines of code:
>>> from diaparser.parsers import Parser
>>> parser = Parser.load('en_ewt-electra')
>>> dataset = parser.predict([['She', 'enjoys', 'playing', 'tennis', '.']], prob=True)
The call to parser.predict will return an instance of diaparser.utils.Dataset containing the predicted syntactic trees. You can access each sentence within the dataset:
>>> print(dataset.sentences[0])
1 She _ _ _ _ 2 nsubj _ _
2 enjoys _ _ _ _ 0 root _ _
3 playing _ _ _ _ 2 xcomp _ _
4 tennis _ _ _ _ 3 dobj _ _
5 . _ _ _ _ 2 punct _ _
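The predicted trees can also be written out. Here is a minimal sketch that only assumes each sentence renders as the CoNLL-style lines shown above when converted to a string; the output file name is arbitrary:
>>> # Minimal sketch: dump all predicted trees to a file in the textual format
>>> # shown above (assumes str(sentence) yields those CoNLL-style lines).
>>> with open('predictions.conllu', 'w') as out:
...     for sentence in dataset.sentences:
...         out.write(str(sentence) + '\n')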
Parsing plain text just requires specifying the language code:
>>> dataset = parser.predict('She enjoys playing tennis.', text='en')
An SVG picture illustrating the parse tree can be produced with spaCy's displacy visualizer:
>>> from spacy import displacy  # displacy is provided by the spacy package
>>> sent = dataset.sentences[0]
>>> displacy.render(sent.to_displacy(), style='dep', manual=True, options={'compact': True, 'distance': 120})
The input can also be provided as a file in CoNLL-U format.
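A minimal sketch, assuming that predict accepts a path to a CoNLL-U file as in the SuPar API from which DiaParser is derived; the file name is hypothetical:
>>> # Assumption: predict() also accepts a path to a CoNLL-U file.
>>> dataset = parser.predict('data/input.conllu')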
Further examples of how to use the parser and experiment with it can be found in this notebook.
Training
To train a model from scratch, it is preferable to use the command-line interface, which is more flexible and customizable. Here are some training examples:
# Biaffine Dependency Parser
# some common and default arguments are stored in config.ini
$ python -m diaparser.cmds.biaffine_dependency train -b -d 0 \
-c config.ini \
-p exp/en_ptb.char/model \
-f char
# to use BERT, `-f` and `--bert` (default to bert-base-cased) should be specified
$ python -m diaparser.cmds.biaffine_dependency train -b -d 0 \
-p exp/en_ptb.bert-base/model \
-f bert \
--bert bert-base-cased
Warning: there is currently a limit of 500 tokens on the length of tokenized sentences, due to the maximum input size of most pretrained transformer models.
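If your data may contain such long sentences, one simple workaround is to filter them out before prediction. A minimal sketch, under the assumption that the input is supplied as lists of tokens and that `parser` is loaded as in the Usage example:
>>> # Minimal sketch: skip sentences exceeding the 500-token limit before parsing
>>> # (assumes `sentences` is a list of token lists, as in the Usage example).
>>> MAX_LEN = 500
>>> parsable = [s for s in sentences if len(s) <= MAX_LEN]
>>> dataset = parser.predict(parsable)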
For further instructions on training, please type python -m diaparser.cmds.<parser> train -h.
Alternatively, DiaParser provides an equivalent command-line entry point, diaparser, registered in setup.py:
$ diaparser train -b -d 0 -c config.ini -p exp/en_ptb.electra-base/model -f bert --bert google/electra-base-discriminator
For handling large models, distributed training is also supported:
$ python -m torch.distributed.launch --nproc_per_node=4 --master_port=10000 \
-m diaparser.cmds.biaffine_dependency train -b -d 0,1,2,3 \
-p exp/en_ptb.electra-base/model \
-f bert --bert google/electra-base-discriminator
You may consult the PyTorch documentation and tutorials for more details.
Evaluation
The evaluation process resembles prediction:
>>> parser = Parser.load('biaffine-dep-en')
>>> loss, metric = parser.evaluate('data/ptb/test.conllx')
2020-07-25 20:59:17 INFO Loading the data
2020-07-25 20:59:19 INFO
Dataset(n_sentences=2416, n_batches=11, n_buckets=8)
2020-07-25 20:59:19 INFO Evaluating the dataset
2020-07-25 20:59:20 INFO loss: 0.2326 - UCM: 61.34% LCM: 50.21% UAS: 96.03% LAS: 94.37%
2020-07-25 20:59:20 INFO 0:00:01.253601s elapsed, 1927.25 Sents/s
TODO
- Provide a repository to which models can be uploaded, as HuggingFace does.
References
- Giuseppe Attardi, Daniele Sartiano, Yu Zhang. 2021. DiaParser attentive dependency parser. Submitted for publication.
- Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing.
- Xinyu Wang, Jingxian Huang, and Kewei Tu. 2019. Second-Order Semantic Dependency Parsing with End-to-End Neural Networks.