
MorDL: Morphological Parser (POS, lemmata, NER etc.)


MorDL is a tool to organize a pipeline for complete morphological sentence parsing (POS-tagging, lemmatization, morphological feature tagging) and Named-entity recognition.

Scores (accuracy) on SynTagRus: UPOS: 99.15%; FEATS: 98.28% (tokens), 98.86% (tags); LEMMA: 99.36%. In all experiments we used seed=42; other seed values may yield better results. The models' hyperparameters can also be tuned.
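The difference between the two FEATS numbers can be illustrated with a short, self-contained sketch (our own illustration, not part of the MorDL API): token accuracy requires the whole feature bundle to match exactly, while tag accuracy, under one plausible reading, scores the individual Feature=Value pairs:

```python
def feats_scores(gold, pred):
    """Compute two FEATS accuracies over parallel lists of FEATS strings.

    - token accuracy: the whole feature bundle must match exactly;
    - tag accuracy: the share of gold Feature=Value pairs also predicted.
    """
    token_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    gold_pairs = correct = 0
    for g, p in zip(gold, pred):
        gs = set() if g == '_' else set(g.split('|'))
        ps = set() if p == '_' else set(p.split('|'))
        correct += len(gs & ps)
        gold_pairs += len(gs)
    tag_acc = correct / gold_pairs
    return token_acc, tag_acc
```

For example, if one of two tokens has its `Case` mispredicted, the token accuracy drops to 0.5 while the tag accuracy only drops to 0.75.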

The validation with the official evaluation script of CoNLL 2018 Shared Task:

  • For inference on the SynTagRus test corpus, with the predicted fields emptied and all other fields left intact, the scores are the same as outlined above.
  • Serial inference with the UPOS - FEATS - LEMMA taggers yielded:
    • UPOS: 99.15%; UFeats: 97.75%; AllTags: 98.55%; Lemmas: 98.57% for the taggers trained on the original SynTagRus corpus;
    • UPOS: 99.15%; UFeats: 97.76%; AllTags: 98.53%; Lemmas: 98.58% for the taggers trained serially on the SynTagRus corpus as processed by the preceding taggers (the UPOS tagger for FEATS; the UPOS and FEATS taggers for LEMMA).

For completeness, we include that script in our distribution, so you can use it to evaluate your own models, too. For convenience, we also provide a wrapper for it, mordl.conll18_ud_eval.

Installation

pip

MorDL supports Python 3.5 or later. To install via pip, run:

$ pip install mordl

If you currently have a previous version of MorDL installed, run:

$ pip install mordl -U

From Source

Alternatively, you can install MorDL from the source of this git repository:

$ git clone https://github.com/fostroll/mordl.git
$ cd mordl
$ pip install -e .

This gives you access to examples that are not included in the PyPI package.

Usage

Our taggers use separate models, so each can be used independently. However, to achieve the best results, the FEATS tagger uses UPOS tags during training, and the LEMMA and NER taggers use both UPOS and FEATS tags. Thus, for a fully untagged corpus, the tagging pipeline applies the taggers serially, as shown below (assuming that our goal is NER and we already have trained taggers of all types):

from mordl import UposTagger, FeatsTagger, NeTagger

tagger_u, tagger_f, tagger_n = UposTagger(), FeatsTagger(), NeTagger()
tagger_u.load('upos_model')
tagger_f.load('feats_model')
tagger_n.load('misc-ne_model')

tagger_n.predict(
    tagger_f.predict(
        tagger_u.predict('untagged.conllu')
    ), save_to='result.conllu'
)

Any tagger in the pipeline may be replaced with a better one if you have it. The downside of separate taggers is that they require more memory. If all models were created with BERT embeddings and you load them into memory simultaneously, they may consume up to 9 GB on the GPU, or even more if you run them as part of a multiprocess server (for example, within a Flask application). In that case, use the device and dataset_device parameters during loading to distribute your models across several GPUs. Alternatively, if you just need to tag a corpus once, you can load the models serially:

tagger = UposTagger()
tagger.load('upos_model')
tagger.predict('untagged.conllu', save_to='result_upos.conllu')
del tagger  # free memory before loading the next model
tagger = FeatsTagger()
tagger.load('feats_model')
tagger.predict('result_upos.conllu', save_to='result_feats.conllu')
del tagger
tagger = NeTagger()
tagger.load('misc-ne_model')
tagger.predict('result_feats.conllu', save_to='result.conllu')
del tagger

Don't use identical input and output file names when you call the .predict() methods. Normally this is not a problem, because by default the methods load the whole input file into memory before tagging. But if the input file is large, you may want to use the split parameter so that the methods process the file in parts. In that case, the first part of the tagged data is saved before the next part is loaded, so identical file names will cause data loss.
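If you do need in-place processing, a common safeguard is to have the chunked pipeline write to a temporary file and rename it at the end, which makes identical input and output names safe. This is a generic sketch in plain Python, not MorDL code; the str.upper call merely stands in for tagging:

```python
import os
import tempfile

def process_in_chunks(in_path, out_path, chunk_size=1000):
    """Process a text file chunk by chunk, writing to a temporary file
    in the target directory and atomically replacing out_path at the end,
    so in_path == out_path cannot lose data."""
    out_dir = os.path.dirname(os.path.abspath(out_path))
    fd, tmp_path = tempfile.mkstemp(dir=out_dir)
    with os.fdopen(fd, 'w') as out, open(in_path) as inp:
        while True:
            # read up to chunk_size lines; readline() returns '' at EOF
            chunk = [line
                     for line in (inp.readline() for _ in range(chunk_size))
                     if line]
            if not chunk:
                break
            out.writelines(line.upper() for line in chunk)  # stand-in for tagging
    os.replace(tmp_path, out_path)  # atomic rename over the destination
```

Because the temporary file lives in the same directory as the destination, the final os.replace is a rename on the same filesystem and either fully succeeds or leaves the original file untouched.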

The training process is also simple. If you have training corpora and don't need any experiments, just run:

from mordl import UposTagger

tagger = UposTagger()
tagger.load_train_corpus(train_corpus)
tagger.load_test_corpus(dev_corpus)

stat = tagger.train('upos_model', device='cuda:0', word_emb_tune_params={})

This is the training pipeline for the UPOS tagger; the pipelines for the other taggers are identical. If you want to train the model again, possibly achieving better results, without re-tuning the word embeddings, set word_emb_tune_params to None.

For a more complete understanding of MorDL toolkit usage, refer to the Python notebook with pipeline examples in the examples directory of the MorDL GitHub repository. Also, the detailed descriptions are available in the docs:

MorDL Basics

Part of Speech Tagging

Single Feature Tagging

Multiple Feature Tagging

Lemmata Prediction

Named-entity Recognition

Supplements

This project was developed with a focus on the Russian language, but the few language-specific nuances we used are unlikely to worsen the quality of processing for other languages.

MorDL supports CoNLL-U format when the input/output is a file, and Parsed CoNLL-U when the input/output is an object. MorDL also accepts Corpuscula's corpora wrappers as input.
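For reference, a CoNLL-U sentence is a block of tab-separated lines with ten columns, terminated by a blank line. A minimal reader might look like the following sketch (our own illustration, not MorDL's implementation; it skips comment, multiword-token, and empty-node lines for brevity):

```python
def parse_conllu(text):
    """Yield sentences from CoNLL-U text as lists of per-token dicts."""
    # The ten standard CoNLL-U columns.
    FIELDS = ('ID', 'FORM', 'LEMMA', 'UPOS', 'XPOS',
              'FEATS', 'HEAD', 'DEPREL', 'DEPS', 'MISC')
    sentence = []
    for line in text.splitlines():
        if not line.strip():
            if sentence:                 # a blank line ends the sentence
                yield sentence
                sentence = []
        elif not line.startswith('#'):   # skip comment lines
            cols = line.split('\t')
            if cols[0].isdigit():        # skip 1-2 ranges and 1.1 empty nodes
                sentence.append(dict(zip(FIELDS, cols)))
    if sentence:                         # flush a final unterminated sentence
        yield sentence
```

Each token dict then exposes the fields the taggers fill in, e.g. token['UPOS'] or token['FEATS'].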

License

MorDL is released under the BSD License. See the LICENSE file for more details.
