
Rule-based, linguist-friendly (and rather slow) morphological analysis


uniparser-morph

This is yet another rule-based morphological analysis tool. No built-in rules are provided; you will have to write some if you want to parse texts in your language. Uniparser-morph was developed primarily for under-resourced languages, which don't have enough data for training statistical parsers. Here's how it's different from other similar tools:

  • It is designed to be usable by theoretical linguists with no prior knowledge of NLP (and has been successfully used by them with minimal guidance). So it's not just another way of defining an FST; the way you describe lexemes and morphology resembles what you do in a traditional theoretical description, at least in part.
  • It was developed with a large variety of linguistic phenomena in mind and is easily applicable to most languages -- not just Standard Average European ones.
  • Apart from POS-tagging and full morphological tagging, there is a glossing option (words can be split into morphemes).
  • Lexemes can carry any number of attributes that have to end up in the annotation, e.g. translations into the metalanguage.
  • Ambiguity is allowed: all words you analyze will receive all theoretically possible analyses regardless of the context. (You can then use e.g. CG for rule-based disambiguation.)
  • While, in computational terms, the language described by uniparser-morph rules is certainly regular, the description is actually NOT entirely converted into an FST. Therefore, it's not nearly as fast as FST-based analyzers. The speed varies depending on the language structure and hardware characteristics, but you can hardly expect to parse more than 20,000 words per second. For heavily polysynthetic languages that figure can go as low as 200 words per second. So it's not really designed for industrial use.

The primary usage scenario I was thinking about is the following:

  • You have a corpus of texts where you want to add morphological annotation (this includes POS-tagging).
  • You manually prepare a grammar for the language in uniparser-morph format (probably making use of existing digital dictionaries of the language).
  • You compile a list of unique words in your corpus and parse it.
  • Then you annotate your texts based on this wordlist with any software you want.
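The wordlist step can be sketched in plain Python; the tokenizer and the sample corpus below are illustrative, not part of uniparser-morph:

```python
import re
from collections import Counter

def build_frequency_list(text):
    """Count unique word forms in a text (naive \\w+ tokenizer)."""
    tokens = re.findall(r"\w+", text)
    return Counter(tokens)

corpus = "Мон тонэ яратӥсько . Мон морфологиез яратӥсько ."
freq = build_frequency_list(corpus)
# Each unique word form needs to be parsed only once,
# no matter how often it occurs in the corpus
for word, count in freq.most_common():
    print(word, count)
```

The point of parsing a deduplicated wordlist rather than the running text is that the slow morphological analysis runs once per unique word form instead of once per token.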

Of course, you can do other things with uniparser-morph, e.g. make it a part of a more complex NLP pipeline; just make sure low speed is not an issue in your case.

uniparser-morph is distributed under the MIT license (see LICENSE).

Usage

Import the Analyzer class from the package. Here is a basic usage example:

from uniparser_morph import Analyzer
a = Analyzer()

# Put your grammar files in the current folder or set paths as properties of the Analyzer class (see the documentation)
a.load_grammar()

analyses = a.analyze_words('Морфологиез')
# The parser is initialized before first use, so expect some delay here (usually several seconds)
# You will get a list of Wordform objects

# You can also pass lists (even nested lists) and specify output format ('xml' or 'json'):
analyses = a.analyze_words([['А'], ['Мон', 'тонэ', 'яратӥсько', '.']], format='xml')
analyses = a.analyze_words(['Морфологиез', [['А'], ['Мон', 'тонэ', 'яратӥсько', '.']]], format='json')

If you need to parse a frequency list, use analyze_wordlist() instead.
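Once the wordlist has been parsed, annotating a text reduces to a lookup. Here is a minimal sketch under the assumption that the analyses have been collected into a plain dict; the dict contents (lemmas and tags) are invented for illustration and do not come from uniparser-morph:

```python
def annotate(tokens, parsed):
    """Attach precomputed analyses (if any) to each token of a text."""
    return [(tok, parsed.get(tok, [])) for tok in tokens]

# Hypothetical wordlist parses: word form -> list of (lemma, tags) analyses
parsed = {
    "Мон": [("мон", "PRO,1,sg")],
    "яратӥсько": [("яратыны", "V,prs,1,sg")],
}
tokens = ["Мон", "тонэ", "яратӥсько", "."]
for tok, analyses in annotate(tokens, parsed):
    print(tok, analyses)
```

Unknown tokens simply receive an empty list of analyses, which is where ambiguity resolution or manual correction would pick up.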

See the documentation for the full list of options.

Format

If you want to create a uniparser-morph analyzer for your language, you will have to write a set of rules that describe the vocabulary and the morphology of your language in uniparser-morph format. For a description of the format, refer to the documentation.

Disambiguation with CG

If you have disambiguation rules in the Constraint Grammar format, you can use them in the following way when calling analyze_words():

import os

analyses = a.analyze_words(['Мон', 'морфологиез', 'яратӥсько', '.'],
                           cgFile=os.path.abspath('disambiguation.cg3'),
                           disambiguate=True)

In order for this to work, you have to install the cg3 executable separately. On Ubuntu/Debian, you can use apt-get:

sudo apt-get install cg3

On Windows, download the binary and add its folder to the PATH environment variable. See the documentation for other options.

Note that each time you call analyze_words() with disambiguate=True, the CG grammar is loaded and compiled from scratch, which slows the analysis down further. If you are analyzing a large text, pass the entire text in a single function call rather than processing it sentence by sentence.

