pyinflect

A Python module for word inflections that works as a spaCy extension.

--> Note that a more sophisticated system now exists in LemmInflect which includes both lemmatization and inflection, along with more advanced methods for word form disambiguation. You might want to try that module first if you're looking for top performance.

This module is designed as an extension for spaCy and will return the inflected form of a word based on a supplied Penn Treebank part-of-speech tag. It can also be used as a standalone module outside of spaCy. It is based on the Automatically Generated Inflection Database (AGID), which provides a list of inflections for various word lemmas. See the scripts directory for utilities that make good examples, or the tests directory for unit tests / examples.

Installation

pip3 install pyinflect

Usage as an Extension to spaCy

To use with spaCy, you need spaCy version 2.0 or later. Versions 1.9 and earlier do not support the extension methods used here.

To use as an extension to spaCy, first import the module. This creates a new inflect method for each spaCy Token that takes a Penn Treebank tag as its parameter. The method returns the inflected form of the token's lemma based on the supplied treebank tag.

> import spacy
> import pyinflect
> nlp = spacy.load('en_core_web_sm')
> tokens = nlp('This is an example of xxtest.')
> tokens[3]._.inflect('NNS')
examples

When more than one spelling/form exists for the given tag, an optional form number can be supplied to select one; otherwise the first is returned.

> tokens[1]._.inflect('VBD', form_num=0)
was
> tokens[1]._.inflect('VBD', form_num=1)
were

When the lemma you wish to inflect is not in the lookup dictionary, the method returns None. The optional parameter inflect_oov can be used to inflect the word using regular inflection rules instead. In this case form_num=0 selects the "regular" inflection, and form_num=1 selects the "doubled" version for verbs and adjectives/adverbs, or the "Greco-Latin" form for nouns.

> tokens[5]._.inflect('VBG', inflect_oov=True)
xxtesting
> tokens[5]._.inflect('VBG', inflect_oov=True, form_num=1)
xxtestting

You will need to determine yourself which form_num to use. There are basic helper functions in pyinflect.InflectionRules which can guess whether the lemma uses "doubling" or "Greco-Latin" style rules.
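To illustrate the kind of guess involved (this is a standalone sketch, not pyinflect.InflectionRules' actual API), a rough "doubling" heuristic checks whether the lemma ends in a consonant-vowel-consonant pattern, as in "run" -> "running":

```python
VOWELS = set('aeiou')

def looks_like_doubling(lemma):
    """Rough guess at whether a lemma doubles its final consonant when
    inflected (e.g. 'run' -> 'running'): a consonant-vowel-consonant
    ending, excluding finals like w/x/y that are never doubled."""
    if len(lemma) < 3:
        return False
    a, b, c = lemma[-3], lemma[-2], lemma[-1]
    return a not in VOWELS and b in VOWELS and c not in VOWELS and c not in 'wxy'

print(looks_like_doubling('run'))    # True  -- ends consonant-vowel-consonant
print(looks_like_doubling('watch'))  # False -- ends in two consonants
```

A heuristic like this will misfire on many English words (stress placement also matters), which is why the module leaves the final form_num choice to the caller.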

Usage Standalone

To use standalone, import the methods getAllInflections and/or getInflection and call them directly. getAllInflections returns all entries in the infl.csv file as a dictionary of inflected forms, where each entry is a tuple with one or more spellings/forms for a given treebank tag. The optional parameter pos_type (which is V, A or N) can be used to limit the returned data to a specific part of speech. The method getInflection takes a lemma and a Penn Treebank tag and returns a tuple of the specific inflection(s) associated with it.

> from pyinflect import getAllInflections, getInflection
> getAllInflections('watch')
{'NN': ('watch',), 'NNS': ('watches',), 'VB': ('watch',), 'VBP': ('watch',), 'VBD': ('watched',), 'VBN': ('watched',), 'VBG': ('watching',), 'VBZ': ('watches',)}

> getAllInflections('watch', pos_type='V')
{'VB': ('watch',), 'VBP': ('watch',), 'VBD': ('watched',), 'VBN': ('watched',), 'VBG': ('watching',), 'VBZ': ('watches',)}

> getInflection('watch', tag='VBD')
('watched',)

The method getInflection also takes the parameter inflect_oov and uses it as described above for spaCy.

> getInflection('xxtest', 'VBG', inflect_oov=True)
('xxtesting', 'xxtestting')

Issues:

If you find a bug, please report it on the GitHub issues list. However, be aware that when it comes to returning the correct inflection, a number of different types of issues can arise, some of which are not readily fixable. Issues with inflected forms include...

  • Multiple spellings for an inflection (e.g., arthroplasties, arthroplastyes or arthroplastys)
  • Mass form and plural types (e.g., people vs. peoples)
  • Forms that depend on context (e.g., further vs. farther)
  • Inflections that are not fully specified by the tag (e.g., be/VBD can be "was" or "were")
  • Incorrect lemmatization from spaCy (e.g., hating -> hat)
  • Incorrect tagging (e.g., VBN vs. VBD)
  • Errors in the AGID database

In order to ensure that pyInflect returns the most commonly used inflected form/spelling for a given tag, a corpus technique is used. In scripts/12_CreateOverridesList.py, words are lemmatized and tagged with spaCy, then re-inflected with pyInflect. When the original corpus word differs from pyInflect's output, the most commonly seen form is written to the overrides.csv file. This technique also helps overcome lemmatization and tagging issues from spaCy, and errors in the AGID database. The file CorpMultiInfls.txt is a list of inflections/tags that came from multiple words in the corpus and thus may be problematic.
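The core of the overrides technique can be sketched in a few lines. This is an illustrative toy (the corpus triples and toy_inflect stand-in are hypothetical, not the real script): count the forms observed in a corpus per (lemma, tag) pair, and record an override wherever the corpus majority disagrees with the inflector's answer.

```python
from collections import Counter

# Hypothetical mini-corpus of (lemma, tag, observed form) triples.
corpus = [
    ('arthroplasty', 'NNS', 'arthroplasties'),
    ('arthroplasty', 'NNS', 'arthroplasties'),
    ('arthroplasty', 'NNS', 'arthroplastys'),
]

def toy_inflect(lemma, tag):
    # Stand-in for the inflector; suppose it picks the rarer spelling here.
    return 'arthroplastys'

# Count how often each form appears for a given (lemma, tag) pair.
form_counts = Counter((lemma, tag, form) for lemma, tag, form in corpus)

# Keep the most frequently seen form per (lemma, tag).
best = {}
for (lemma, tag, form), count in form_counts.items():
    current = best.get((lemma, tag))
    if current is None or count > current[1]:
        best[(lemma, tag)] = (form, count)

# Emit an override only where the corpus majority disagrees with the inflector.
overrides = {k: v[0] for k, v in best.items() if toy_inflect(*k) != v[0]}
print(overrides)  # {('arthroplasty', 'NNS'): 'arthroplasties'}
```

The real script additionally has to contend with spaCy's lemmatizer and tagger disagreeing with the corpus, which is why ambiguous entries are set aside in CorpMultiInfls.txt.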

One common issue is that some forms of the verb "be" are not completely specified by the treebank tag. For instance, be/VBD inflects to either "was" or "were", and be/VBP inflects to either "am" or "are". When the inflected form is ambiguous, the first form is returned by default. Setting form_num in the spaCy inflection method allows returning the other form(s).

Note that the AGID data is created by a third party and not maintained here. Some lemmas are not in that data file, infl.csv, and thus cannot be inflected using the dictionary methods. In some cases the AGID may not contain the best inflection of the word. For instance, the lemma "people" with tag "NNS" will return "peoples" (pre-overrides), where you may want "people", which is also plural.

Tags:

The module determines the inflection(s) returned by either a pos_type or a Penn Treebank tag. The pos_type is either 'V', 'A' or 'N' for verb, adjective/adverb or noun respectively. A list of treebank tags can be found here. Not all of these are used by pyinflect. The following is a list of the various types and tags used...

pos_type = 'A'
* JJ      Adjective
* JJR     Adjective, comparative
* JJS     Adjective, superlative
* RB      Adverb
* RBR     Adverb, comparative
* RBS     Adverb, superlative

pos_type = 'N'
* NN      Noun, singular or mass
* NNS     Noun, plural

pos_type = 'V'
* VB      Verb, base form
* VBD     Verb, past tense
* VBG     Verb, gerund or present participle
* VBN     Verb, past participle
* VBP     Verb, non-3rd person singular present
* VBZ     Verb, 3rd person singular present
* MD      Modal
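The tag-to-pos_type mapping above can be captured in a small lookup table. This is an illustrative sketch built directly from the list (the names TAG_TO_POS_TYPE and pos_type_for are not part of pyinflect's API):

```python
# Lookup built from the tag list above; not part of pyinflect's API.
TAG_TO_POS_TYPE = {
    'JJ': 'A', 'JJR': 'A', 'JJS': 'A',
    'RB': 'A', 'RBR': 'A', 'RBS': 'A',
    'NN': 'N', 'NNS': 'N',
    'VB': 'V', 'VBD': 'V', 'VBG': 'V',
    'VBN': 'V', 'VBP': 'V', 'VBZ': 'V', 'MD': 'V',
}

def pos_type_for(tag):
    """Return the pos_type ('V', 'A' or 'N') for a Penn Treebank tag,
    or None for tags pyinflect does not handle (e.g. DT, IN)."""
    return TAG_TO_POS_TYPE.get(tag)

print(pos_type_for('VBD'))  # V
print(pos_type_for('DT'))   # None
```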
