
A simple program for calculating lexical diversity

Project description

To install using pip:

pip install lexical-diversity

Get started:

>>> from lexical_diversity import lex_div as ld

Pre-processing texts:

For convenience, texts can be tokenized with the built-in tokenize() function, or with any external tokenizer (e.g., from NLTK):

>>> text = """The state was named for the Colorado River, which Spanish travelers named the Río Colorado for the ruddy silt the river carried from the mountains. The Territory of Colorado was organized on February 28, 1861, and on August 1, 1876, U.S. President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. Colorado is nicknamed the "Centennial State" because it became a state a century after the signing of the United States Declaration of Independence. Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its vivid landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is part of the western or southwestern United States, and one of the Mountain States. Denver is the capital and most populous city of Colorado. Residents of the state are known as Coloradans, although the antiquated term "Coloradoan" is occasionally used."""

>>> tok = ld.tokenize(text)
>>> print(tok[:10])
['the', 'state', 'was', 'named', 'for', 'the', 'colorado', 'river', 'which', 'spanish']
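The exact rules of tokenize() aren't documented here, but judging from the output above (lowercased, punctuation stripped), a rough stand-in can be written in plain Python. The function name simple_tokenize and the regex are illustrative assumptions, not the library's actual implementation:

```python
import re

def simple_tokenize(text):
    # Lowercase and keep alphabetic runs, allowing internal apostrophes;
    # a rough approximation of ld.tokenize(), not its actual rules.
    return re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())

tok = simple_tokenize("The state was named for the Colorado River.")
print(tok[:5])  # ['the', 'state', 'was', 'named', 'for']
```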

For convenience, you can also lemmatize texts using the simple flemmatize() function, which is not part-of-speech sensitive ('run' as a noun and 'run' as a verb are treated as the same word). For better results, use a part-of-speech-sensitive lemmatizer (e.g., spaCy).

>>> flt = ld.flemmatize(text)
>>> print(flt[:10])
['the', 'state', 'be', 'name', 'for', 'the', 'colorado', 'river', 'which', 'spanish']  

Calculating lexical diversity:

Simple TTR

>>> ld.ttr(flt)

Root TTR

>>> ld.root_ttr(flt)

Log TTR

>>> ld.log_ttr(flt)

Maas TTR

>>> ld.maas_ttr(flt)
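The four indices above are all simple transformations of the type count T and token count N. A back-of-the-envelope sketch of the standard formulas (my own helper, not the library's code):

```python
import math

def ttr_variants(tokens):
    # T = number of unique word types, N = total number of tokens.
    t, n = len(set(tokens)), len(tokens)
    return {
        "ttr": t / n,                          # simple type-token ratio
        "root_ttr": t / math.sqrt(n),          # Guiraud's root TTR
        "log_ttr": math.log(t) / math.log(n),  # Herdan's log TTR
        "maas_ttr": (math.log(n) - math.log(t)) / math.log(n) ** 2,  # Maas index
    }

scores = ttr_variants(["the", "state", "was", "named", "for", "the", "state"])
print(scores["ttr"])  # 5 types / 7 tokens ≈ 0.714
```

Note that, unlike the others, a lower Maas value indicates higher diversity.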

Mean segmental TTR (MSTTR)

By default, the segment size is 50 words. However, this can be customized using the window_length argument.

>>> ld.msttr(flt)

>>> ld.msttr(flt, window_length=25)
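Conceptually, MSTTR chops the text into non-overlapping segments of window_length tokens, discards any leftover tail, and averages the per-segment TTRs. A sketch of that idea (assumed behavior; msttr_sketch is an illustrative helper, not the library's source):

```python
def msttr_sketch(tokens, window_length=50):
    # Split into non-overlapping segments; any partial segment at the
    # end is discarded before averaging the segment TTRs.
    segments = [tokens[i:i + window_length]
                for i in range(0, len(tokens) - window_length + 1, window_length)]
    ttrs = [len(set(seg)) / len(seg) for seg in segments]
    return sum(ttrs) / len(ttrs)

tokens = ["the", "cat", "sat", "on", "the", "mat"] * 20  # 120 tokens, 5 types
print(round(msttr_sketch(tokens, window_length=10), 3))  # → 0.5
```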

Moving average TTR (MATTR)

By default, the window size is 50 words. However, this can be customized using the window_length argument.

>>> ld.mattr(flt)

>>> ld.mattr(flt, window_length=25)
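MATTR instead slides a fixed-size window forward one token at a time and averages the TTR of every window position, so no text is discarded. A sketch of the idea (assumed behavior; mattr_sketch is an illustrative helper):

```python
def mattr_sketch(tokens, window_length=50):
    # Slide a fixed-size window one token at a time and average
    # the TTR of each window position.
    windows = [tokens[i:i + window_length]
               for i in range(len(tokens) - window_length + 1)]
    return sum(len(set(w)) / window_length for w in windows) / len(windows)

tokens = ["a", "a", "b", "c", "a", "b"]
print(round(mattr_sketch(tokens, window_length=3), 3))  # → 0.917
```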

Hypergeometric distribution D (HDD)

A more straightforward and reliable implementation of vocD (Malvern, Richards, Chipere, & Durán, 2004), as described by McCarthy and Jarvis (2007, 2010).

>>> ld.hdd(flt)
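HD-D treats each word type as a hypergeometric draw: for every type, it computes the probability that the type appears at least once in a random sample of tokens (McCarthy and Jarvis use a sample size of 42), scales each contribution by the sample size, and sums. A sketch using exact combinatorics (my own helper, not the library's implementation):

```python
from collections import Counter
from math import comb

def hdd_sketch(tokens, sample_size=42):
    n = len(tokens)
    total = 0.0
    for freq in Counter(tokens).values():
        # P(type is absent from a random sample of sample_size tokens)
        p_absent = comb(n - freq, sample_size) / comb(n, sample_size)
        # Each type contributes its presence probability, scaled so the
        # score is comparable to a TTR over sample_size tokens.
        total += (1 - p_absent) / sample_size
    return total

tokens = ["word%d" % (i % 30) for i in range(120)]  # 30 types, 120 tokens
print(round(hdd_sketch(tokens), 3))
```

With fully distinct tokens the score approaches 1.0, the TTR of a maximally diverse sample.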

Measure of lexical textual diversity (MTLD)

Calculates MTLD based on McCarthy and Jarvis (2010).

>>> ld.mtld(flt)

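MTLD counts how many "factors" the text breaks into: a factor is complete each time the running TTR falls to the 0.720 threshold (McCarthy and Jarvis, 2010), and the score is the token count divided by the factor count, with the final partial stretch credited proportionally. A simplified one-directional sketch (the published algorithm averages forward and backward passes; mtld_one_pass is an illustrative helper, not the library's code):

```python
def mtld_one_pass(tokens, threshold=0.72):
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        types.add(tok)
        count += 1
        if len(types) / count <= threshold:
            # Running TTR hit the threshold: close a complete factor.
            factors += 1
            types, count = set(), 0
    if count:
        # Partial credit for the leftover stretch, proportional to how
        # far its TTR has fallen from 1.0 toward the threshold.
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else 0.0

print(round(mtld_one_pass(["a", "b"] * 100), 2))  # → 3.03
```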
Measure of lexical textual diversity (moving average, wrap)

Calculates MTLD using a moving window approach. Instead of calculating partial factors, it wraps to the beginning of the text to complete the final factor.

>>> ld.mtld_ma_wrap(flt)

Measure of lexical textual diversity (moving average, bi-directional)

Calculates the average of the forward and backward MTLD scores, computing MTLD in each direction using a moving window approach.

>>> ld.mtld_ma_bid(flt)

