Determine the number of unique words in a given text/string

Project description

eigolingo

Using eigolingo you can answer the question "How many unique words are in this book/text/novel?" in under a second.

This repository contains the resources necessary to generate English wordlists (wordlist70.txt, wordlist80.txt, and wordlist95.txt) and inflection dictionaries (dict70.txt, dict80.txt, and dict95.txt). To see how they are generated, open the notebook in Google Colab and modify the process as you please.

Motivation

It all started with the simple question: "How many unique words are there in {novel}?" While figuring out ways to answer this question, I encountered several obstacles, which ultimately led me to create the wordlists and dictionaries. Some of these obstacles include:

  1. Do inflected forms count separately?
  2. To what extent should proper nouns be included?
  3. Should words denoting origin (e.g. French, Swedish) count?
  4. What about denominal verbs?
  5. Should one include words the author made up (e.g. ungood, doublethink)?
  6. ...

Therefore I decided to create wordlists that define what an "allowed" English word is. Because different answers to the questions above lead to different lists, I created multiple lists of varying size, denoted by the number in the filename. From those wordlists, inflection dictionaries were then built which map each inflection to its lemma.
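To make the mapping concrete, here is a hypothetical excerpt of what such a dictionary conceptually holds (the actual layout of the dict70/80/95.txt files may differ): every inflected form points to its lemma, so all forms of a word count as a single unique word.

    inflections = {
        "run": "run",
        "runs": "run",
        "running": "run",
        "ran": "run",
        "better": "good",  # irregular forms also map back to their lemma
        "best": "good",
    }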

Result

In its current form, the question eigolingo answers is closer to: "How many unique words does one need to know to understand the novel?" This is due to how I decided to answer the posed questions. The inflection dictionary is used to determine the lemma of a token (word). During the creation of the dictionary, several filters are applied which do (or should) remove proper nouns to a large extent. I did this because I currently believe that most proper nouns are not integral to understanding a novel and therefore don't carry the weight of what I wanted to call a unique word. This is, of course, my own opinion, and the filtering can easily be excluded if deemed necessary.
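As a rough sketch of this counting procedure, assuming the dict files hold one tab-separated inflection/lemma pair per line and using a deliberately naive tokenizer (the actual file format and pre-processing in eigolingo.py may differ):

    import re
    import sys

    def load_dict(path):
        # Map every inflected form to its lemma, one tab-separated
        # pair per line (assumed format).
        lemma_of = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip("\n").split("\t")
                if len(parts) == 2:
                    lemma_of[parts[0]] = parts[1]
        return lemma_of

    def count_unique(text, lemma_of):
        # Naive tokenizer: lowercase runs of letters and apostrophes.
        tokens = re.findall(r"[a-z']+", text.lower())
        lemmas, counted = set(), 0
        for tok in tokens:
            lemma = lemma_of.get(tok)
            if lemma is not None:  # tokens without a lemma are "not counted"
                lemmas.add(lemma)
                counted += 1
        return len(lemmas), counted, len(tokens)

    if __name__ == "__main__":
        lemma_of = load_dict("dict70.txt")
        text = open(sys.argv[1], encoding="utf-8").read()
        unique, counted, total = count_unique(text, lemma_of)
        print(f"{unique} unique words; {counted / total:.2%} of {total} tokens counted")

The three printed numbers correspond to the Unique Tokens, Tokens Counted, and Total Tokens columns in the table below.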

To illustrate how this works, I will use George Orwell's novel 1984, which recently entered the public domain in most countries.
Using the varying dictionary sizes we get:

Dict Size   Unique Tokens   Tokens Counted   Total Tokens
70          6158            98.41%           104759
80          6166            98.44%           104759
95          6215            98.85%           104759

The eight additional tokens counted by the 80 dictionary include:
eard, sye, versificator, stepney, mache, mediterranean, schoolmasterish

Some of the additional words counted by the 95 dictionary but not by the 80 are:
russian, cromwell, noctis, brazilian, undark, dane, eurasian, african, jewish, chinese, australian, milton, leopoldville, ungood

Some of the tokens not counted:
doubleplusungood, japan, minitrue, baal, doubleplusgood, sunday, doubleplus, russia, uncold, brazzaville, byron, thoughtcriminals, goldstein, presia, adam, tibet, rhine

This also includes typos inside the book and errors caused by my pre-processing: lslands (the first letter is a lowercase L, not an i), ealth ('ealth written as dialect), and humand (from "machine did raise the living standards of the average humand being very greatly over a period of about fifty years").

In this case, going from 70 to 80 counts just 8 additional tokens. The interesting bit lies in the jump from 80 to 95: nearly 50 additional tokens, and, as one can see, many of them are adjectives denoting origin. I don't actively filter for these; it just so happens that only the 95 dictionary includes them. Not counted are, of course, the proper nouns that I actively filter out, but most of the words Orwell made up also fall into this category. In total, 206 tokens are not considered by the script.

Usage

Usage is as easy as it gets; the dictionary size argument is optional and defaults to 70.

python eigolingo.py [textfile] [dictionary size]

e.g.

python eigolingo.py 1984.txt 70
python eigolingo.py 1984.txt 80
python eigolingo.py 1984.txt 95

License

Please check the licenses of the wordlists used. The necessary sources can be found in the README in the lists directory.

The repository itself is covered by an MIT-like license.

