# english-words-py

Generate sets of English words by combining different word lists.
Returns sets of English words created by combining different word lists. Example usage: to get a set of English words from the "web2" word list, including only lower-case letters, write the following:
```python
>>> from english_words import get_english_words_set
>>> web2lowerset = get_english_words_set(['web2'], lower=True)
```
## Usage
From the main package, import `get_english_words_set` as demonstrated above. This function takes a number of arguments; the first is a list of word list identifiers for the word lists to combine, and the rest are flags. These arguments are described here (in the following order):
- `sources` is an iterable containing strings corresponding to word list identifiers (see the "Word lists" subsection below)
- `alpha` (default `False`) is a flag specifying that all non-alphanumeric characters (e.g. `-`, `'`) should be stripped
- `lower` (default `False`) is a flag specifying that all upper-case letters should be converted to lower-case
Each word list is pre-processed to handle the above flags, so using any combination of options will not cause the function to run slower.
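As a sketch, the per-word transformation that the `alpha` and `lower` flags describe could look like the following; `preprocess` here is a hypothetical helper for illustration, not the package's actual internals:

```python
# Illustrative sketch of the per-word preprocessing the alpha/lower flags
# describe; `preprocess` is a hypothetical helper, not the package's API.
def preprocess(words, alpha=False, lower=False):
    processed = set()
    for word in words:
        if alpha:
            # Strip non-alphanumeric characters such as '-' and "'"
            word = "".join(ch for ch in word if ch.isalnum())
        if lower:
            word = word.lower()
        if word:  # drop words that became empty after stripping
            processed.add(word)
    return processed
```

Because the published package ships word lists already processed for every flag combination, this work happens once at build time rather than on each call.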
Note that some care needs to be used when combining word lists. For example, only proper nouns in the `web2` word list are capitalized, but most words in the `gcide` word list are capitalized.
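A minimal illustration of the pitfall, using hand-made samples rather than the real lists:

```python
# Hand-made samples illustrating the capitalization mismatch; the real
# word lists are far larger.
web2_sample = {"apple", "Boston"}    # web2: only proper nouns capitalized
gcide_sample = {"Apple", "Boston"}   # gcide: most words capitalized

# A naive union keeps near-duplicate entries differing only by case.
combined = web2_sample | gcide_sample

# Lower-casing both sides (what the lower flag does) removes the duplicates.
normalized = {w.lower() for w in combined}
```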
## Word lists
| Name/URL | Identifier | Notes |
|---|---|---|
| GCIDE 0.53 index | `gcide` | Words found in the GNU Collaborative International Dictionary of English 0.53. Most words are capitalized (the exact capitalization convention is unclear). Contains some entries with multiple words (currently you must use the `alpha` option to exclude these). Unicode characters are currently unprocessed; for example, `<ae/` is present in the dictionary instead of `æ`. Ideally, these should all be converted. |
| web2 revision 326913 | `web2` | |
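The unconverted entities could in principle be handled with a small translation table. The sketch below is an assumption based on the `<ae/` example above; only that one entity is confirmed, and the mapping is not part of the package:

```python
# Hypothetical mapping from GCIDE-style entity sequences to Unicode.
# Only <ae/ appears in the notes above; the upper-case variant is a guess.
GCIDE_ENTITIES = {
    "<ae/": "æ",
    "<AE/": "Æ",
}

def decode_entities(word):
    """Replace known entity sequences with their Unicode equivalents."""
    for entity, char in GCIDE_ENTITIES.items():
        word = word.replace(entity, char)
    return word
```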
## Adding additional word lists
To add a word list, say with identifier `x`, put the word list (one word per line) into a plain text file `x.txt` in the `raw_data` directory at the root of the repository. Then, to process the word list (and all others in the directory), run the script `process_raw_data.py`.
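The raw format is simple enough that reading a list back is a single loop. This is only a sketch of the one-word-per-line format described above; the actual `process_raw_data.py` may do more (for example, generating the pre-processed flag variants):

```python
from pathlib import Path

# Sketch of reading a raw word list in the one-word-per-line format;
# blank lines are skipped, surrounding whitespace is stripped.
def load_word_list(path):
    words = set()
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        word = line.strip()
        if word:
            words.add(word)
    return words
```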
## Installation
Install this with pip:

```
pip install english-words
```
This package is unfortunately rather large (~20 MB), and it will run into scaling issues if more word lists or (especially) options are added. When that bridge is crossed, word lists should possibly be chosen by the user instead of all being included; word lists could also be preprocessed on the client side instead of being shipped with the package.