# Wordninja-Enhanced

Probabilistically split concatenated words. Now with more functionality and languages!

Split your merged words!

## ℹ About
This is a fork of the popular wordninja repository that improves on it in several ways.
Language support was extended to the following languages out of the box:
- English (en)
- German (de)
- French (fr)
- Italian (it)
- Spanish (es)
- Portuguese (pt)
Several new functions were added as well:
- A new rejoin() function splits merged words in a sentence and returns the whole sentence with the corrected words, while retaining spacing rules for punctuation characters.
- A candidates() function returns several possible splits instead of a single result, sorted by their cost.
- Additional words can be added to the dictionary, and unwanted words can be excluded, when initializing the LanguageModel.
- Hyphenated words are now supported.
- The algorithm now preserves punctuation while splitting merged words and no longer breaks down when encountering unknown characters.
More info about these functionalities can be found further down in the usage section.
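To give a feel for what "probabilistic" splitting means here: the original wordninja assigns each dictionary word a cost derived from its frequency rank (via Zipf's law) and picks the segmentation with the lowest total cost using dynamic programming. The following is a self-contained sketch of that idea with a tiny made-up dictionary; it is an illustration of the technique, not this library's actual implementation.

```python
import math

# Toy dictionary, ordered from most to least frequent (an assumption for
# illustration). Each word's cost is log(rank * log(N)), so frequent words
# are cheap and rare words are expensive.
WORDS = ["the", "these", "me", "split", "merged", "words", "for"]
COST = {w: math.log((i + 1) * math.log(len(WORDS))) for i, w in enumerate(WORDS)}
MAX_LEN = max(len(w) for w in WORDS)

def split(text):
    text = text.lower()
    # best[i] = (total cost, length of last word) for the cheapest
    # segmentation of text[:i].
    best = [(0.0, 0)]
    for i in range(1, len(text) + 1):
        options = []
        for k in range(1, min(i, MAX_LEN) + 1):
            word = text[i - k:i]
            if word in COST:
                options.append((best[i - k][0] + COST[word], k))
        best.append(min(options) if options else (float("inf"), 0))
    # Walk back through the chosen word lengths to recover the split.
    out, i = [], len(text)
    while i > 0:
        cost, k = best[i]
        if k == 0:
            raise ValueError("cannot split with this dictionary")
        out.append(text[i - k:i])
        i -= k
    return list(reversed(out))

print(split("Splitthesemergedwords"))  # ['split', 'these', 'merged', 'words']
```

The real library works the same way in spirit, but with full per-language frequency dictionaries and the extra handling of punctuation, hyphens, and unknown characters described above.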
## How to Install

```shell
pip install wordninja-enhanced
```
## Usage

The functionalities are demonstrated in the following code snippet:
```python
import wordninja_enhanced as wordninja

# This function splits merged words for you.
split_text = wordninja.split("Splitthesemergedwordsforme")
print(f"Example 1: {split_text}")

# This function gives you several candidates for how the input could be split,
# sorted by the lowest cost.
# The second argument specifies the number of candidates to return.
candidates_list = wordninja.candidates("derekanderson", 3)
print("Example 2:")
for i, candidate in enumerate(candidates_list):
    print(f"candidate {i+1}: {candidate}")

# This function splits merged words and returns the correctly split string,
# while applying correct spacing rules for punctuation characters.
rejoined_text = wordninja.rejoin("That'sthesheriff's\"badge\" youarewearing!")
print(f"Example 3: {rejoined_text}")

# Without any further arguments the default language is English.
lm = wordninja.LanguageModel()
joined_text = lm.rejoin("Thisisanothermergedtextexample.")
print(f"Example 4: {joined_text}")

# You can use another language by specifying it via the language parameter.
lm = wordninja.LanguageModel(language='de')
joined_text = lm.rejoin("Wiegehtesdir?")
print(f"Example 5: {joined_text}")

# The LanguageModel also allows you to use your own dictionary when the language
# is specified as 'custom'. It also allows you to specify additional words that
# may be missing from the dictionary, and words that should be excluded via a
# blacklist.
# The add_to_top parameter controls whether the additional words are added
# to the top of the dictionary (more likely to be split off) or to the bottom.
custom_lm = wordninja.LanguageModel(language='custom',
                                    word_file=r'path\to\your\custom_dict.txt.gz',
                                    add_words=[],
                                    blacklist=[],
                                    add_to_top=True)  # default: False
```
The output of the five examples is the following:

```
Example 1: ['Split', 'these', 'merged', 'words', 'for', 'me']
Example 2:
candidate 1: ['derek', 'anderson']
candidate 2: ['derek', 'anders', 'on']
candidate 3: ['derek', 'and', 'ers', 'on']
Example 3: That's the sheriff's "badge" you are wearing!
Example 4: This is another merged text example.
Example 5: Wie geht es dir?
```
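The punctuation spacing shown in Examples 3 and 5 (no space before `!` or `?`, apostrophes kept inside words) can be approximated with a small post-processing pass over the split tokens. The helper below is an illustrative sketch of that kind of rule, not the library's actual code; the function name and regex are assumptions.

```python
import re

def rejoin_tokens(tokens):
    # Join tokens with spaces, then delete the space before punctuation
    # that binds to the preceding word -- a rough approximation of the
    # spacing rules rejoin() applies.
    text = " ".join(tokens)
    return re.sub(r"\s+([.,!?;:])", r"\1", text)

print(rejoin_tokens(["Wie", "geht", "es", "dir", "?"]))  # Wie geht es dir?
```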
It can also handle long strings:

```python
>>> wordninja.split('wethepeopleoftheunitedstatesinordertoformamoreperfectunionestablishjusticeinsuredomestictranquilityprovideforthecommondefencepromotethegeneralwelfareandsecuretheblessingsoflibertytoourselvesandourposteritydoordainandestablishthisconstitutionfortheunitedstatesofamerica')
['we', 'the', 'people', 'of', 'the', 'united', 'states', 'in', 'order', 'to', 'form', 'a', 'more', 'perfect', 'union', 'establish', 'justice', 'in', 'sure', 'domestic', 'tranquility', 'provide', 'for', 'the', 'common', 'defence', 'promote', 'the', 'general', 'welfare', 'and', 'secure', 'the', 'blessings', 'of', 'liberty', 'to', 'ourselves', 'and', 'our', 'posterity', 'do', 'ordain', 'and', 'establish', 'this', 'constitution', 'for', 'the', 'united', 'states', 'of', 'america']
```
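For the `custom` language mode shown earlier, the `word_file` is a gzip-compressed word list with one word per line, ordered from most to least frequent (the layout used by the original wordninja's bundled dictionary; treat the exact format as an assumption for this fork). A minimal sketch of creating such a file with the standard library:

```python
import gzip

# Words ordered from most to least frequent -- earlier entries get a lower
# cost in the splitter. This tiny word list is purely illustrative.
words = ["the", "of", "and", "wordninja", "enhanced"]

with gzip.open("custom_dict.txt.gz", "wt", encoding="utf-8") as f:
    f.write("\n".join(words))

# The resulting file could then be passed to the model, e.g.:
# lm = wordninja.LanguageModel(language='custom', word_file='custom_dict.txt.gz')
```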
## Further notes

The dictionary files, along with the script used to create them, are included in the Dictionaries folder. They can be regenerated by simply running the script 'create_dictionaries.py'.
If you are interested in adding support for another language, feel free to add it to the language_config in the script and create a corresponding corpus folder with the language data.
## Acknowledgements

The dictionaries were created using the Leipzig Corpora Collection. Without their work this project would not have been possible.

D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages. In: Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC'12), 2012.