Clean Your Text to Create Normalized Text Representations
clean-text
Clean your text with clean-text to create normalized text representations. For instance, turn this corrupted input:
```
A bunch of \u2018new\u2019 references, including [Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29).
»Yóù àré rïght <3!«
```
into this clean output:
```
A bunch of 'new' references, including [moana](<URL>).
"you are right <3!"
```
clean-text uses ftfy, unidecode, and numerous hand-crafted rules (regular expressions).
Installation
To install the GPL-licensed package unidecode alongside:

```
pip install clean-text[gpl]
```

If you want to abstain from GPL-licensed code:

```
pip install clean-text
```
If unidecode is not available, clean-text will fall back to Python's unicodedata.normalize for transliteration. Transliteration to the closest ASCII symbols relies on manual mappings, e.g., ê to e. unidecode's hand-crafted mappings are superior, but unicodedata's are sufficient. However, you may want to disable this feature altogether, depending on your data and use case.
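The fallback described above can be sketched as follows. This is an illustrative snippet, not clean-text's actual code: it prefers unidecode when installed and otherwise uses the standard library's unicodedata.

```python
# Sketch of the unidecode -> unicodedata fallback (illustrative only).
import unicodedata

try:
    from unidecode import unidecode as to_ascii  # GPL-licensed, optional
except ImportError:
    def to_ascii(text: str) -> str:
        # Decompose accented characters (ê -> e + combining accent),
        # then drop the non-ASCII combining marks.
        decomposed = unicodedata.normalize("NFKD", text)
        return decomposed.encode("ascii", "ignore").decode("ascii")

print(to_ascii("Yóù àré rïght"))  # -> "You are right"
```

Note that the unicodedata approach only strips combining marks; unidecode also handles characters (e.g., ß, æ) that have no decomposed ASCII form.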
Usage
```python
from cleantext import clean

clean("some input",
    fix_unicode=True,          # fix various unicode errors
    to_ascii=True,             # transliterate to closest ASCII representation
    lower=True,                # lowercase text
    no_line_breaks=False,      # fully strip line breaks as opposed to only normalizing them
    no_urls=False,             # replace all URLs with a special token
    no_emails=False,           # replace all email addresses with a special token
    no_phone_numbers=False,    # replace all phone numbers with a special token
    no_numbers=False,          # replace all numbers with a special token
    no_digits=False,           # replace all digits with a special token
    no_currency_symbols=False, # replace all currency symbols with a special token
    no_punct=False,            # fully remove punctuation
    replace_with_url="<URL>",
    replace_with_email="<EMAIL>",
    replace_with_phone_number="<PHONE>",
    replace_with_number="<NUMBER>",
    replace_with_digit="0",
    replace_with_currency_symbol="<CUR>",
    lang="en"                  # set to 'de' for German special handling
)
```
Carefully choose the arguments that fit your task. The default parameters are listed above. Whitespace is always normalized.
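To illustrate what the special-token options (e.g., no_urls with replace_with_url) do conceptually, here is a minimal, hypothetical sketch using only the standard library. The regexes are deliberately simple; clean-text's actual patterns are more thorough.

```python
# Hypothetical sketch of special-token replacement (not clean-text's code).
import re

URL_RE = re.compile(r"https?://\S+")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def replace_tokens(text, url_token="<URL>", email_token="<EMAIL>"):
    """Swap matched patterns for special tokens, as no_urls/no_emails do."""
    text = URL_RE.sub(url_token, text)
    text = EMAIL_RE.sub(email_token, text)
    return text

print(replace_tokens("See https://example.com or mail me@example.org"))
# -> "See <URL> or mail <EMAIL>"
```

Replacing rather than deleting such spans keeps sentence structure intact, which matters for downstream tokenization.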
You may also use only specific cleaning functions. For this, take a look at the source code.
So far, only English and German are fully supported. It should work for the majority of Western languages. If you need special handling for your language, feel free to contribute. 🙃
Development
- install Pipenv
- get the package:
git clone https://github.com/jfilter/clean-text && cd clean-text && pipenv install
- run tests:
pipenv run pytest
Contributing
If you have a question, found a bug or want to propose a new feature, have a look at the issues page.
Pull requests are especially welcome when they fix bugs or improve the code quality.
If you don't like the output of clean-text, consider adding a test with your specific input and desired output.
Related Work
Acknowledgements
Built upon the work of Burton DeWilde for Textacy.
License
Apache