Functions to preprocess and normalize text.
User-generated content on the Web and in social media is often dirty. Preprocess your scraped data with
clean-text to create a normalized text representation. For instance, turn this corrupted input:
A bunch of \\u2018new\\u2019 references, including [Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29). »Yóù àré rïght <3!«
into this clean output:
A bunch of 'new' references, including [moana](<URL>). "you are right <3!"
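For reference, a minimal sketch that should produce output along these lines (assuming the package is installed and `no_urls=True` on top of the defaults; the exact result depends on your parameters and on whether unidecode is available):

```python
from cleantext import clean

# The raw string contains literal escape sequences and non-ASCII punctuation,
# mimicking the corrupted input shown above.
dirty = (
    "A bunch of \\u2018new\\u2019 references, including "
    "[Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29). »Yóù àré rïght <3!«"
)

print(clean(dirty, no_urls=True))  # URLs are replaced by the <URL> token
```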
To install the GPL-licensed package unidecode alongside:
pip install clean-text[gpl]
You may want to abstain from GPL:
pip install clean-text
NB: This package is named clean-text and not cleantext.
If unidecode is not available, clean-text will resort to Python's unicodedata.normalize for transliteration. Transliteration to the closest ASCII symbols involves manually crafted mappings; unidecode's mappings are superior, but unicodedata's are sufficient.
However, you may want to disable this feature altogether depending on your data and use case.
To make it clear: There are inconsistencies between processing text with or without unidecode.
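As a quick, illustrative sketch of that trade-off (the example strings are made up; the outputs assume the default parameters and may differ slightly between the two backends):

```python
from cleantext import clean

# Transliterate accented characters to their closest ASCII counterparts.
clean("Déjà vu in Zürich", to_ascii=True)   # -> "deja vu in zurich"

# Keep the original characters by disabling transliteration.
clean("Déjà vu in Zürich", to_ascii=False)  # -> "déjà vu in zürich"
```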
```python
from cleantext import clean

clean("some input",
    fix_unicode=True,               # fix various unicode errors
    to_ascii=True,                  # transliterate to closest ASCII representation
    lower=True,                     # lowercase text
    no_line_breaks=False,           # fully strip line breaks as opposed to only normalizing them
    no_urls=False,                  # replace all URLs with a special token
    no_emails=False,                # replace all email addresses with a special token
    no_phone_numbers=False,         # replace all phone numbers with a special token
    no_numbers=False,               # replace all numbers with a special token
    no_digits=False,                # replace all digits with a special token
    no_currency_symbols=False,      # replace all currency symbols with a special token
    no_punct=False,                 # remove punctuations
    replace_with_punct="",          # instead of removing punctuations you may replace them
    replace_with_url="<URL>",
    replace_with_email="<EMAIL>",
    replace_with_phone_number="<PHONE>",
    replace_with_number="<NUMBER>",
    replace_with_digit="0",
    replace_with_currency_symbol="<CUR>",
    lang="en"                       # set to 'de' for German special handling
)
```
Carefully choose the arguments that fit your task. The default parameters are listed above.
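As a hypothetical example, a configuration for scrubbing contact details from scraped posts might enable only the relevant replacements (the placeholder tokens are the defaults listed above; how reliably URLs and e-mail addresses are detected depends on the library's patterns):

```python
from cleantext import clean

clean(
    "Reach me at jane@example.com or via https://example.com/contact",
    lower=False,      # keep the original casing
    no_urls=True,     # -> <URL>
    no_emails=True,   # -> <EMAIL>
)
# roughly: "Reach me at <EMAIL> or via <URL>"
```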
You may also use only specific functions for cleaning. For this, take a look at the source code.
So far, only English and German are fully supported. It should work for the majority of western languages. If you need some special handling for your language, feel free to contribute. 🙃
There is also a scikit-learn compatible API to use in your pipelines. All of the parameters above work here as well.
pip install clean-text[gpl,sklearn]
pip install clean-text[sklearn]
```python
from cleantext.sklearn import CleanTransformer

cleaner = CleanTransformer(no_punct=False, lower=False)
cleaner.transform(['Happily clean your text!', 'Another Input'])
```
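Because the API is scikit-learn compatible, the transformer can be chained with other steps; a minimal sketch, assuming it follows the usual fit/transform contract (CountVectorizer is used here only for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

from cleantext.sklearn import CleanTransformer

pipeline = Pipeline([
    ("clean", CleanTransformer(no_punct=True, lower=True)),  # normalize the raw strings
    ("vectorize", CountVectorizer()),                        # turn cleaned strings into token counts
])

X = pipeline.fit_transform(["Happily clean your text!", "Another Input"])
```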
If you have a question, found a bug or want to propose a new feature, have a look at the issues page.
Pull requests are especially welcomed when they fix bugs or improve the code quality.
If you don't like the output of clean-text, consider adding a test with your specific input and desired output.
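Such a test can be a short pytest function; a sketch (the input and expected output here are made up, substitute your own case):

```python
# test_my_case.py — a hypothetical test case, run with pytest
from cleantext import clean

def test_my_specific_input():
    # Replace with your specific input and the output you expect.
    assert clean("Söme Strange Input") == "some strange input"
```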
Related work:
- Generic text cleaning packages
- Full-blown NLP libraries with some text cleaning
- Remove or replace strings
- Clean massive Common Crawl data