Functions to preprocess and normalize text.

clean-text

User-generated content on the Web and in social media is often dirty. Preprocess your scraped data with clean-text to create a normalized text representation. For instance, turn this corrupted input:

A bunch of \\u2018new\\u2019 references, including [Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29).


»Yóù àré     rïght <3!«

into this clean output:

A bunch of 'new' references, including [moana](<URL>).

"you are right <3!"

clean-text uses ftfy, unidecode, and numerous hand-crafted rules implemented as regular expressions.

Installation

To install clean-text together with the GPL-licensed package unidecode:

pip install clean-text[gpl]

If you prefer to avoid the GPL dependency:

pip install clean-text

NB: This package is named clean-text and not cleantext.

If unidecode is not available, clean-text falls back to Python's unicodedata.normalize for transliteration. Transliteration to the closest ASCII symbols relies on manual mappings, e.g., ê to e. unidecode's mappings are superior, but unicodedata's are sufficient. However, you may want to disable this feature altogether, depending on your data and use case.

To be clear: results differ depending on whether text is processed with or without unidecode.
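The standard-library fallback can be approximated as follows (a minimal sketch, not clean-text's actual implementation):

```python
import unicodedata

def ascii_fallback(text: str) -> str:
    # Decompose accented characters (NFKD turns ê into e plus a
    # combining accent), then drop everything outside ASCII.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

ascii_fallback("Yóù àré rïght")  # -> 'You are right'
```

Characters with no decomposition (e.g., ß or CJK text) are silently dropped here, which is one source of the inconsistencies mentioned above; unidecode maps them to ASCII approximations instead.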

Usage

from cleantext import clean

clean("some input",
    fix_unicode=True,               # fix various unicode errors
    to_ascii=True,                  # transliterate to closest ASCII representation
    lower=True,                     # lowercase text
    no_line_breaks=False,           # fully strip line breaks as opposed to only normalizing them
    no_code=False,                  # replace all code snippets with a special token
    no_urls=False,                  # replace all URLs with a special token
    no_emails=False,                # replace all email addresses with a special token
    no_phone_numbers=False,         # replace all phone numbers with a special token
    no_ip_addresses=False,          # replace all IP addresses with a special token
    no_file_paths=False,            # replace all file paths with a special token
    no_numbers=False,               # replace all numbers with a special token
    no_digits=False,                # replace all digits with a special token
    no_currency_symbols=False,      # replace all currency symbols with a special token
    no_punct=False,                 # remove punctuation
    replace_with_punct="",          # instead of removing punctuation you may replace it
    exceptions=None,                # list of regex patterns to preserve verbatim
    replace_with_code="<CODE>",
    replace_with_url="<URL>",
    replace_with_email="<EMAIL>",
    replace_with_phone_number="<PHONE>",
    replace_with_ip_address="<IP>",
    replace_with_file_path="<FILE_PATH>",
    replace_with_number="<NUMBER>",
    replace_with_digit="0",
    replace_with_currency_symbol="<CUR>",
    lang="en"                       # set to 'de' for German special handling
)

Carefully choose the arguments that fit your task. The default parameters are listed above.

Preserving patterns with exceptions

Use exceptions to protect specific text patterns from being modified during cleaning. Each entry is a regex pattern string; all matches are preserved verbatim (not lowered, not transliterated — exactly as they appeared in the input).

from cleantext import clean

# Preserve a literal compound word while removing other punctuation
clean("drive-thru and text---cleaning", no_punct=True, exceptions=["drive-thru"])
# => 'drive-thru and textcleaning'

# Preserve all hyphenated compound words using a regex
clean("drive-thru and pick-up", no_punct=True, exceptions=[r"\w+-\w+"])
# => 'drive-thru and pick-up'

# Multiple exception patterns
clean("drive-thru costs $5", no_punct=True, no_currency_symbols=True,
      exceptions=[r"\w+-\w+", r"\$\d+"])
# => 'drive-thru costs $5'

You may also use only specific cleaning functions. For this, take a look at the source code.
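For illustration, a single step such as URL replacement boils down to a regex substitution. A hypothetical standalone sketch (the pattern below is deliberately simple, not clean-text's actual URL regex):

```python
import re

# Simplistic URL pattern for demonstration purposes only.
URL_RE = re.compile(r"https?://\S+")

def replace_urls(text: str, token: str = "<URL>") -> str:
    # Swap every URL-looking substring for a placeholder token.
    return URL_RE.sub(token, text)

replace_urls("read https://en.wikipedia.org/wiki/Moana now")  # -> 'read <URL> now'
```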

Cleaning multiple texts in parallel

Use clean_texts() to clean a list of strings. Set n_jobs to enable parallel processing via Python's built-in multiprocessing:

from cleantext import clean_texts

# Sequential (default) — no multiprocessing overhead
clean_texts(["text one", "text two", "text three"])

# Use all available CPU cores
clean_texts(["text one", "text two", "text three"], n_jobs=-1)

# Use a specific number of workers
clean_texts(["text one", "text two", "text three"], n_jobs=4)

# All clean() keyword arguments are supported
clean_texts(texts, n_jobs=-1, no_urls=True, lang="de", lower=False)

n_jobs semantics:

  • 1 or None — sequential processing (default, zero overhead)
  • -1 — use all available CPU cores
  • -2 — use all cores except one, etc.
  • Any positive integer — use exactly that many workers
  • 0 — raises ValueError
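These semantics map naturally onto Python's multiprocessing.Pool. A rough sketch of the dispatch logic (the worker-count arithmetic follows joblib's convention and is an assumption, not clean-text's verbatim code):

```python
from multiprocessing import Pool, cpu_count

def _normalize(text: str) -> str:
    # Stand-in worker; the real implementation would call clean().
    return " ".join(text.lower().split())

def clean_texts_sketch(texts, n_jobs=None):
    if n_jobs == 0:
        raise ValueError("n_jobs must not be 0")
    if n_jobs is None or n_jobs == 1:
        # Sequential path: no process startup or pickling overhead.
        return [_normalize(t) for t in texts]
    # Negative values count down from the total core count:
    # -1 -> all cores, -2 -> all cores but one, ...
    workers = cpu_count() + 1 + n_jobs if n_jobs < 0 else n_jobs
    with Pool(workers) as pool:
        return pool.map(_normalize, texts)
```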

Supported languages

So far, only English and German are fully supported, but clean-text should work for the majority of Western languages. If you need special handling for your language, feel free to contribute. 🙃

Using clean-text with scikit-learn

There is also a scikit-learn-compatible API to use in your pipelines. All of the parameters above work here as well.

pip install clean-text[gpl,sklearn]

pip install clean-text[sklearn]

from cleantext.sklearn import CleanTransformer

cleaner = CleanTransformer(no_punct=False, lower=False)

cleaner.transform(['Happily clean your text!', 'Another Input'])
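A scikit-learn-compatible transformer for stateless text cleaning only needs fit and transform. A minimal, hypothetical sketch of the idea, using plain lowercasing in place of clean():

```python
class CleanTransformerSketch:
    """Stateless transformer in the scikit-learn fit/transform mold."""

    def __init__(self, lower=True):
        self.lower = lower  # the real class stores all clean() kwargs here

    def fit(self, X, y=None):
        return self  # nothing to learn from the data

    def transform(self, X):
        return [x.lower() if self.lower else x for x in X]

CleanTransformerSketch().fit(None).transform(["Happily CLEAN your text!"])
# -> ['happily clean your text!']
```

Because fit is a no-op, such a transformer can sit anywhere in a Pipeline ahead of a vectorizer.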

Development

Use poetry.

See RELEASING.md for how to publish a new version.

Contributing

If you have a question, found a bug or want to propose a new feature, have a look at the issues page.

Pull requests are especially welcome when they fix bugs or improve the code quality.

If you don't like the output of clean-text, consider adding a test with your specific input and desired output.

Related Work

Generic text cleaning packages

Full-blown NLP libraries with some text cleaning

Remove or replace strings

Detect dates

Clean massive Common Crawl data

Acknowledgements

Built upon the work by Burton DeWilde for Textacy.

License

Apache
