All in one text processor and cleaner.

Project description

All-in-one Text Cleaner

This package was created to speed up the process of cleaning text for natural language processing and machine learning. The package does the following:

  • Converts all text to lowercase
  • Expands contractions with pycontractions, using the glove-twitter-100 embedding model (optional)
  • Removes text in brackets; matches "()", "[]", and "{}" (optional)
  • Handles hyphenated words: either splits "georgetown-louisville" into "georgetown louisville" or combines it into "georgetownlouisville". Matches all types of hyphens.
  • Splits text into sentences on punctuation, using an algorithm described in a Stack Overflow post.
  • Tokenizes sentences.
  • Lemmatizes tokens using the NLTK WordNetLemmatizer and a lookup table from Penn Treebank tags to WordNet parts of speech.
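The tag lookup described above could be sketched roughly as follows. This is a minimal illustration of mapping Penn Treebank tags to WordNet parts of speech; the function name and exact mapping are assumptions, not the package's actual code:

```python
def penn_to_wordnet(tag):
    """Map a Penn Treebank POS tag to a WordNet POS character.

    WordNet only distinguishes adjectives ('a'), verbs ('v'),
    nouns ('n'), and adverbs ('r'); anything else (pronouns,
    determiners, ...) returns None and is left unlemmatized.
    NOTE: illustrative sketch, not the package's implementation.
    """
    if tag.startswith("J"):
        return "a"  # JJ, JJR, JJS -> adjective
    if tag.startswith("V"):
        return "v"  # VB, VBD, VBG, ... -> verb
    if tag.startswith("N"):
        return "n"  # NN, NNS, NNP, ... -> noun
    if tag.startswith("R"):
        return "r"  # RB, RBR, RBS -> adverb
    return None     # e.g. PRP ("us"): skip to preserve the word

# With NLTK this would be used roughly as:
#   from nltk.stem import WordNetLemmatizer
#   pos = penn_to_wordnet(tag)
#   lemma = WordNetLemmatizer().lemmatize(token, pos) if pos else token
```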

Installation

$ pip3 install aiotext

Usage

from aiotext import Cleaner

text = "Call me Ishmael. Some years ago—never mind how long precisely—having "
text += "little or no money in my purse, and nothing particular to interest me "
text += "on shore, I thought I would sail about a little and see the watery part "
text += "of the world. It is a way I have of driving off the spleen and "
text += "regulating the circulation."

# Initialize cleaner
cleaner_options = {
    # If true, contractions will be expanded (it's -> it is)
    # This takes a long time. Especially the first time you run it
    "expand_contractions": False,

    # if true removes text in brackets
    # if false the brackets will be removed, but text inside will remain
    "strip_text_in_brackets": False,

    # if true, joins hyphenated words (george-louis -> georgelouis)
    # if false, splits them on the hyphen (george-louis -> george louis)
    "combine_concatenations": False,
}
cleaner = Cleaner(cleaner_options)

assert cleaner.clean(text) == [
    ['call', 'me', 'ishmael'],
    ['some', 'year', 'ago', 'never', 'mind', 'how', 'long', 'precisely', 'have',
     'little', 'or', 'no', 'money', 'in', 'my', 'purse', 'and', 'nothing',
     'particular', 'to', 'interest', 'me', 'on', 'shore', 'i', 'think', 'i',
     'would', 'sail', 'about', 'a', 'little', 'and', 'see', 'the', 'watery',
     'part', 'of', 'the', 'world'],
    ['it', 'be', 'a', 'way', 'i', 'have', 'of', 'drive', 'off', 'the', 'spleen',
     'and', 'regulate', 'the', 'circulation'],
]
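The strip_text_in_brackets and combine_concatenations options could be implemented roughly as follows. This is a minimal sketch with illustrative regexes and function names, not the package's actual code:

```python
import re

# Text inside a matched pair of (), [], or {} brackets (illustrative pattern)
BRACKETED = re.compile(r"[(\[{].*?[)\]}]")
# Common hyphen variants: hyphen-minus, hyphen, non-breaking hyphen,
# figure dash, en dash, em dash (illustrative, not exhaustive)
HYPHENS = re.compile("[\u002d\u2010\u2011\u2012\u2013\u2014]")

def strip_brackets(text, strip_inner):
    """Remove bracket pairs; optionally remove the enclosed text too."""
    if strip_inner:
        return BRACKETED.sub("", text)   # drop brackets and their contents
    return re.sub(r"[()\[\]{}]", "", text)  # drop the brackets only

def handle_hyphens(text, combine):
    """Join hyphenated words into one token, or split them on the hyphen."""
    return HYPHENS.sub("" if combine else " ", text)
```

For example, handle_hyphens("georgetown-louisville", combine=False) yields "georgetown louisville", while combine=True yields "georgetownlouisville".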

Notes

  • Please note that the first time you run the program it may hang after downloading the contractions dataset; if it gets stuck, quit and run it again.
  • WordNet is used to lemmatize based on the parts of speech given by Penn Treebank tags. Since WordNet covers only a few parts of speech (no pronouns, for example), some words are skipped to preserve the root word: WordNet would lemmatize "us" to "u", so "us" is never passed into the lemmatizer.
  • You may need to run the following if WordNet is not found:
$ python3
>>> import nltk
>>> nltk.download('wordnet')

Change log

  • 1.0.0: Initial release
  • 1.0.1: Corrected handling of sentences without punctuation and brackets

Project details


Download files


Source Distribution

aiotext-1.0.1.tar.gz (7.7 kB)

Built Distribution

aiotext-1.0.1-py2.py3-none-any.whl (9.7 kB)
