A comprehensive text cleaning and preprocessing pipeline.

Project description

SqueakyCleanText

In the world of machine learning and natural language processing, clean and well-structured text data is crucial for building effective downstream models and managing token limits in language models.

SqueakyCleanText helps you achieve this by addressing common text issues and doing most of the cleanup work for you.

Key Features

  • Encoding Issues: Corrects text encoding problems.
  • HTML and URLs: Removes unnecessary HTML tags and long URLs, or replaces them with special tokens.
  • Contact Information: Strips emails, phone numbers, and other contact details, or replaces them with special tokens.
  • Isolated Characters: Eliminates isolated letters or symbols that add no value.
  • NER Support: Uses a soft-voting ensemble technique to handle named entities such as location, person, and organisation names, which can be replaced with special tokens if not needed in the text.
  • Stopwords and Punctuation: For statistical models, optimizes text by removing stopwords, special symbols, and punctuation.
  • Currency Symbols: Replaces all currency symbols with their alphabetical equivalents.
  • Whitespace Normalization: Removes unnecessary whitespace.
  • Language Detection: Detects the language of the processed text if needed for downstream tasks.
  • Multilingual Support: Supports English, Dutch, German, and Spanish.
  • Dual Output: Provides text for both language model processing and statistical model processing.

Benefits for Statistical Models

When working with statistical models, further optimization is often required, such as removing stopwords, special symbols, and punctuation. SqueakyCleanText offers functionality to streamline this process, ensuring that your text data is in optimal shape for classification and other downstream tasks.
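
As a rough illustration (not the package's internal code), the extra cleanup applied for statistical models amounts to lowercasing, stripping punctuation, dropping stopwords, and normalizing whitespace; the stopword set below is a tiny hypothetical sample used only for this demo:

# Conceptual sketch only: lowercase, strip punctuation, drop stopwords,
# and normalize whitespace. Not SqueakyCleanText's actual implementation.
import re
import string

SAMPLE_STOPWORDS = {"a", "the", "is", "at", "of", "and", "to"}  # illustrative subset

def clean_for_statistical_model(text: str) -> str:
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    words = [w for w in text.split() if w not in SAMPLE_STOPWORDS]
    return re.sub(r"\s+", " ", " ".join(words)).strip()

print(clean_for_statistical_model("The meeting is at 10:00 AM!"))
# -> "meeting 1000 am"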

Advantages of the ensemble NER process

Relying on a single model for Named Entity Recognition is not ideal, as there is a high chance it will miss an entity altogether. Combining language-specific NER models also makes the ensemble more attuned to the text and further reduces the chance of missed entities. The package's NER pipeline includes a chunking mechanism, so entity recognition still works even when the text is longer than the model's token limit.
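
To make the soft-voting idea concrete, here is a minimal sketch (not the package's actual code); chunk_words, soft_vote, and the model outputs below are hypothetical and only illustrate averaging per-word label probabilities across models:

# Illustrative sketch: overlapping chunks for long texts, plus soft voting
# that averages per-word label probabilities across several NER models.
from collections import defaultdict

def chunk_words(words, max_len=256, stride=200):
    """Split a long word list into overlapping windows that fit a model's token limit."""
    return [words[i:i + max_len] for i in range(0, len(words), stride)]

def soft_vote(predictions):
    """predictions: one dict per model, mapping word index -> {label: probability}.
    Averages probabilities across models and keeps the highest-scoring label."""
    scores = defaultdict(lambda: defaultdict(float))
    for model_pred in predictions:
        for idx, label_probs in model_pred.items():
            for label, prob in label_probs.items():
                scores[idx][label] += prob / len(predictions)
    return {idx: max(probs, key=probs.get) for idx, probs in scores.items()}

# Two hypothetical model outputs for the same sentence:
model_a = {0: {"B-PER": 0.9, "O": 0.1}, 2: {"B-LOC": 0.6, "O": 0.4}}
model_b = {0: {"B-PER": 0.7, "O": 0.3}, 2: {"O": 0.8, "B-LOC": 0.2}}
print(soft_vote([model_a, model_b]))  # {0: 'B-PER', 2: 'O'}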

By automating these text cleaning steps, SqueakyCleanText ensures your data is prepared efficiently and effectively, saving time and improving model performance.

Installation

To install SqueakyCleanText, use the following pip command:

pip install SqueakyCleanText

Usage

A few examples of how to use the SqueakyCleanText package:

english_text = "Hey John Doe, wanna grab some coffee at Starbucks on 5th Avenue? I'm feeling a bit tired after last night's party at Jane's place. BTW, I can't make it to the meeting at 10:00 AM. LOL! Call me at +1-555-123-4567 or email me at john.doe@example.com. Check out this cool website: https://www.example.com."

dutch_text = "Hé Jan Jansen, wil je wat koffie halen bij Starbucks op de 5e Avenue? Ik voel me een beetje moe na het feest van gisteravond bij Annes huis. Btw, ik kan niet naar de vergadering om 10:00 uur. LOL! Bel me op +31-6-1234-5678 of mail me op jan.jansen@voorbeeld.com. Kijk eens naar deze coole website: https://www.voorbeeld.com."
  • Using the default config settings:
# The first import will take a bit of time, so please be patient
from sct import sct

# Initialize the TextCleaner
sx = sct.TextCleaner()

# Process the text
# lmtext   : Text for Language Models
# cmtext   : Text for Classical/Statistical ML
# language : Language of the processed text

#### --- English Text
lmtext, cmtext, language = sx.process(english_text)
print(f"Language Model Text : {lmtext}")
print(f"Statistical Model Text : {cmtext}")
print(f"Language of the Text : {language}")

# Output the result
# Language Model Text : Hey <PERSON> wanna grab some coffee at Starbucks on <LOCATION> I'm feeling a bit tired after last night's party at <PERSON>'s place. BTW, can't make it to the meeting at <NUMBER><NUMBER> AM. LOL! Call me at <PHONE> or email me at <EMAIL> Check out this cool website: <URL>
# Statistical Model Text : hey person wanna grab coffee starbucks location im feeling bit tired last nights party persons place btw cant make meeting numbernumber am lol call phone email email check cool website url
# Language of the Text : ENGLISH

#### --- Dutch Text
lmtext, cmtext, language = sx.process(dutch_text)
print(f"Language Model Text : {lmtext}")
print(f"Statistical Model Text : {cmtext}")
print(f"Language of the Text : {language}")

# Output the result
# Language Model Text : He <PERSON> wil je wat koffie halen bij <ORGANISATION> op de <LOCATION> Ik voel me een beetje moe na het feest van gisteravond bij Annes huis. Btw, ik kan niet naar de vergadering om <NUMBER><NUMBER> uur. LOL! Bel me op <NUMBER><NUMBER><PHONE> of mail me op <EMAIL> Kijk eens naar deze coole website: <URL>
# Statistical Model Text : he person koffie halen organisation location voel beetje moe feest gisteravond annes huis btw vergadering numbernumber uur lol bel numbernumberphone mail email kijk coole website url
# Language of the Text : DUTCH
  • Configuring any of the package's functionality; for example, turning off the NER step:
from sct import sct, config

config.CHECK_NER_PROCESS = False
sx = sct.TextCleaner()

lmtext, cmtext, language = sx.process(english_text)
print(f"Language Model Text : {lmtext}")
print(f"Statistical Model Text : {cmtext}")
print(f"Language of the Text : {language}")

# Output the result
# Language Model Text : Hey John Doe, wanna grab some coffee at Starbucks on 5th Avenue? I'm feeling a bit tired after last night's party at Jane's place. BTW, can't make it to the meeting at <NUMBER><NUMBER> AM. LOL! Call me at <PHONE> or email me at <EMAIL> Check out this cool website: <URL>
# Statistical Model Text : hey john doe wanna grab coffee starbucks 5th avenue im feeling bit tired last nights party janes place btw cant make meeting numbernumber am lol call phone email email check cool website url
# Language of the Text : ENGLISH

API

sct.TextCleaner

process(text: str) -> Tuple[str, str, str]

Processes the input text and returns a tuple containing:

  • Cleaned text prepared for language models (entities, contact details, and URLs replaced with special tokens where configured).
  • Cleaned text prepared for statistical/classical ML models, with stopwords, punctuation, and special symbols removed.
  • Detected language of the text.
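
For reference, a minimal call against the documented signature; the sample sentence and variable names here are arbitrary:

from sct import sct

cleaner = sct.TextCleaner()
lm_text, stat_text, language = cleaner.process("Call me at +1-555-123-4567.")
print(lm_text)    # text intended for language models
print(stat_text)  # text intended for statistical models
print(language)   # detected language, e.g. "ENGLISH"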

Contributing

Contributions are welcome! Please feel free to submit a Pull Request or open an issue.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

The package took inspiration from the following repo:
