
Word and sentence tokenization.

Project description


Use this package to split strings along sentence and word boundaries. For instance, to break a string up into word tokens:

```python
tokenize("Joey was a great sailor.")
#=> ["Joey ", "was ", "a ", "great ", "sailor ", "."]
```
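A runnable version of the call above, as a sketch: it assumes `tokenize` is importable from the top-level `xml_cleaner` module (the exact import path may differ in your installed version).

```python
# Sketch: word tokenization with xml_cleaner.
# Assumption: `tokenize` is exported from the package's top level.
from xml_cleaner import tokenize

tokens = tokenize("Joey was a great sailor.")
print(tokens)
# Expected, per the example above:
# ["Joey ", "was ", "a ", "great ", "sailor ", "."]
```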

To also detect sentence boundaries:

```python
sent_tokenize("Cat sat mat. Cat's named Cool.", keep_whitespace=True)
#=> [["Cat ", "sat ", "mat", ". "], ["Cat ", "'s ", "named ", "Cool", "."]]
```

`sent_tokenize` can keep the whitespace as-is with the flags `keep_whitespace=True` and `normalize_ascii=False`.
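A self-contained sketch combining both flags mentioned above; as before, the top-level import and the exact keyword signature are assumptions based on the examples, not a confirmed API reference.

```python
# Sketch: sentence + word tokenization, leaving whitespace and ASCII untouched.
# Assumptions: `sent_tokenize` is importable from the top level and accepts the
# keep_whitespace / normalize_ascii keyword arguments described in the text.
from xml_cleaner import sent_tokenize

sentences = sent_tokenize(
    "Cat sat mat. Cat's named Cool.",
    keep_whitespace=True,
    normalize_ascii=False,
)
for sentence in sentences:
    print(sentence)
# Expected, per the example above:
# ["Cat ", "sat ", "mat", ". "]
# ["Cat ", "'s ", "named ", "Cool", "."]
```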


Install from PyPI:

```
pip3 install xml_cleaner
```


To run the test suite, run `nose2`.



