
Word and sentence tokenization.

Project Description

Usage

Use this package to split strings along sentence and word boundaries. For instance, to break a string into word-level tokens:

`tokenize("Joey was a great sailor.") #=> ["Joey ", "was ", "a ", "great ", "sailor ", "."]`
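Each token keeps the whitespace that followed it in the source string, so downstream code can use the tokens verbatim or strip them down to bare words. A minimal sketch (assuming, as the example above implies, that `tokenize` is importable from the top-level `xml_cleaner` package):

```python
from xml_cleaner import tokenize

tokens = tokenize("Joey was a great sailor.")

# Tokens carry their trailing whitespace; strip it when you
# only need the bare words and punctuation.
words = [token.strip() for token in tokens]
print(words)  # ['Joey', 'was', 'a', 'great', 'sailor', '.']
```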

To also detect sentence boundaries:

`sent_tokenize("Cat sat mat. Cat's named Cool.", keep_whitespace=True) #=> [["Cat ", "sat ", "mat", ". "], ["Cat ", "'s ", "named ", "Cool", "."]]`

`sent_tokenize` can keep the whitespace as-is via the `keep_whitespace=True` and `normalize_ascii=False` flags.
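For instance, combining both flags keeps each sentence's tokens exactly as they appeared in the input. A short sketch under the same top-level-import assumption as above (the flag names are taken from the note just given):

```python
from xml_cleaner import sent_tokenize

text = "Cat sat mat. Cat's named Cool."

# keep_whitespace=True preserves each token's trailing whitespace;
# normalize_ascii=False leaves the characters untouched.
sentences = sent_tokenize(text, keep_whitespace=True, normalize_ascii=False)

for sentence in sentences:
    # Each sentence is itself a list of tokens.
    print(sentence)
# ['Cat ', 'sat ', 'mat', '. ']
# ['Cat ', "'s ", 'named ', 'Cool', '.']
```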

Installation

`pip3 install xml_cleaner`

Testing

Run `nose2`.

Release history

2.0.4 (this version), 2.0.3, 2.0.2, 2.0.1, 2.0.0, 1.0.21, 1.0.20, 1.0.19, 1.0.18, 1.0.17, 1.0.16, 1.0.15, 1.0.14, 1.0.13, 1.0.12, 1.0.11, 1.0.10, 1.0.9, 1.0.8, 1.0.7, 1.0.6, 1.0.5, 1.0.4, 1.0.3, 1.0.2, 1.0.1, 1.0.0

Download files


xml-cleaner-2.0.4.tar.gz (source distribution, 10.8 kB, uploaded Dec 29, 2016)
