
Word and sentence tokenization.

Project Description


Word and sentence tokenization in Python.




Usage

Use this package to split strings according to sentence and word boundaries.
For instance, to break a string up into word tokens:

tokenize("Joey was a great sailor.")
#=> ["Joey ", "was ", "a ", "great ", "sailor ", "."]

To also detect sentence boundaries:

sent_tokenize("Cat sat mat. Cat's named Cool.", keep_whitespace=True)
#=> [["Cat ", "sat ", "mat", ". "], ["Cat ", "'s ", "named ", "Cool", "."]]

`sent_tokenize` returns one list of tokens per sentence. Pass `keep_whitespace=True` to keep trailing whitespace attached to the tokens, and `normalize_ascii=False` to leave non-ASCII punctuation (curly quotes, long dashes) as-is rather than converting it to ASCII equivalents.
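For comparison, a minimal sketch of the default behavior (the default flag values and the exact tokens shown here are assumptions, not verified output):

from ciseau import sent_tokenize

# Without keep_whitespace=True, trailing whitespace is stripped from each token.
sent_tokenize("Cat sat mat. Cat's named Cool.")
#=> [["Cat", "sat", "mat", "."], ["Cat", "'s", "named", "Cool", "."]]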


Installation

pip3 install ciseau
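After installing, a quick smoke test (assuming the top-level `ciseau` module re-exports `tokenize`, as in the usage examples above):

import ciseau

# A short end-to-end check: tokenize a sentence and print the tokens.
print(ciseau.tokenize("Hello world."))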


Testing

To run the test suite, run `nose2`.

If you find this project useful for your work or research, here's how you can cite it:

@misc{Raiman2017Ciseau,
  author = {Raiman, Jonathan},
  title = {Ciseau},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{}},
  commit = {fe88b9d7f131b88bcdd2ff361df60b6d1cc64c04}
}

Download Files

Source distribution, 10.3 kB, uploaded Jan 11, 2018.
