Rule-based sentence tokenizer for Russian language

Project description

ru_sentence_tokenizer

A simple and fast rule-based sentence segmenter. Tested on the OpenCorpora and SynTagRus datasets.

Installation

python3 -m pip install --index-url https://test.pypi.org/simple/ ru_sent_tokenize

Running

>>> from ru_sent_tokenize import ru_sent_tokenize
>>> ru_sent_tokenize('Эта шоколадка за 400р. ничего из себя не представляла. Артём решил больше не ходить в этот магазин')
['Эта шоколадка за 400р. ничего из себя не представляла.', 'Артём решил больше не ходить в этот магазин']

Metrics

The tokenizer has been tested on OpenCorpora and SynTagRus. Two metrics are reported.

Precision. For the first metric, we took single sentences from the datasets and measured how often the tokenizer did not split them.

Recall. For the second metric, we took pairs of consecutive sentences from the datasets, joined each pair with a space character, and measured how often the tokenizer correctly split the joined text back into two sentences.
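
For concreteness, here is a minimal sketch of how these two metrics can be computed. The helper names (`sentences` as a list of gold sentences from one dataset) and the exact-match check in recall are illustrative assumptions, not the code from the evaluation notebook:

from ru_sent_tokenize import ru_sent_tokenize

def precision(sentences, tokenize=ru_sent_tokenize):
    # Share of single gold sentences that the tokenizer leaves unsplit.
    kept = sum(1 for s in sentences if len(tokenize(s)) == 1)
    return kept / len(sentences)

def recall(sentences, tokenize=ru_sent_tokenize):
    # Join each pair of consecutive gold sentences with a space and count
    # how often the tokenizer splits the text back into exactly that pair.
    pairs = list(zip(sentences, sentences[1:]))
    correct = sum(1 for a, b in pairs if tokenize(a + ' ' + b) == [a, b])
    return correct / len(pairs)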

tokenizer                                   OpenCorpora                      SynTagRus
                                            Precision  Recall  Time (sec)   Precision  Recall  Time (sec)
nltk.sent_tokenize                          94.30      86.06   8.67         98.15      94.95   5.07
nltk.sent_tokenize(x, language='russian')   95.53      88.37   8.54         98.44      95.45   5.68
bureaucratic-labs.segmentator.split         97.16      88.62   359          96.79      92.55   210
ru_sent_tokenize                            98.73      93.45   4.92         99.81      98.59   2.87

The accompanying notebook shows how the table above was calculated.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

rusenttokenize-0.0.4.tar.gz (5.8 kB)

Built Distribution

rusenttokenize-0.0.4-py3-none-any.whl (10.0 kB)
