Punctuator

Adds punctuation to a block of text.
This is a fork of Ottokar Tilk's punctuator2 cleaned up into a formal Python3 package with testing.
A bidirectional recurrent neural network model with attention mechanism for restoring missing inter-word punctuation in unsegmented text.
The model can be trained in two stages (second stage is optional):
- First stage is trained on punctuation annotated text. Here the model learns to restore punctuation based on textual features only.
- Optional second stage can be trained on punctuation and pause annotated text. In this stage the model learns to combine pause durations with textual features and adapts to the target domain. If pauses are omitted, only adaptation is performed. The second stage with pause durations can be used, for example, to restore punctuation in the output of an automatic speech recognition system.
Installation
To install:
virtualenv -p python3.7 .env
. .env/bin/activate
pip install punctuator
Additionally, you'll need a trained model. You can train your own following the instructions below, or use a pre-trained model (see the download example below).
Place these models in the PUNCTUATOR_DATA_DIR directory, which defaults to ~/.punctuator.
For example, to download Demo-Europarl-EN.pcl, activate your virtual environment and run:
. .env/bin/activate
mkdir -p ~/.punctuator
cd ~/.punctuator
gdown https://drive.google.com/uc?id=0B7BsN5f2F1fZd1Q0aXlrUDhDbnM
To download other model files, find the Google Drive id via the share link, and substitute that in the command above.
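If you'd rather script the download, gdown also exposes a Python API. A minimal sketch, assuming the file name and destination from the defaults above:

```python
import os
import gdown  # pip install gdown

# Download the demo model into the default PUNCTUATOR_DATA_DIR (~/.punctuator).
# The Google Drive id is the one from the shell example above.
data_dir = os.path.expanduser("~/.punctuator")
os.makedirs(data_dir, exist_ok=True)
gdown.download(
    "https://drive.google.com/uc?id=0B7BsN5f2F1fZd1Q0aXlrUDhDbnM",
    os.path.join(data_dir, "Demo-Europarl-EN.pcl"),
    quiet=False,
)
```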
Usage
To use from the command line:
cat input.txt | python punctuator.py model.pcl output.txt
To use from Python:
from punctuator import Punctuator
p = Punctuator('model.pcl')
print(p.punctuate('some text'))
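To punctuate a whole file rather than a string, the same two-method API is enough. A minimal sketch; the file and model names are placeholders:

```python
from punctuator import Punctuator

# Load a trained model once, then punctuate the file contents.
p = Punctuator('Demo-Europarl-EN.pcl')

with open('input.txt') as f:
    text = f.read()

with open('output.txt', 'w') as f:
    f.write(p.punctuate(text))
```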
How well does it work?
- A working demo can be seen here: http://bark.phon.ioc.ee/punctuator
- You can try to compete with this model here: http://bark.phon.ioc.ee/punctuator/game
Remember that all the scores given below are on unsegmented text and we did not use prosodic features, so, among other things, the model has to detect sentence boundaries in addition to the boundary type (?QUESTIONMARK, .PERIOD or !EXCLAMATIONMARK) based entirely on textual features. The scores are computed on the test set.
Training speed with default settings, an optimal Theano installation and a modern GPU should be around 10000 words per second.
Pretrained models can be downloaded here (Demo + 2 models from the Interspeech paper).
English TED talks
Training set size: 2.1M words. First stage only. More details can be found in this paper. For comparison, our previous model got an overall F1-score of 50.8.
PUNCTUATION | PRECISION | RECALL | F-SCORE
--- | --- | --- | ---
,COMMA | 64.4 | 45.2 | 53.1
?QUESTIONMARK | 67.5 | 58.7 | 62.8
.PERIOD | 72.3 | 71.5 | 71.9
Overall | 68.9 | 58.1 | 63.1
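Each F-score above is the harmonic mean of the precision and recall in the same row, so the table can be sanity-checked by hand:

```python
# F1 is the harmonic mean of precision and recall.
def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f_score(64.4, 45.2), 1))  # ,COMMA        -> 53.1
print(round(f_score(67.5, 58.7), 1))  # ?QUESTIONMARK -> 62.8
print(round(f_score(72.3, 71.5), 1))  # .PERIOD       -> 71.9
```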
English Europarl v7
Training set size: 40M words. First stage only. Details in ./example.
You can try to compete with this model here: http://bark.phon.ioc.ee/punctuator/game
PUNCTUATION | PRECISION | RECALL | F-SCORE
--- | --- | --- | ---
?QUESTIONMARK | 77.7 | 73.2 | 75.4
!EXCLAMATIONMARK | 50.0 | 0.1 | 0.1
,COMMA | 68.9 | 72.0 | 70.4
-DASH | 55.9 | 8.8 | 15.2
:COLON | 60.9 | 23.8 | 34.2
;SEMICOLON | 44.7 | 1.1 | 2.2
.PERIOD | 84.7 | 84.1 | 84.4
Overall | 75.7 | 73.9 | 74.8
Requirements
- Python 2.7
- Numpy
- Theano
Requirements for data:
- Cleaned text files for training and validation of the first phase model. Each punctuation symbol token must be surrounded by spaces (a converter sketch follows this list).

  Example:

  to be ,COMMA or not to be ,COMMA that is the question .PERIOD

- (Optional) Pause annotated text files for training and validation of the second phase model. These should be cleaned in the same way as the first phase data. Pause durations in seconds should be marked after each word with a special tag <sil=0.200>. The punctuation mark, if any, must come after the pause tag.

  Example:

  to <sil=0.000> be <sil=0.100> ,COMMA or <sil=0.000> not <sil=0.000> to <sil=0.000> be <sil=0.150> ,COMMA that <sil=0.000> is <sil=0.000> the <sil=0.000> question <sil=1.000> .PERIOD

  Second phase data can also be without pause annotations, in which case only target domain adaptation is performed.
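As referenced in the list above, converting ordinary punctuated text into the first-phase format is a token substitution. A minimal illustrative sketch; the TOKEN_MAP and helper are not part of the package, and dashes are omitted because a naive regex would also mangle intra-word hyphens:

```python
import re

# Map raw punctuation marks to the training tokens described above.
TOKEN_MAP = {
    ',': ',COMMA', '.': '.PERIOD', '?': '?QUESTIONMARK',
    '!': '!EXCLAMATIONMARK', ':': ':COLON', ';': ';SEMICOLON',
}

def to_training_format(text):
    # Lowercase so sentence-initial capitals don't leak period locations.
    text = text.lower()
    # Replace each punctuation mark with its space-surrounded token.
    text = re.sub(r'([,.?!:;])', lambda m: f' {TOKEN_MAP[m.group(1)]} ', text)
    # Normalize whitespace to single spaces.
    return ' '.join(text.split())

print(to_training_format("To be, or not to be, that is the question."))
# -> to be ,COMMA or not to be ,COMMA that is the question .PERIOD
```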
Make sure that the first word of each sentence is not capitalized; capitalization would give the model unfair hints about period locations. Also, the text files you use for training and validation must be large enough (at least minibatch_size x sequence_length words, which is 128x50=6400 words with the default settings), otherwise you might get an error.
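A quick pre-flight check for that size requirement; a minimal sketch where "corpus.train.txt" is a placeholder name:

```python
# Default minimum: minibatch_size * sequence_length = 128 * 50 = 6400 words.
MIN_WORDS = 128 * 50

with open("corpus.train.txt") as f:
    n_words = sum(len(line.split()) for line in f)

if n_words < MIN_WORDS:
    raise ValueError(f"Training file too small: {n_words} < {MIN_WORDS} words")
```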
Configuration
Vocabulary size, punctuation tokens and their mappings, and the converted data location can be configured in the header of data.py. Some model hyperparameters can be configured in the headers of main.py and main2.py. Learning rate and hidden layer size can be passed as arguments.
Usage
The first step is data conversion. Assuming that preprocessed and cleaned *.train.txt, *.dev.txt and *.test.txt files are located in <data_dir>, the conversion can be initiated with:
python data.py <data_dir>
If you have second stage data as well, then:
python data.py <data_dir> <second_stage_data_dir>
The first stage can be trained with:
python main.py <model_name> <hidden_layer_size> <learning_rate>
e.g. python main.py <model_name> 256 0.02
works well.
Second stage can be trained with:
python main2.py <model_name> <hidden_layer_size> <learning_rate> <first_stage_model_path>
Preprocessed text can be punctuated with, e.g.:
cat data.dev.txt | python punctuator.py <model_path> <model_output_path>
or, if pause annotations are present in data.dev.txt and you have a second stage model trained on pause annotated data, then:
cat data.dev.txt | python punctuator.py <model_path> <model_output_path> 1
Punctuation tokens in data.dev.txt don't have to be removed; the punctuator.py script ignores them.
Error statistics in this example can be computed with:
python error_calculator.py data.dev.txt <model_output_path>
You can play with a trained model (this assumes the input text is preprocessed in the same way as the training data) with:
python play_with_model.py <model_path>
or with:
python play_with_model.py <model_path> 1
if you want to see which words the model treats as UNKs (OOVs).
Development
Run all tests with:
export TESTNAME=; tox
Run a specific test in a specific environment with:
export TESTNAME=.test_punctuate; tox -e py37