
Unsupervised Word Segmentation for Neural Machine Translation and Text Generation


Subword Neural Machine Translation

This repository contains preprocessing scripts to segment text into subword units. The primary purpose is to facilitate the reproduction of our experiments on Neural Machine Translation with subword units (see below for reference).


Installation

install via pip (from PyPI):

pip install subword-nmt

install via pip (from Github):

pip install https://github.com/rsennrich/subword-nmt/archive/master.zip

alternatively, clone this repository; the scripts are executable stand-alone.


Usage Instructions

Check the individual files for usage instructions.

To apply byte pair encoding to word segmentation, invoke these commands:

subword-nmt learn-bpe -s {num_operations} < {train_file} > {codes_file}
subword-nmt apply-bpe -c {codes_file} < {test_file} > {out_file}
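
For example, a minimal end-to-end run (file names, the number of merge operations, and the sample output are illustrative; the exact segmentation depends on the learned merges) could look like this:

# learn 1000 merge operations from a tokenized training corpus
subword-nmt learn-bpe -s 1000 < corpus.train.tok > bpe.codes

# segment new text with the learned codes; "@@ " marks word-internal boundaries
echo "the newest subwords" | subword-nmt apply-bpe -c bpe.codes
# possible output: the new@@ est sub@@ word@@ s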

To segment rare words into character n-grams, do the following:

subword-nmt get-vocab --train_file {train_file} --vocab_file {vocab_file}
subword-nmt segment-char-ngrams --vocab {vocab_file} -n {order} --shortlist {size} < {test_file} > {out_file}
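
As a concrete instantiation (file names and parameter values are illustrative, and we assume --shortlist gives the number of most frequent words that are left unsegmented, as in the experiments of the paper referenced below):

# build a word frequency vocabulary from the training data
subword-nmt get-vocab --train_file corpus.train.tok --vocab_file vocab.txt

# keep the 10000 most frequent words intact; split all other words into character trigrams
subword-nmt segment-char-ngrams --vocab vocab.txt -n 3 --shortlist 10000 < corpus.test.tok > corpus.test.char3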

The original segmentation can be restored with a simple replacement:

sed -r 's/(@@ )|(@@ ?$)//g'
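
Applied to a segmented example line, this restores the original text:

echo "the new@@ est sub@@ word@@ s" | sed -r 's/(@@ )|(@@ ?$)//g'
# prints: the newest subwords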

If you cloned the repository and did not install a package, you can also run the individual commands as scripts:

./subword_nmt/learn_bpe.py -s {num_operations} < {train_file} > {codes_file}


Best Practice Advice for Byte Pair Encoding in NMT

We found that for languages that share an alphabet, learning BPE on the concatenation of the (two or more) involved languages increases the consistency of segmentation, and reduces the problem of inserting/deleting characters when copying/transliterating names.

However, this introduces undesirable edge cases in that a word may be segmented in a way that has only been observed in the other language, and is thus unknown at test time. To prevent this, apply-bpe accepts a --vocabulary and a --vocabulary-threshold option so that the script will only produce symbols which also appear in the vocabulary (with at least some frequency).

To use this functionality, we recommend the following recipe (assuming L1 and L2 are the two languages):

Learn byte pair encoding on the concatenation of the training text, and get resulting vocabulary for each:

cat {train_file}.L1 {train_file}.L2 | subword-nmt learn-bpe -s {num_operations} -o {codes_file}
subword-nmt apply-bpe -c {codes_file} < {train_file}.L1 | subword-nmt get-vocab > {vocab_file}.L1
subword-nmt apply-bpe -c {codes_file} < {train_file}.L2 | subword-nmt get-vocab > {vocab_file}.L2

more conveniently, you can do the same with this command:

subword-nmt learn-joint-bpe-and-vocab --input {train_file}.L1 {train_file}.L2 -s {num_operations} -o {codes_file} --write-vocabulary {vocab_file}.L1 {vocab_file}.L2

re-apply byte pair encoding with vocabulary filter:

subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {train_file}.L1 > {train_file}.BPE.L1
subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L2 --vocabulary-threshold 50 < {train_file}.L2 > {train_file}.BPE.L2

as a last step, extract the vocabulary to be used by the neural network. Example with Nematus:

nematus/data/build_dictionary.py {train_file}.BPE.L1 {train_file}.BPE.L2

[you may want to take the union of all vocabularies to support multilingual systems]
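
One way to build such a union, sketched here for the plain word-frequency files written by get-vocab (one "word count" pair per line; the joint file name is illustrative), is to sum the counts across languages:

# merge per-language vocabularies, summing the frequency of each subword
cat {vocab_file}.L1 {vocab_file}.L2 \
  | awk '{count[$1] += $2} END {for (w in count) print w, count[w]}' \
  | sort -k2,2nr > {vocab_file}.joint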

for test/dev data, re-use the same options for consistency:

subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {test_file}.L1 > {test_file}.BPE.L1


Publications

The segmentation methods are described in:

Rico Sennrich, Barry Haddow and Alexandra Birch (2016): Neural Machine Translation of Rare Words with Subword Units. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany.


This project has received funding from Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland, and from the European Union’s Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).



Changelog

0.3:

  • library is now installable via pip
  • fix occasional problems with UTF-8 whitespace and new lines in learn_bpe and apply_bpe.
    • do not silently convert UTF-8 newline characters into "\n"
    • do not silently convert UTF-8 whitespace characters into " "
    • UTF-8 whitespace and newline characters are now considered part of a word, and segmented by BPE


0.2:

  • different, more consistent handling of end-of-word token (commit a749a7)
  • allow passing of vocabulary and frequency threshold to apply_bpe.py, preventing the production of OOV (or rare) subword units (commit a00db)
  • made learn_bpe.py deterministic (commit 4c54e)
  • various changes to make handling of UTF-8 more consistent between Python versions
  • new command line arguments for apply_bpe.py:
    • '--glossaries' to prevent given strings from being affected by BPE
    • '--merges' to apply a subset of learned BPE operations
  • new command line arguments for learn_bpe.py:
    • '--dict-input': rather than raw text file, interpret input as a frequency dictionary (as created by get_vocab.py)


0.1:

  • consistent cross-version unicode handling
  • all scripts are now deterministic
