Unsupervised Word Segmentation for Neural Machine Translation and Text Generation
Subword Neural Machine Translation
This repository contains preprocessing scripts to segment text into subword units. The primary purpose is to facilitate the reproduction of our experiments on Neural Machine Translation with subword units (see below for reference).
INSTALLATION
install via pip (from PyPI):
pip install subword-nmt
install via pip (from Github):
pip install https://github.com/rsennrich/subword-nmt/archive/master.zip
alternatively, clone this repository; the scripts are executable stand-alone.
USAGE INSTRUCTIONS
Check the individual files for usage instructions.
To apply byte pair encoding to word segmentation, invoke these commands:
subword-nmt learn-bpe -s {num_operations} < {train_file} > {codes_file}
subword-nmt apply-bpe -c {codes_file} < {test_file} > {out_file}
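For example, a minimal end-to-end run could look as follows; the file names corpus.txt, codes.bpe and test.txt as well as the 10000 merge operations are illustrative choices, not recommendations:

# learn 10000 merge operations from a tokenized training corpus
subword-nmt learn-bpe -s 10000 < corpus.txt > codes.bpe
# segment training and test data with the learned codes
subword-nmt apply-bpe -c codes.bpe < corpus.txt > corpus.bpe
subword-nmt apply-bpe -c codes.bpe < test.txt > test.bpe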
To segment rare words into character n-grams, do the following:
subword-nmt get-vocab --train_file {train_file} --vocab_file {vocab_file}
subword-nmt segment-char-ngrams --vocab {vocab_file} -n {order} --shortlist {size} < {test_file} > {out_file}
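As an illustration (the file names, the n-gram order 3 and the shortlist size 50000 are arbitrary example values):

# count word frequencies in the training data
subword-nmt get-vocab --train_file corpus.txt --vocab_file vocab.txt
# keep the 50000 most frequent words intact and split the remaining rare words into character 3-grams
subword-nmt segment-char-ngrams --vocab vocab.txt -n 3 --shortlist 50000 < test.txt > test.seg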
The original segmentation can be restored with a simple replacement:
sed -r 's/(@@ )|(@@ ?$)//g'
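For example, applied to a hypothetical BPE-segmented line:

echo "the fu@@ nd was un@@ der@@ funded" | sed -r 's/(@@ )|(@@ ?$)//g'
# prints: the fund was underfunded

Note that -r enables extended regular expressions in GNU sed; on BSD/macOS sed, use -E instead.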
If you cloned the repository and did not install a package, you can also run the individual commands as scripts:
./subword_nmt/learn_bpe.py -s {num_operations} < {train_file} > {codes_file}
BEST PRACTICE ADVICE FOR BYTE PAIR ENCODING IN NMT
We found that for languages that share an alphabet, learning BPE on the concatenation of the (two or more) involved languages increases the consistency of segmentation, and reduces the problem of inserting/deleting characters when copying/transliterating names.
However, this introduces undesirable edge cases in that a word may be segmented in a way that has only been observed in the other language, and is thus unknown at test time. To prevent this, apply_bpe.py accepts a --vocabulary and a --vocabulary-threshold option so that the script will only produce symbols which also appear in the vocabulary (with at least some frequency).
To use this functionality, we recommend the following recipe (assuming L1 and L2 are the two languages):
Learn byte pair encoding on the concatenation of the training text, and get resulting vocabulary for each:
cat {train_file}.L1 {train_file}.L2 | subword-nmt learn-bpe -s {num_operations} -o {codes_file}
subword-nmt apply-bpe -c {codes_file} < {train_file}.L1 | subword-nmt get-vocab > {vocab_file}.L1
subword-nmt apply-bpe -c {codes_file} < {train_file}.L2 | subword-nmt get-vocab > {vocab_file}.L2
more conveniently, you can do the same with this command:
subword-nmt learn-joint-bpe-and-vocab --input {train_file}.L1 {train_file}.L2 -s {num_operations} -o {codes_file} --write-vocabulary {vocab_file}.L1 {vocab_file}.L2
re-apply byte pair encoding with vocabulary filter:
subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {train_file}.L1 > {train_file}.BPE.L1
subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L2 --vocabulary-threshold 50 < {train_file}.L2 > {train_file}.BPE.L2
as a last step, extract the vocabulary to be used by the neural network. Example with Nematus:
nematus/data/build_dictionary.py {train_file}.BPE.L1 {train_file}.BPE.L2
[you may want to take the union of all vocabularies to support multilingual systems]
for test/dev data, re-use the same options for consistency:
subword-nmt apply-bpe -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {test_file}.L1 > {test_file}.BPE.L1
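Putting the recipe together for a hypothetical German-English pair (the file names, the 32000 merge operations and the threshold of 50 are illustrative values):

# learn joint BPE codes and write one vocabulary per language
subword-nmt learn-joint-bpe-and-vocab --input corpus.de corpus.en -s 32000 -o codes.bpe --write-vocabulary vocab.de vocab.en
# segment the training data, restricted to each language's own vocabulary
subword-nmt apply-bpe -c codes.bpe --vocabulary vocab.de --vocabulary-threshold 50 < corpus.de > corpus.BPE.de
subword-nmt apply-bpe -c codes.bpe --vocabulary vocab.en --vocabulary-threshold 50 < corpus.en > corpus.BPE.en
# apply the same codes and vocabularies to dev/test data
subword-nmt apply-bpe -c codes.bpe --vocabulary vocab.de --vocabulary-threshold 50 < test.de > test.BPE.de
subword-nmt apply-bpe -c codes.bpe --vocabulary vocab.en --vocabulary-threshold 50 < test.en > test.BPE.en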
ADVANCED FEATURES
On top of the basic BPE implementation, this repository supports:
- BPE dropout (Provilkov, Emelianenko and Voita, 2019): https://arxiv.org/abs/1910.13267. Use the argument --dropout 0.1 for subword-nmt apply-bpe to randomly drop out possible merges. Doing this on the training corpus can improve the quality of the final system; at test time, use BPE without dropout.
- support for glossaries: use the argument --glossaries for subword-nmt apply-bpe to provide a list of words and/or regular expressions that should always be passed to the output without subword segmentation
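Both options are passed to the same apply-bpe command as above; the values and file names here are illustrative:

# BPE dropout: stochastically skip possible merges while segmenting the training data
subword-nmt apply-bpe -c codes.bpe --dropout 0.1 < corpus.txt > corpus.dropout.bpe
# glossaries: the listed words and regular expressions are passed through unsegmented
subword-nmt apply-bpe -c codes.bpe --glossaries "Zurich" "[0-9]+" < test.txt > test.bpe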
PUBLICATIONS
The segmentation methods are described in:
Rico Sennrich, Barry Haddow and Alexandra Birch (2016): Neural Machine Translation of Rare Words with Subword Units. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany.
HOW IMPLEMENTATION DIFFERS FROM Sennrich et al. (2016)
This repository implements the subword segmentation as described in Sennrich et al. (2016), but since version 0.2, there is one core difference related to end-of-word tokens.
In Sennrich et al. (2016), the end-of-word token </w> is initially represented as a separate token, which can be merged with other subwords over time:
u n d </w>
f u n d </w>
Since 0.2, end-of-word tokens are initially concatenated with the word-final character:
u n d</w>
f u n d</w>
The new representation ensures that when BPE codes are learned from the above examples and then applied to new text, it is clear that a subword unit und is unambiguously word-final, and un is unambiguously word-internal, preventing the production of up to two different subword units from each BPE merge operation.
apply_bpe.py is backward-compatible and continues to accept old-style BPE files. New-style BPE files are identified by having the following first line: #version: 0.2
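For illustration, the first lines of a made-up new-style codes file learned from the toy examples above might look like this; each line after the header is one merge operation, and the </w> attached to d marks the merged unit as word-final:

#version: 0.2
u n
un d</w>
f un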
ACKNOWLEDGMENTS
This project has received funding from Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland, and from the European Union’s Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).
CHANGELOG
v0.3.7:
- BPE dropout (Provilkov et al., 2019)
- more efficient glossaries (https://github.com/rsennrich/subword-nmt/pull/69)
v0.3.6:
- fix to subword-bpe command encoding
v0.3.5:
- fix to subword-bpe command under Python 2
- wider support of --total-symbols argument
v0.3.4:
- segment_tokens method to improve library usability (https://github.com/rsennrich/subword-nmt/pull/52)
- support regex glossaries (https://github.com/rsennrich/subword-nmt/pull/56)
- allow unicode separators (https://github.com/rsennrich/subword-nmt/pull/57)
- new option --total-symbols in learn-bpe (commit 61ad8)
- fix documentation (best practices) (https://github.com/rsennrich/subword-nmt/pull/60)
v0.3:
- library is now installable via pip
- fix occasional problems with UTF-8 whitespace and new lines in learn_bpe and apply_bpe.
- do not silently convert UTF-8 newline characters into "\n"
- do not silently convert UTF-8 whitespace characters into " "
- UTF-8 whitespace and newline characters are now considered part of a word, and segmented by BPE
v0.2:
- different, more consistent handling of end-of-word token (commit a749a7) (https://github.com/rsennrich/subword-nmt/issues/19)
- allow passing of vocabulary and frequency threshold to apply_bpe.py, preventing the production of OOV (or rare) subword units (commit a00db)
- made learn_bpe.py deterministic (commit 4c54e)
- various changes to make handling of UTF more consistent between Python versions
- new command line arguments for apply_bpe.py:
- '--glossaries' to prevent given strings from being affected by BPE
- '--merges' to apply a subset of learned BPE operations
- new command line arguments for learn_bpe.py:
- '--dict-input': rather than raw text file, interpret input as a frequency dictionary (as created by get_vocab.py).
v0.1:
- consistent cross-version unicode handling
- all scripts are now deterministic