# OpenNMT-tf

OpenNMT-tf is a general-purpose sequence learning toolkit using TensorFlow 2. While neural machine translation is the main target task, it has been designed to more generally support:

* sequence to sequence mapping
* sequence tagging
* sequence classification
* language modeling

The project is production-oriented and comes with backward compatibility guarantees.

## Key features

### Modular model architecture

Models are described with code to allow training custom architectures and overriding default behavior. For example, the following instance defines a sequence to sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings:

```python
opennmt.models.SequenceToSequence(
    source_inputter=opennmt.inputters.ParallelInputter(
        [
            opennmt.inputters.WordEmbedder(embedding_size=256),
            opennmt.inputters.WordEmbedder(embedding_size=256),
        ],
        reducer=opennmt.layers.ConcatReducer(axis=-1),
    ),
    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
    decoder=opennmt.decoders.AttentionalRNNDecoder(
        num_layers=4,
        num_units=512,
        attention_mechanism_class=tfa.seq2seq.LuongAttention,
    ),
    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET,
)
```


The opennmt package exposes other building blocks that can be used to design many other models.

Standard models such as the Transformer are defined in a model catalog and can be used without additional configuration.
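
For instance, a catalog model can be selected by name with the --model_type command line option, or instantiated from Python. The snippet below is a minimal sketch assuming the get_model_from_catalog helper available in recent OpenNMT-tf versions:

```python
import opennmt

# Instantiate a catalog model by its registered name
# (get_model_from_catalog is assumed from recent versions).
model = opennmt.models.get_model_from_catalog("TransformerBase")
```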

### Full TensorFlow 2 integration

OpenNMT-tf is fully integrated in the TensorFlow 2 ecosystem:

* reusable layers extending tf.keras.layers.Layer
* multi-GPU and distributed training with tf.distribute
* mixed precision training
* monitoring with TensorBoard
* tf.function graph tracing that can be exported to a SavedModel and served with TensorFlow Serving

### Compatibility with CTranslate2

CTranslate2 is an optimized inference engine for OpenNMT models featuring fast CPU and GPU execution, model quantization, parallel translations, dynamic memory usage, interactive decoding, and more! OpenNMT-tf can automatically export models to be used in CTranslate2.
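
As a sketch of this workflow (the export option names below are assumed from recent OpenNMT-tf versions and may differ in older releases), a trained model can be exported and then loaded with the ctranslate2 Python package:

```bash
# Export a trained model to the CTranslate2 format
# (--output_dir/--format option names are assumed from recent versions).
onmt-main --config my_config.yml --auto_config export \
    --output_dir ende_ctranslate2 --format ctranslate2
```

```python
import ctranslate2

# Load the exported model and translate a pre-tokenized sentence
# (API of recent ctranslate2 releases).
translator = ctranslate2.Translator("ende_ctranslate2", device="cpu")
results = translator.translate_batch([["Hello", "world", "!"]])
print(results[0].hypotheses[0])
```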

### Dynamic data pipeline

OpenNMT-tf does not require compiling the data before training. Instead, it can read text files directly and preprocess the data as needed during training. This enables on-the-fly tokenization and data augmentation by injecting random noise.
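
For example, on-the-fly tokenization can be enabled from the data configuration. This is a minimal sketch assuming the source_tokenization and target_tokenization keys, which point to OpenNMT Tokenizer configuration files:

```yaml
data:
  train_features_file: train.en
  train_labels_file: train.de
  # Tokenization applied on the fly while reading the text files
  # (paths to OpenNMT Tokenizer configurations; illustrative).
  source_tokenization: tokenization/source.yml
  target_tokenization: tokenization/target.yml
```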

### Model fine-tuning

OpenNMT-tf supports model fine-tuning workflows:

* Model weights can be transferred to new word vocabularies, e.g. to inject domain terminology before fine-tuning on in-domain data (see the sketch after this list)
* Contrastive learning to reduce word omission errors
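
A minimal sketch of the vocabulary update step, assuming the --src_vocab/--tgt_vocab options of the update_vocab run type (option names may differ between versions):

```bash
# Create a new checkpoint whose embeddings are remapped to new vocabularies.
onmt-main --config my_config.yml --auto_config update_vocab \
    --output_dir new_checkpoint \
    --src_vocab new-src-vocab.txt --tgt_vocab new-tgt-vocab.txt
```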

### Source-target alignment

Sequence to sequence models can be trained with guided alignment, and alignment information is returned as part of the translation API.
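
A minimal sketch of a guided alignment configuration, assuming the train_alignments data key and the guided_alignment_type/guided_alignment_weight parameters (check the documentation for your version):

```yaml
data:
  # Alignment file for the training data, one line per sentence pair.
  train_alignments: train-alignments.txt

params:
  # Supervise the attention weights with the provided alignments.
  guided_alignment_type: ce
  guided_alignment_weight: 1
```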

OpenNMT-tf also implements most of the techniques commonly used to train and evaluate sequence models, such as:

* automatic evaluation during the training
* multiple decoding strategies: greedy search, beam search, random sampling
* N-best rescoring
* scheduled sampling
* checkpoint averaging (see the sketch after this list)
* ... and more!
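
For example, checkpoint averaging has its own run type; a minimal sketch, where the --max_count option name is assumed from recent versions:

```bash
# Average the last training checkpoints into a single checkpoint.
onmt-main --config my_config.yml --auto_config average_checkpoints \
    --output_dir averaged_checkpoint --max_count 8
```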

See the documentation to learn how to use these features.

## Usage

OpenNMT-tf requires:

* Python 3.7 or above
* TensorFlow 2.6, 2.7, 2.8, or 2.9

We recommend installing it with pip:

```bash
pip install --upgrade pip
pip install OpenNMT-tf
```


### Command line

OpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.

For all tasks involving model execution, OpenNMT-tf uses a single entrypoint: onmt-main. A typical OpenNMT-tf run consists of 3 elements:

* the model type
* the parameters described in a YAML file
* the run type such as train, eval, infer, export, score, average_checkpoints, or update_vocab

that are passed to the main script:

```bash
onmt-main --model_type <model> --config <config_file.yml> --auto_config <run_type> <run_options>
```
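
For example, the command below trains a Transformer model with automatic configuration and periodic evaluation (data.yml is an illustrative configuration file):

```bash
onmt-main --model_type Transformer --config data.yml --auto_config train --with_eval
```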


### Library

OpenNMT-tf also exposes well-defined and stable APIs, from high-level training utilities to low-level model layers and dataset transformations.

For example, the Runner class can be used to train and evaluate models in a few lines of code:

```python
import opennmt

config = {
    "model_dir": "/data/wmt-ende/checkpoints/",
    "data": {
        "source_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "target_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "train_features_file": "/data/wmt-ende/train.en",
        "train_labels_file": "/data/wmt-ende/train.de",
        "eval_features_file": "/data/wmt-ende/valid.en",
        "eval_labels_file": "/data/wmt-ende/valid.de",
    },
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train(num_devices=2, with_eval=True)
```
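
Once trained, the same runner can translate files; a short sketch assuming the features_file and predictions_file arguments of the infer method:

```python
# Translate a file with the latest checkpoint and write the predictions.
runner.infer(
    "/data/wmt-ende/test.en",
    predictions_file="/data/wmt-ende/test.de.out",
)
```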


Here is another example using OpenNMT-tf to run efficient beam search with a self-attentional decoder:

```python
decoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6, vocab_size=32000)

# memory and memory_sequence_length are the encoder outputs and their lengths.
initial_state = decoder.initial_state(
    memory=memory, memory_sequence_length=memory_sequence_length
)

batch_size = tf.shape(memory)[0]
start_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)

# target_embedding is the target embedding matrix used to embed the decoded ids.
decoding_result = decoder.dynamic_decode(
    target_embedding,
    start_ids=start_ids,
    initial_state=initial_state,
    decoding_strategy=opennmt.utils.BeamSearch(4),
)
```


More examples using OpenNMT-tf as a library can be found online:

* The directory examples/library contains additional examples that use OpenNMT-tf as a library
* nmt-wizard-docker uses the high-level opennmt.Runner API to wrap OpenNMT-tf with a custom interface for training, translating, and serving

For a complete overview of the APIs, see the package documentation.
