
Multilingual Language Modeling Toolkit

Project description

langdist is a Python project for experimenting with character-level multilingual language modeling: studying how learning a character-level language model in one language helps learning a character-level language model in another language. The project is still under development and offers limited functionality.

Features

  • Download and preprocess multilingual parallel corpora ([Multilingual Bible Parallel Corpus](http://christos-c.com/bible/))

  • Train a monolingual language model - a language model trained on a single language

  • Train a bilingual language model - a language model trained on top of another language model (its parameters are initialized from the other model's parameters)

  • Generate texts using a trained language model

Installation

  • Runs on Ubuntu 14.04 LTS and Mac OS X 10.x (not tested on other operating systems)

  • Tested only on Python 3.5

langdist depends on [NumPy and SciPy](https://www.scipy.org/install.html), Python packages for scientific computing. You may need to install them before installing langdist.

You can install langdist by:

`pip install langdist`

This installs the langdist package into your Python environment and adds the langdist command to your PATH.

langdist also depends on the tensorflow package. By default, it installs the CPU-only version of tensorflow. If you want to use a GPU, you need to install tensorflow with GPU support yourself (see [Installing TensorFlow](https://www.tensorflow.org/install/)).
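For reference, at the time of TensorFlow 1.x the GPU-enabled build was published as a separate tensorflow-gpu package, so (assuming a compatible CUDA/cuDNN setup) it could typically be installed with:

`pip install tensorflow-gpu`

See the TensorFlow installation guide linked above for the variant matching your system.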

Usage

After installation, `langdist --help` prints usage information for the langdist command.

### 1. Download and preprocess a corpus

langdist provides a command to download and preprocess a corpus from the [Multilingual Bible Parallel Corpus](http://christos-c.com/bible/). The following command will download an English corpus and save it to ./en_corpus.pkl.

`langdist download-bible en en_corpus.pkl`

Note that en here is the language code for English. Specifying an invalid language code will print an error message that lists the valid language codes.
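The other corpora used in the examples below can be downloaded the same way (assuming fr and ja are the corpus's language codes for French and Japanese; if they are not, the error message will list the valid codes):

`langdist download-bible fr fr_corpus.pkl`

`langdist download-bible ja ja_corpus.pkl`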

### 2. Fit an encoder on the characters used in corpora

You need to fit an encoder to the characters used in the corpora before you train a language model on them. Note that the same encoder is reused when you train a new language model on top of another language model (a multilingual language model). Therefore, you need to fit the encoder to all the corpora you will train multilingual language models on.

The following command will fit an encoder to English, French, and Japanese corpora and save it to ./en_fr_ja_encoder.pkl:

`langdist fit-encoder en_fr_ja_encoder.pkl en_corpus.pkl fr_corpus.pkl ja_corpus.pkl`

Note that xx_corpus.pkl is a pickle file of a corpus, which can be generated by the langdist download-bible command. You can also create a list of texts yourself and save it to a pickle file (each element of the list corresponds to a segment such as a sentence, paragraph, or article, depending on your purpose), as sketched below.
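A minimal sketch of preparing such a corpus pickle yourself, using only the standard library (the file name and texts are placeholders):

```python
import pickle

# Each element of the list is one segment (sentence, paragraph, article, ...).
corpus = [
    "First segment of text.",
    "Second segment of text.",
    "Third segment of text.",
]

# Save the list of texts as a pickle file; the result plays the role of
# xx_corpus.pkl in the commands above.
with open("my_corpus.pkl", "wb") as f:
    pickle.dump(corpus, f)
```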

### 3. Train a language model from scratch (monolingual language model)

The following command will train a French language model and save it to ./fr_model directory:

`langdist train fr_corpus.pkl en_fr_ja_encoder.pkl fr_model --patience=819200 --logpath=fr.log`

Note that using an encoder that was not fit to the corpus will raise an exception. The --patience option specifies how many iterations to keep training, and the --logpath option specifies the path to a log file that records the progress of training (no log file is created if you don't specify the option).

During training, various stats are dumped to the path_to_model_dir/tensorboard.log directory. You can visualize them with TensorBoard by running `tensorboard --logdir=path_to_model_dir/tensorboard.log`. The model is saved each time validation perplexity is computed, so it can be used before training finishes.

Check the output of `langdist --help` to see what other options are available for training a language model.

### 4. Train a new language model on top of another language model (multilingual language model)

The following command will train an English language model on top of the French language model trained above and save it to the fr2en_model directory:

`langdist retrain fr_model en_corpus.pkl fr2en_model --patience=819200 --logpath=langdist.log`

Note that you don't have to specify the path to an encoder, because the model in fr_model includes it. If the encoder used when training fr_model was not fit to the characters in en_corpus.pkl, an exception will be raised.

As in step 3, training stats are dumped to the path_to_model_dir/tensorboard.log directory and can be visualized by running `tensorboard --logdir=path_to_model_dir/tensorboard.log`. The model is saved each time validation perplexity is computed, so it can be used before training finishes.

Check the output of `langdist --help` to see what other options are available for training a language model.

### 5. Generate texts using a trained language model

Once you have trained a language model, the following command will generate texts with it:

`langdist generate fr2en_model --sample-num=50`

The --sample-num option specifies the number of texts to generate. Note that each text is independently generated (sampled) by the language model.

Check the output of `langdist --help` to see what other options are available for generating texts.

### Use langdist from Python

langdist can also be used as a normal Python package by importing the langdist package, which pip install langdist installs into your Python environment. Reading langdist/cli.py is a good way to figure out how to use the package.
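The training and generation entry points are therefore best looked up in langdist/cli.py rather than guessed. As a safe starting point, the corpus pickles produced above are plain lists of texts and can be inspected with nothing but the standard library:

```python
import pickle

# Load a corpus produced by `langdist download-bible`
# (or one you pickled yourself, as in step 2).
with open("en_corpus.pkl", "rb") as f:
    corpus = pickle.load(f)

print(type(corpus))    # expected to be a list of text segments
print(len(corpus))     # number of segments
print(corpus[0][:80])  # beginning of the first segment

# For training or generation from Python, mirror what the command-line tool
# does: the functions called in langdist/cli.py are the package's entry points.
```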

TODO: Add a link to the blog post Bilingual Character-level Neural Language Modeling

Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution

langdist-0.4.1-py3-none-any.whl (23.7 kB)

Uploaded Python 3

File details

Details for the file langdist-0.4.1-py3-none-any.whl.

File hashes

Hashes for langdist-0.4.1-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 81af8cbeecc422a9eae24da4a0c988542090f05e469d7cd3f3ef265b30c22047 |
| MD5 | f89d3e3f31b4c0e6001a79ea376bba8b |
| BLAKE2b-256 | c3079407cdf0459d23e2f368017774b3cf3f85545c7622fa8b48bc63295b56c5 |

