
A Python module to generate word embeddings from tiny data

Project description


nonce2vec

Welcome to Nonce2Vec!

The main branch of this repository now corresponds to the Kabbach et al. (2019) ACL SRW paper Towards incremental learning of word embeddings using context informativeness.

If you are looking for the Herbelot and Baroni (2017) repository, check out the emnlp2017 branch.

If you use this code, please cite:

@inproceedings{kabbach-etal-2019-towards,
    title = "Towards Incremental Learning of Word Embeddings Using Context Informativeness",
    author = "Kabbach, Alexandre  and
      Gulordava, Kristina  and
      Herbelot, Aur{\'e}lie",
    booktitle = "Proceedings of the 57th Conference of the Association for Computational Linguistics: Student Research Workshop",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-2022",
    pages = "162--168"
}

Abstract

In this paper, we investigate the task of learning word embeddings from very sparse data in an incremental, cognitively-plausible way. We focus on the notion of informativeness, that is, the idea that some content is more valuable to the learning process than other. We further highlight the challenges of online learning and argue that previous systems fall short of implementing incrementality. Concretely, we incorporate informativeness in a previously proposed model of nonce learning, using it for context selection and learning rate modulation. We test our system on the task of learning new words from definitions, as well as on the task of learning new words from potentially uninformative contexts. We demonstrate that informativeness is crucial to obtaining state-of-the-art performance in a truly incremental setup.

A note on the code

We have significantly refactored the original Nonce2Vec code in order to make replication easier and to make it work with gensim v3.x. You can use Nonce2Vec v2.x to replicate the results of the SRW paper. However, to replicate the results of the original EMNLP paper, refer to Nonce2Vec v1.x found under the emnlp2017 branch, as we cannot guarantee fair replication between v1.x and v2.x.

Install

You can install Nonce2Vec via pip:

pip3 install nonce2vec

or, after a git clone, via:

python3 setup.py install

Pre-requisites

To run Nonce2Vec, you need two gensim Word2Vec models: a skipgram model and a cbow model used to compute informativeness metrics. You can download the skipgram model from:

wget backup.3azouz.net/gensim.w2v.skipgram.model.7z

and the cbow model from:

wget backup.3azouz.net/gensim.w2v.cbow.model.7z

or generate both yourself following the instructions below.
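
Both archives are 7z-compressed and need to be extracted (e.g. with 7z x) before use; the extracted files are standard gensim Word2Vec models. Below is a minimal loading sketch, assuming the extracted files keep the names gensim.w2v.skipgram.model and gensim.w2v.cbow.model (adjust the paths to wherever you extracted them):

from gensim.models import Word2Vec

# Load the pre-trained background models (gensim 3.x API).
# Paths are placeholders -- adjust to wherever the archives were extracted.
skipgram = Word2Vec.load('/absolute/path/to/gensim.w2v.skipgram.model')
cbow = Word2Vec.load('/absolute/path/to/gensim.w2v.cbow.model')

# Quick sanity check on the skipgram background space.
print(skipgram.wv.most_similar('dog', topn=5))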

Generating a Word2Vec model from a Wikipedia dump

You can download our English Wikipedia dump of January 2019 here:

wget backup.3azouz.net/enwiki.20190120.7z

If you want to generate a completely new (tokenized-one-sentence-per-line) dump of Wikipedia, for English or any other language, check out WiToKit.

Once you have a Wikipedia txt dump, you can generate a gensim Word2Vec skipgram model via:

n2v train \
  --data /absolute/path/to/wikipedia/tokenized/text/dump \
  --outputdir /absolute/path/to/dir/where/to/store/w2v/model \
  --alpha 0.025 \
  --neg 5 \
  --window 5 \
  --sample 1e-3 \
  --epochs 5 \
  --min-count 50 \
  --size 400 \
  --num-threads number_of_cpu_threads_to_use \
  --train-mode skipgram

and a gensim Word2Vec cbow model via:

n2v train \
  --data /absolute/path/to/wikipedia/tokenized/text/dump \
  --outputdir /absolute/path/to/dir/where/to/store/w2v/model \
  --alpha 0.025 \
  --neg 5 \
  --window 5 \
  --sample 1e-3 \
  --epochs 5 \
  --min-count 50 \
  --size 400 \
  --num-threads number_of_cpu_threads_to_use \
  --train-mode cbow
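
For reference, the n2v train command builds on gensim's Word2Vec implementation. The sketch below is a rough equivalent of the skipgram configuration above written directly against the gensim 3.x API; it is illustrative only, not the exact code the CLI runs, and the paths and worker count are placeholders:

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One tokenized sentence per line, as produced by WiToKit (placeholder path).
sentences = LineSentence('/absolute/path/to/wikipedia/tokenized/text/dump')

model = Word2Vec(
    sentences,
    alpha=0.025,   # initial learning rate
    negative=5,    # number of negative samples
    window=5,      # context window size
    sample=1e-3,   # subsampling threshold
    iter=5,        # epochs (called iter in gensim 3.x)
    min_count=50,  # ignore rarer words
    size=400,      # vector dimensionality (called size in gensim 3.x)
    workers=4,     # number of CPU threads
    sg=1,          # 1 = skipgram; set sg=0 for cbow
)
model.save('/absolute/path/to/dir/where/to/store/w2v/model/wiki.w2v.model')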

To check the correlation of your word2vec model(s) with the MEN dataset, run:

n2v check-men \
  --model /absolute/path/to/gensim/w2v/model
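
The same kind of check can also be run directly from Python with gensim's built-in word-pair evaluator. In the minimal sketch below, the location and space-separated format of the MEN file are assumptions about your local copy:

from gensim.models import Word2Vec

model = Word2Vec.load('/absolute/path/to/gensim/w2v/model')

# MEN natural-form file: space-separated "word1 word2 score" lines (placeholder path).
pearson, spearman, oov_ratio = model.wv.evaluate_word_pairs(
    '/absolute/path/to/MEN_dataset_natural_form_full', delimiter=' ')
print('Spearman rho: %.3f (p=%.2e), OOV: %.1f%%' % (spearman[0], spearman[1], oov_ratio))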

Running the code

Running Nonce2Vec on the definitional or chimeras datasets is done via the n2v test command. You can pass in the --reload parameter to run in one-shot mode; without it, the code runs in incremental mode by default. You can further pass in the --shuffle parameter to shuffle the test set before running n2v.

Below is a list of commands corresponding to the experiments reported in the SRW 2019 paper. For example, to test the SUM CWI model (a basic sum model with context-word-informativeness-based filtering), which provides a rather robust baseline on all datasets in the incremental setup, run the following for the definitional dataset:

n2v test \
  --on def \
  --model /absolute/path/to/gensim/w2v/skipgram/model \
  --info-model /absolute/path/to/gensim/w2v/cbow/model \
  --sum-only \
  --sum-filter cwi \
  --sum-threshold 0

To run the N2V CWI alpha model on the chimeras L4 test set, with shuffling and in the one-shot evaluation setup (which provides SOTA performance), do:

n2v test \
  --on l4 \
  --model /absolute/path/to/gensim/w2v/skipgram/model \
  --info-model /absolute/path/to/gensim/w2v/cbow/model \
  --sum-filter cwi \
  --sum-threshold 0 \
  --train-with cwi_alpha \
  --alpha 1.0 \
  --beta 1000 \
  --kappa 1 \
  --neg 3 \
  --epochs 1 \
  --shuffle \
  --reload

To test N2V as-is (the original N2V code without background freezing), in incremental setup on the definitional dataset, do:

n2v test \
  --on def \
  --model /absolute/path/to/gensim/w2v/skipgram/model \
  --sum-filter random \
  --sample 10000 \
  --alpha 1.0 \
  --neg 3 \
  --window 15 \
  --epochs 1 \
  --lambda 70 \
  --sample-decay 1.9 \
  --window-decay 5 \
  --replication

To test N2V CWI init (the original N2V with CWI-based sum initialization) on the definitional dataset in one-shot evaluation setup, do:

n2v test \
  --on def \
  --model /absolute/path/to/gensim/w2v/skipgram/model \
  --info-model /absolute/path/to/gensim/w2v/cbow/model \
  --sum-filter cwi \
  --sum-threshold 0 \
  --alpha 1.0 \
  --neg 3 \
  --window 15 \
  --epochs 1 \
  --lambda 70 \
  --sample-decay 1.9 \
  --window-decay 5 \
  --replication \
  --reload

