hyperhyper

Python Library to Construct Word Embeddings for Small Data. Still a work in progress.

Building upon the work by Omer Levy et al. on Hyperwords.

Why?

Nowadays, word embeddings are mostly associated with Word2vec or fastText. Those approaches focus on scenarios where an abundance of data is available, and to make them work well, you also need a lot of data. This is not always the case. There exist alternative methods based on counting word pairs and some math magic around matrix operations, and they need less data. This Python library implements these approaches (somewhat) efficiently, although there is still room for improvement.
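
To give an intuition for what "counting word pairs and matrix operations" means, here is a toy, self-contained sketch of the count-based recipe (PPMI weighting followed by a truncated SVD). It only illustrates the idea; it is not hyperhyper's actual implementation.

import numpy as np
from collections import Counter

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

vocab = sorted({w for sent in sentences for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# count word/context pairs within a window of 2
pairs = Counter()
for sent in sentences:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if i != j:
                pairs[(idx[w], idx[sent[j]])] += 1

counts = np.zeros((len(vocab), len(vocab)))
for (i, j), c in pairs.items():
    counts[i, j] = c

# positive pointwise mutual information (PPMI)
total = counts.sum()
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log((counts / total) / (p_w * p_c))
ppmi = np.maximum(pmi, 0)

# a truncated SVD of the PPMI matrix yields small, dense word vectors
u, s, _ = np.linalg.svd(ppmi)
dim = 2
word_vectors = u[:, :dim] * s[:dim]
print(word_vectors.shape)  # (7, 2): one vector per word in the vocabulary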

hyperhyper is based on a paper from 2015. The authors, Omer Levy et al., published their research code as Hyperwords. I tried to port their original software to Python 3 but ended up rewriting large parts of it. So this library was born.

Limitations: With hyperhyper you will run into (memory) problems if you need large vocabularies (the set of possible words). It's fine for vocabularies of up to about 50k words. Word2vec and fastText, in particular, were designed to get around this curse of dimensionality.
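
A rough back-of-the-envelope calculation illustrates the problem (assuming 8-byte floats and a fully dense pair matrix, i.e. the worst case; in practice the matrices are sparse, but memory still becomes the bottleneck):

50,000 words:   50,000 × 50,000 × 8 bytes ≈ 20 GB
500,000 words: 500,000 × 500,000 × 8 bytes ≈ 2 TB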

Installation

pip install hyperhyper

If you have an Intel CPU, it's recommended to use the MKL library for numpy. It can be challenging to set up MKL correctly; a package by Intel may help you.

conda install -c intel intelpython3_core
pip install hyperhyper

Verify whether mkl_info is present:

>>> import numpy
>>> numpy.__config__.show()

Disable the internal multithreading of MKL or OpenBLAS:

export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1

This speeds up computation because hyperhyper already uses multiprocessing in an outer loop.
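
If you prefer to configure this from Python, the variables are typically read when the BLAS backend is loaded, so set them before numpy (and thus hyperhyper) is imported. A small sketch, not part of hyperhyper's API:

import os

# must run before the first import of numpy
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy
import hyperhyper as hy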

Usage

import hyperhyper as hy

corpus = hy.Corpus.from_file('news.2010.en.shuffled')  # build a corpus (and vocabulary) from a raw text file
bunch = hy.Bunch("news_bunch", corpus)  # preprocess once and store the data in a "bunch"
vectors, results = bunch.svd(keyed_vectors=True)  # construct SVD-based word vectors and evaluate them

results['results'][1]
>>> {'name': 'en_ws353',
 'score': 0.6510955349164682,
 'oov': 0.014164305949008499,
 'fullscore': 0.641873218557878}

vectors.most_similar('berlin')
>>> [('vienna', 0.6323208808898926),
 ('frankfurt', 0.5965485572814941),
 ('munich', 0.5737138986587524),
 ('amsterdam', 0.5511572360992432),
 ('stockholm', 0.5423270463943481)]

See examples for more.
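
The keyed_vectors=True flag suggests that the returned vectors follow gensim's KeyedVectors interface (an assumption based on the most_similar call above). If that holds, they can be persisted and reloaded with gensim itself; the file name below is just an example:

vectors.save('news_vectors.kv')

from gensim.models import KeyedVectors
vectors = KeyedVectors.load('news_vectors.kv')
vectors.most_similar('berlin')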

The general concepts:

  • Preprocess data once and save it in a bunch
  • Cache all results and also record their performance on test data
  • Make it easy to fine-tune parameters for your data

More documentation may be forthcoming. Until then, you have to read the source code.

Scientific Background

This software is based on the following papers:

  • Improving Distributional Similarity with Lessons Learned from Word Embeddings, Omer Levy, Yoav Goldberg, Ido Dagan, TACL 2015. Paper Code

    Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.

  • The Influence of Down-Sampling Strategies on SVD Word Embedding Stability, Johannes Hellrich, Bernd Kampe, Udo Hahn, NAACL 2019. Paper Code Code

    The stability of word embedding algorithms, i.e., the consistency of the word representations they reveal when trained repeatedly on the same data set, has recently raised concerns. We here compare word embedding algorithms on three corpora of different sizes, and evaluate both their stability and accuracy. We find strong evidence that down-sampling strategies (used as part of their training procedures) are particularly influential for the stability of SVD-PPMI-type embeddings. This finding seems to explain diverging reports on their stability and lead us to a simple modification which provides superior stability as well as accuracy on par with skip-gram embeddings.
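
For reference, the matrix at the heart of these count-based methods is built from (shifted) positive pointwise mutual information, as defined by Levy et al. (2015), where P̂ denotes co-occurrence probabilities estimated from the corpus and k plays the role of the number of negative samples:

$$
\mathrm{PMI}(w, c) = \log \frac{\hat{P}(w, c)}{\hat{P}(w)\,\hat{P}(c)}, \qquad
\mathrm{SPPMI}_k(w, c) = \max\big(\mathrm{PMI}(w, c) - \log k,\ 0\big)
$$

The word vectors are then obtained from a truncated SVD of this matrix.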

Development

  1. Install pipenv.
  2. git clone https://github.com/jfilter/hyperhyper && cd hyperhyper && pipenv install && pipenv shell
  3. python -m spacy download en_core_web_sm
  4. pytest tests

Contributing

If you have a question, have found a bug, or want to propose a new feature, have a look at the issues page.

Pull requests are especially welcome when they fix bugs or improve the code quality.

Future Work / TODO

  • evaluation for analogies
  • replace pipenv if it still doesn't ship a newer release
  • implement counting in a more efficient programming language, e.g. Cython.

Why is this library named hyperhyper?

Scooter – Hyper Hyper (Song)

License

BSD-2-Clause.

Sponsoring

This work was created as part of a project that was funded by the German Federal Ministry of Education and Research.
