FlyVec
Sparse Binary Word Embeddings Inspired by the Fruit Fly Brain
Code based on the ICLR 2021 paper "Can a Fruit Fly Learn Word Embeddings?".
In this work we use a well-established neurobiological network motif from the mushroom body of the fruit fly brain to learn sparse binary word embeddings from raw unstructured text. This package allows the user to access pre-trained word embeddings and generate sparse binary hash codes for individual words.
Interactive demos of the learned concepts are available at flyvec.org.
How to use
Install from Pip (recommended)
pip install flyvec
Installing from Source
After cloning:
conda env create -f environment-dev.yml
conda activate flyvec
pip install -e .
Basic Usage
The example below illustrates how to access the binary word embedding for an individual token at the default hash length k=50.
import numpy as np
from flyvec import FlyVec
model = FlyVec.load()
embed_info = model.get_sparse_embedding("market"); embed_info
{'token': 'market',
'id': 1180,
'embedding': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0,
1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0], dtype=int8)}
Changing the Hash Length
FlyVec embeddings can be obtained for any hash length by passing it as the second argument:
small_embed = model.get_sparse_embedding("market", 4); np.sum(small_embed['embedding'])
4
Handling "unknown" tokens
FlyVec uses a simple, word-based tokenizer. The provided model uses a vocabulary of about 20,000 words, all lower-cased, with special tokens for numbers (<NUM>) and unknown words (<UNK>). Unknown tokens have the token id 0, which can be used to filter them out.
unk_embed = model.get_sparse_embedding("DefNotAWord")
if unk_embed['id'] == 0:
print("I AM THE UNKNOWN TOKEN DON'T USE ME FOR ANYTHING IMPORTANT")
I AM THE UNKNOWN TOKEN DON'T USE ME FOR ANYTHING IMPORTANT
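As a usage sketch (not a package API), unknown tokens can be dropped from a tokenized sentence by checking their ids; the example sentence here is hypothetical:

# Keep only in-vocabulary tokens; id 0 marks <UNK>
tokens = model.tokenize("FlyVec dislikes TotallyUnseenWordX")
known = [t for t in tokens if model.get_sparse_embedding(t)["id"] != 0]
print(known)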
Batch generating word embeddings
Embeddings for individual words in a sentence can be obtained using this snippet.
sentence = "Supreme Court dismissed the criminal charges."
tokens = model.tokenize(sentence)
embedding_info = [model.get_sparse_embedding(t) for t in tokens]
embeddings = np.array([e['embedding'] for e in embedding_info])
print("TOKENS: ", [e['token'] for e in embedding_info])
print("EMBEDDINGS: ", embeddings)
TOKENS: ['supreme', 'court', 'dismissed', 'the', 'criminal', 'charges']
EMBEDDINGS: [[0 1 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]]
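Because the embeddings are binary, one simple way to compare two hash codes (a usage sketch, not part of the package API) is to count their shared active bits:

# Overlap of active bits between two hash codes (assumes both words are in-vocabulary)
a = model.get_sparse_embedding("supreme")["embedding"]
b = model.get_sparse_embedding("court")["embedding"]
print(int(np.dot(a.astype(np.int32), b.astype(np.int32))))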
FlyVec vocabulary
The vocabulary under the hood uses the gensim Dictionary and can be accessed by either IDs (ints) or tokens (strs).
# The tokens in the vocabulary
print(model.token_vocab[:5])
# The IDs that correspond to those tokens
print(model.vocab[:5])
# The dictionary object itself
model.dictionary;
['properties', 'a', 'among', 'and', 'any']
[2, 3, 4, 5, 6]
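Since this is a standard gensim Dictionary, the usual gensim lookups apply; a minimal sketch:

# Map between tokens and IDs via the underlying gensim Dictionary
token_id = model.dictionary.token2id["market"]  # token -> id
token = model.dictionary[token_id]              # id -> token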
Simple word embeddings
Only care about the sparse, context independent word embeddings for our small vocabulary? Get precomputed word vectors at hash_length=51
below:
wget https://raw.githubusercontent.com/bhoov/flyvec/master/simple-flyvec-embeddings.json
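The schema of this JSON file is not documented here; assuming it maps each token to its embedding, a cautious way to load and inspect it:

import json

with open("simple-flyvec-embeddings.json") as f:
    embeddings = json.load(f)

# Inspect one entry to confirm the schema before relying on it
print(next(iter(embeddings.items())))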
Training
Please note that the training code is included, though the code for processing the inputs is not.
Prerequisites
You need a Python environment with numpy installed, and a system that supports CUDA with nvcc and g++ available.
Building the Source Files
flyvec_compile
(Or, if using from source, you can also run make training.)
Note that you will see some warnings. This is expected.
Training
flyvec_train path/to/encodings.npy path/to/offsets.npy -o save/checkpoints/in/this/directory
Description of Inputs
- encodings.npy -- an np.int32 array of shape (N,) containing the vocabulary IDs of the tokenized input corpus, where N is the number of tokens in the corpus.
- offsets.npy -- an np.uint64 array of shape (C,), where C is the number of chunks in the corpus. Each value is the index within encodings.npy at which a new chunk begins. (Chunks can be thought of as sentences or paragraphs within the corpus: boundaries over which the sliding window does not cross.) See the sketch below for one way to generate both files.
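These files are not produced by the package itself; below is a minimal sketch for generating them from raw text, using only calls shown earlier (the corpus and its chunking are hypothetical):

import numpy as np
from flyvec import FlyVec

model = FlyVec.load()
# Hypothetical corpus: each string is one chunk (e.g. a sentence or paragraph)
chunks = ["The market rallied.", "The court dismissed the charges."]

encodings, offsets = [], []
for chunk in chunks:
    offsets.append(len(encodings))  # index in encodings where this chunk begins
    encodings.extend(model.get_sparse_embedding(t)["id"] for t in model.tokenize(chunk))

np.save("encodings.npy", np.asarray(encodings, dtype=np.int32))
np.save("offsets.npy", np.asarray(offsets, dtype=np.uint64))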
Description of Outputs
- model_X.npy -- stores a checkpoint after every epoch X within the specified output directory.

See flyvec_train --help for more options.
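Each checkpoint is a plain numpy array of learned weights; a minimal sketch for loading one (the path and shape depend on your run):

import numpy as np

W = np.load("save/checkpoints/in/this/directory/model_1.npy")  # hypothetical checkpoint path
print(W.shape, W.dtype)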
Debugging tips
BadZipFile
You see:
File "/usr/lib/python3.6/zipfile.py", line 1198, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
Run:
from flyvec import FlyVec
FlyVec.load(force_redownload=True)
Citation
If you use this in your work, please cite:
@article{liang2021flyvec,
  title={Can a Fruit Fly Learn Word Embeddings?},
  author={Liang, Yuchen and Ryali, Chaitanya K and Hoover, Benjamin and Grinberg, Leopold and Navlakha, Saket and Zaki, Mohammed J and Krotov, Dmitry},
  journal={arXiv preprint arXiv:2101.06887},
  year={2021},
  url={https://arxiv.org/abs/2101.06887}
}