Tool for creating document features
Vectors of Locally Aggregated Concepts (VLAC)
As illustrated in the Figure below, VLAC clusters word embeddings to create k concepts. Due to the high dimensionality of word embeddings (typically 300), spherical k-means is used to perform the clustering, since with Euclidean distance the distances between high-dimensional samples become nearly uniform and therefore uninformative. The method works as follows. Let wi be a word embedding of size D assigned to cluster center ck. Then, for each word in a document, VLAC computes the element-wise sum of residuals of each word embedding to its assigned cluster center. This results in k feature vectors, one for each concept, each of size D. All feature vectors are then concatenated, power normalized, and finally, l2 normalization is applied. For example, if 10 concepts were to be created out of word embeddings of size 300, the resulting document vector would contain 10 x 300 = 3,000 values.
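The aggregation step above can be sketched as follows. This is not the package's implementation, just a minimal illustration: it assumes scikit-learn's standard KMeans fitted on l2-normalized embeddings as an approximation of spherical k-means, and the function name vlac_vector is hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def vlac_vector(word_embeddings, kmeans):
    """Aggregate a document's word embeddings into one VLAC-style feature vector."""
    k, d = kmeans.cluster_centers_.shape
    assignments = kmeans.predict(word_embeddings)
    features = np.zeros((k, d))
    # Element-wise sum of residuals of each word to its assigned cluster center
    for w, c in zip(word_embeddings, assignments):
        features[c] += w - kmeans.cluster_centers_[c]
    features = features.ravel()  # concatenate the k concept vectors of size d
    features = np.sign(features) * np.sqrt(np.abs(features))  # power normalization
    norm = np.linalg.norm(features)
    return features / norm if norm > 0 else features  # l2 normalization

# Toy example: 40 random "word embeddings" of size 300, clustered into 10 concepts
rng = np.random.default_rng(0)
vocab = rng.normal(size=(40, 300))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)  # unit length, as in spherical k-means
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(vocab)
doc_vector = vlac_vector(vocab[:15], kmeans)
print(doc_vector.shape)  # (3000,): 10 concepts x 300 dimensions

Note that the resulting vector has unit l2 norm, so document similarity can be computed directly with a dot product.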
Tested in Python 3.5.4.
# Train model and transform collection of documents
vlac_model = VLAC(documents=train_docs, model=model, oov=False)
vlac_features, kmeans = vlac_model.fit_transform(num_concepts=30)

# Create features for new documents
vlac_model = VLAC(documents=test_docs, model=model, oov=False)
test_features = vlac_model.transform(kmeans=kmeans)