Implementation of Gaussian LDA topic model, with efficiency tricks
Project description
Gaussian LDA
Another implementation of the paper Gaussian LDA for Topic Models with Word Embeddings.
This is a Python implementation based as closely as possible on the Java implementation released by the paper's authors.
Installation
You'll first need to install the choldate package, following its installation instructions. (It's not possible to include this as a dependency of the PyPI package.)
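Since choldate is not distributed on PyPI, it is usually installed straight from its GitHub repository. A sketch of the command, assuming the repository location hasn't changed (check choldate's own instructions for the current one):

pip install git+https://github.com/jcrudy/choldate.git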
Then install gaussianlda using Pip:
pip install gaussianlda
Usage
The package provides two classes for training Gaussian LDA:
- Cholesky only, gaussianlda.GaussianLDATrainer: a simple Gibbs sampler with an optional Cholesky decomposition trick.
- Cholesky+aliasing, gaussianlda.GaussianLDAAliasTrainer: Cholesky decomposition (not optional) and the Vose aliasing trick.
The trainer is prepared by instantiating the training class with the following arguments (a small sketch of preparing the corpus follows this list):
- corpus: List of documents, where each document is a list of int IDs of words. These are IDs into the vocabulary and the embeddings matrix.
- vocab_embeddings: (V, D) Numpy array, where V is the number of words in the vocabulary and D is the dimensionality of the embeddings.
- vocab: Vocabulary, given as a list of words, whose positions correspond to the indices used in the data. This is not strictly needed for training, but is used to output topics.
- num_tables: Number of topics to learn.
- alpha, kappa: Hyperparameters of the doc-topic Dirichlet distribution and the inverse Wishart prior.
- save_path: Path to write the model out to after each iteration.
- mh_steps (aliasing only): Number of Metropolis-Hastings steps for each topic sample.
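Because the corpus is given as integer IDs, raw text first has to be mapped through the vocabulary. A minimal sketch of that preparation step (the documents here are made up for illustration):

# Tokenized documents, as lists of words
docs = [
    "money bank finance bank".split(),
    "sheep cow pig goat".split(),
]
# Build the vocabulary and a word -> ID mapping
vocab = sorted({word for doc in docs for word in doc})
word2id = {word: i for i, word in enumerate(vocab)}
# Each document becomes a list of int IDs into the vocabulary
corpus = [[word2id[word] for word in doc] for doc in docs]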
Then you set the sampler running for a specified number of iterations over the training data by calling trainer.sample(num_iters).
Example
import numpy as np
from gaussianlda import GaussianLDAAliasTrainer
# A small vocabulary as a list of words
vocab = "money business bank finance sheep cow goat pig".split()
# A random embedding for each word
# Really, you'd want to load something more useful!
embeddings = np.random.sample((8, 100)).astype(np.float32)
# A tiny corpus: each document is a list of word IDs into the vocabulary
corpus = [
[0, 2, 1, 1, 3, 0, 6, 1],
[3, 1, 1, 3, 7, 0, 1, 2],
[7, 5, 4, 7, 7, 4, 6],
[5, 6, 1, 7, 7, 5, 6, 4],
]
# Prepare a trainer: 2 topics, alpha=0.1, kappa=0.1
trainer = GaussianLDAAliasTrainer(
    corpus, embeddings, vocab, 2, 0.1, 0.1
)
# Set training running
trainer.sample(10)
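The example above passes num_tables, alpha and kappa positionally. If you also want the model written out after each iteration, or want to control the number of Metropolis-Hastings steps, the save_path and mh_steps options described above can be supplied too. A sketch, assuming the constructor accepts them as keyword arguments (the path is illustrative):

trainer = GaussianLDAAliasTrainer(
    corpus, embeddings, vocab, 2, 0.1, 0.1,
    save_path="gaussian_lda_model",  # illustrative path; the model is written here after each iteration
    mh_steps=2,  # assumed keyword form of the documented mh_steps option
)
trainer.sample(10)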
File details
Details for the file gaussianlda-0.2.4.tar.gz.
File metadata
- Download URL: gaussianlda-0.2.4.tar.gz
- Upload date:
- Size: 49.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.1.3 requests-toolbelt/0.9.1 tqdm/4.45.0 CPython/3.5.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 5254bc8b81ffd00d0ae3bf4d6d435bb64f77b528bd6ca0831978b2b04995c41e
MD5 | bded5bd25d31539d1d0309f165c39821
BLAKE2b-256 | d4c2083c9aac3b0f50589ca4dd2056e826588b7b925e59c07e7a13d59551e9fe
File details
Details for the file gaussianlda-0.2.4-py3-none-any.whl.
File metadata
- Download URL: gaussianlda-0.2.4-py3-none-any.whl
- Upload date:
- Size: 57.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.1.3 requests-toolbelt/0.9.1 tqdm/4.45.0 CPython/3.5.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 03fc6a4618dbfe39f204a89116eaf60398810469d0a1c06a577fea0edc86e197
MD5 | d410ced78b3f0ce6f35cd7bb9fce36e5
BLAKE2b-256 | a18f0dc316110e3796e60c83c051073a90ede877b1d2afad8e7ef15065213f4d