Biterm Topic Model
Bitermplus implements the Biterm Topic Model (BTM) for short texts introduced by Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. It is essentially a Cythonized version of the original BTM implementation. The package can also compute perplexity, semantic coherence, and entropy metrics.
Development
Please note that bitermplus is under active development. Refer to the documentation to stay up to date.
Requirements
- cython
- numpy
- pandas
- scipy
- scikit-learn
- tqdm
Setup
Linux and Windows
There should be no issues with installing bitermplus on these operating systems. You can install the package directly from PyPI:
pip install bitermplus
Or from this repo:
pip install git+https://github.com/maximtrp/bitermplus.git
macOS
First, you need to install Xcode Command Line Tools and Homebrew. Then, install libomp using brew:
xcode-select --install
brew install libomp
pip3 install bitermplus
If you encounter the following error related to libomp (fatal error: 'omp.h' file not found), run the following command in the console:
brew info libomp
You should see the following output:
libomp: stable 15.0.5 (bottled) [keg-only]
LLVM's OpenMP runtime library
https://openmp.llvm.org/
/opt/homebrew/Cellar/libomp/15.0.5 (7 files, 1.6MB)
Poured from bottle on 2022-11-19 at 12:16:49
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/libomp.rb
License: MIT
==> Dependencies
Build: cmake ✘, lit ✘
==> Caveats
libomp is keg-only, which means it was not symlinked into /opt/homebrew,
because it can override GCC headers and result in broken builds.
For compilers to find libomp you may need to set:
export LDFLAGS="-L/opt/homebrew/opt/libomp/lib"
export CPPFLAGS="-I/opt/homebrew/opt/libomp/include"
==> Analytics
install: 192,197 (30 days), 373,389 (90 days), 1,285,192 (365 days)
install-on-request: 24,388 (30 days), 48,013 (90 days), 164,666 (365 days)
build-error: 0 (30 days)
Export LDFLAGS and CPPFLAGS as suggested in the brew output:
export LDFLAGS="-L/opt/homebrew/opt/libomp/lib"
export CPPFLAGS="-I/opt/homebrew/opt/libomp/include"
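After exporting these variables, you may need to reinstall bitermplus in the same shell session so that the build picks them up. A minimal sketch: the --force-reinstall and --no-cache-dir flags simply force pip to rebuild the package from source instead of reusing a cached build.
pip3 install --force-reinstall --no-cache-dir bitermplus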
Example
Model fitting
import bitermplus as btm
import numpy as np
import pandas as pd
# IMPORTING DATA
df = pd.read_csv(
    'dataset/SearchSnippets.txt.gz', header=None, names=['texts'])
texts = df['texts'].str.strip().tolist()
# PREPROCESSING
# Obtaining terms frequency in a sparse matrix and corpus vocabulary
X, vocabulary, vocab_dict = btm.get_words_freqs(texts)
tf = np.array(X.sum(axis=0)).ravel()
# Vectorizing documents
docs_vec = btm.get_vectorized_docs(texts, vocabulary)
docs_lens = list(map(len, docs_vec))
# Generating biterms
biterms = btm.get_biterms(docs_vec)
# INITIALIZING AND RUNNING MODEL
model = btm.BTM(
    X, vocabulary, seed=12321, T=8, M=20, alpha=50/8, beta=0.01)
model.fit(biterms, iterations=20)
p_zd = model.transform(docs_vec)
# METRICS
perplexity = btm.perplexity(model.matrix_topics_words_, p_zd, X, 8)
coherence = btm.coherence(model.matrix_topics_words_, X, M=20)
# or
perplexity = model.perplexity_
coherence = model.coherence_
# LABELS
model.labels_
# or
btm.get_docs_top_topic(texts, model.matrix_docs_topics_)
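The fitted model can also be saved for later use. Below is a minimal sketch relying on Python's standard pickle module, assuming the model object supports pickling; the file name model.pkl is just an example.
import pickle

# Save the fitted model to disk
with open('model.pkl', 'wb') as file:
    pickle.dump(model, file)

# Load it back later
with open('model.pkl', 'rb') as file:
    model = pickle.load(file)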
Results visualization
You need to install tmplot first.
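It is distributed on PyPI, so it can be installed with pip:
pip install tmplot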
import tmplot as tmp
tmp.report(model=model, docs=texts)
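If you prefer a plain-text summary to an interactive report, the topics-words matrix of the fitted model can be inspected directly. This is just a sketch built on numpy and the vocabulary array obtained during preprocessing; it assumes matrix_topics_words_ is laid out as topics × words, consistent with how it is passed to btm.perplexity above, and the number of top words (10) is arbitrary.
import numpy as np

top_n = 10  # number of top words to print per topic
words = np.asarray(vocabulary)
for topic_idx, word_probs in enumerate(model.matrix_topics_words_):
    top_words = words[np.argsort(word_probs)[::-1][:top_n]]
    print(f'Topic {topic_idx}:', ', '.join(top_words))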
Tutorial
There is a tutorial in the documentation that covers the important steps of topic modeling, including stability measures and results visualization.