tweetopic: Blazing Fast Topic modelling for Short Texts
:zap: Blazing Fast implementation of the Gibbs Sampling Dirichlet Mixture Model for topic modelling over short texts utilizing the power of :1234: Numpy and :snake: Numba.
The package uses the Movie Group Process algorithm described in Yin and Wang (2014).
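As a rough illustration of the algorithm (a minimal sketch, not tweetopic's actual implementation), the Movie Group Process repeatedly reassigns each document to a cluster ("table") with probability proportional to two factors from Yin and Wang (2014): how many documents the cluster already holds, and how well its word counts match the document's words. All names below are made up for the example:

```python
def gsdmm_score(doc, k, doc_counts, word_counts, total_words,
                n_docs, n_clusters, vocab_size, alpha=0.1, beta=0.1):
    """Unnormalized probability of assigning `doc` (a word -> count dict)
    to cluster `k`, with `doc` currently removed from all counts.

    Hypothetical helper, not part of tweetopic's API.
    """
    # Popularity term: clusters with more documents attract more documents.
    score = (doc_counts[k] + alpha) / (n_docs - 1 + n_clusters * alpha)
    # Similarity term: a product over every word occurrence in the document,
    # rewarding clusters whose members already use the same words.
    numerator = 1.0
    for word, count in doc.items():
        for j in range(count):
            numerator *= word_counts[k].get(word, 0) + beta + j
    denominator = 1.0
    doc_len = sum(doc.values())
    for i in range(doc_len):
        denominator *= total_words[k] + vocab_size * beta + i
    return score * numerator / denominator


# Example: two clusters, one dominated by "a", one by "b"; a document
# containing "a" twice should be drawn to the first cluster.
scores = [
    gsdmm_score({"a": 2}, k,
                doc_counts=[3, 3],
                word_counts=[{"a": 5}, {"b": 5}],
                total_words=[5, 5],
                n_docs=7, n_clusters=2, vocab_size=2)
    for k in (0, 1)
]
probs = [s / sum(scores) for s in scores]
# probs[0] ≈ 0.996 — the document is almost surely assigned to cluster 0.
```

In the actual sampler this reassignment is repeated for every document over many iterations; tweetopic accelerates these loops with Numba rather than running them in pure Python.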
Features
- Fast :zap:
- Scalable :collision:
- High consistency and coherence :dart:
- High quality topics :fire:
- Easy visualization and inspection :eyes:
🛠 Installation
You can install tweetopic from PyPI with the following command:
pip install tweetopic
👩💻 Usage
To create a model, import MovieGroupProcess from the package:
from tweetopic import MovieGroupProcess
# Creating a model with 30 clusters
mgp = MovieGroupProcess(n_clusters=30, alpha=0.1, beta=0.1)
You may fit the model with a stream of short texts:
mgp.fit(
    texts,
    n_iterations=1000,  # You may specify the number of iterations
    max_df=0.1,         # As well as parameters for the vectorizer
    min_df=15,
)
To examine the structure of the clusters, you can either look at the most frequently occurring words:
mgp.top_words(top_n=3)
-----------------------------------------------------------------
[
{'vaccine': 1011.0, 'coronavirus': 428.0, 'vaccines': 396.0},
{'afghanistan': 586.0, 'taliban': 509.0, 'says': 464.0},
{'man': 362.0, 'prison': 310.0, 'year': 288.0},
{'police': 567.0, 'floyd': 444.0, 'trial': 393.0},
{'media': 331.0, 'twitter': 321.0, 'facebook': 306.0},
...
{'pandemic': 432.0, 'year': 427.0, 'new': 422.0},
{'election': 759.0, 'trump': 573.0, 'republican': 527.0},
{'women': 91.0, 'heard': 84.0, 'depp': 76.0}
]
Or use rich visualizations provided by pyLDAvis:
mgp.visualize()
Note: You must install the optional dependencies if you intend to use pyLDAvis.
API reference
Limitations
tweetopic owes its efficiency and speed to exploiting the fact that the texts to be clustered are short. The number of unique terms in any document MUST be less than 256, and any term may appear at most 255 times in a single document.
As of now, these requirements are not enforced by the package; please be aware that violating them might cause strange behaviour.
Note: If there is demand, this restriction could be relaxed. Make sure to file an issue if you intend to use tweetopic for longer texts.
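Since the package does not enforce these limits itself, a simple pre-flight check along the following lines can guard against silent misbehaviour. This is a hypothetical helper sketched for illustration, not part of tweetopic's API; it assumes the corpus has already been tokenized into lists of terms:

```python
from collections import Counter


def check_corpus(tokenized_docs):
    """Raise ValueError if any document violates tweetopic's limits:
    fewer than 256 unique terms per document, and no term occurring
    more than 255 times within one document.

    Hypothetical helper, not part of tweetopic's API.
    """
    for i, tokens in enumerate(tokenized_docs):
        counts = Counter(tokens)
        if not counts:
            continue  # Empty documents trivially satisfy both limits.
        if len(counts) >= 256:
            raise ValueError(
                f"Document {i} has {len(counts)} unique terms (max 255)."
            )
        word, count = counts.most_common(1)[0]
        if count > 255:
            raise ValueError(
                f"Term {word!r} occurs {count} times in document {i} (max 255)."
            )
```

Running such a check before fitting turns the "strange behaviour" described above into an immediate, explicit error.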
Differences from the gsdmm package
- tweetopic is usually orders of magnitude faster than gsdmm thanks to its choice of data structures and Numba-accelerated loops. Surface-level benchmarking suggests that in many cases tweetopic performs the same task about 60 times faster.
- tweetopic supports texts where a term occurs multiple times; gsdmm only implements single-occurrence terms.
- gsdmm is no longer maintained.
🎓 References
- Yin, J., & Wang, J. (2014). A Dirichlet Multinomial Mixture Model-Based Approach for Short Text Clustering. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 233–242). Association for Computing Machinery.