
Efficient Numba-based implementation of the Gibbs Sampling Dirichlet Mixture Model.



tweetopic: Blazing Fast Topic modelling for Short Texts


:zap: Blazing Fast implementation of the Gibbs Sampling Dirichlet Mixture Model for topic modelling over short texts, utilizing the power of :1234: NumPy and :snake: Numba.
The package uses the Movie Group Process algorithm described in Yin and Wang (2014).
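
At its core, the Movie Group Process repeatedly reassigns each document to a cluster with probability proportional to how many documents the cluster already holds (smoothed by alpha) and how well the cluster's word counts match the document (smoothed by beta). The sketch below illustrates that conditional probability (equation 4 in Yin & Wang, 2014) in plain Python; it is only meant to clarify the idea and is not tweetopic's actual Numba implementation.

import numpy as np

def cluster_probabilities(doc, m, n_z, n_zw, alpha, beta, n_docs, vocab_size):
    """Unnormalized probability of assigning `doc` to each cluster.

    doc:  dict mapping term -> count for one document
    m:    number of documents currently in each cluster
    n_z:  total number of words currently in each cluster
    n_zw: per-cluster dicts mapping term -> count

    Illustrative sketch of eq. (4) in Yin & Wang (2014), not tweetopic's code.
    """
    n_clusters = len(m)
    doc_len = sum(doc.values())
    probs = np.zeros(n_clusters)
    for z in range(n_clusters):
        # "Rich get richer": clusters with more documents attract new ones, alpha smooths this
        prior = (m[z] + alpha) / (n_docs - 1 + n_clusters * alpha)
        # Clusters whose word distribution resembles the document are preferred, beta smooths this
        likelihood = 1.0
        for word, count in doc.items():
            for j in range(count):
                likelihood *= n_zw[z].get(word, 0) + beta + j
        for i in range(doc_len):
            likelihood /= n_z[z] + vocab_size * beta + i
        probs[z] = prior * likelihood
    return probs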

Features

  • Fast :zap:
  • Scalable :collision:
  • High consistency and coherence :dart:
  • High quality topics :fire:
  • Easy visualization and inspection :eyes:

🛠 Installation

You can install the package from PyPI with the following command:

pip install tweetopic

👩‍💻 Usage

To create a model you should import MovieGroupProcess from the package:

from tweetopic import MovieGroupProcess

# Creating a model with 30 clusters
mgp = MovieGroupProcess(n_clusters=30, alpha=0.1, beta=0.1)

You may fit the model with a stream of short texts:

mgp.fit(
    texts,
    n_iterations=1000, # You may specify the number of iterations
    max_df=0.1, # As well as parameters for the vectorizer.
    min_df=15
)
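
For instance, a minimal end-to-end run on a handful of toy documents could look like this (the documents and parameter values are made up purely for illustration):

toy_texts = [
    "new covid vaccine approved",
    "vaccine rollout starts next week",
    "team wins championship final",
    "star player scores in final",
]

# Two clusters and relaxed vectorizer settings, since the corpus is tiny
mgp = MovieGroupProcess(n_clusters=2, alpha=0.1, beta=0.1)
mgp.fit(toy_texts, n_iterations=100, max_df=1.0, min_df=1)
print(mgp.top_words(top_n=3))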

To examine the structure of the clusters, you can either look at the most frequently occurring words:

mgp.top_words(top_n=3)
-----------------------------------------------------------------

[
    {'vaccine': 1011.0, 'coronavirus': 428.0, 'vaccines': 396.0},
    {'afghanistan': 586.0, 'taliban': 509.0, 'says': 464.0},
    {'man': 362.0, 'prison': 310.0, 'year': 288.0},
    {'police': 567.0, 'floyd': 444.0, 'trial': 393.0},
    {'media': 331.0, 'twitter': 321.0, 'facebook': 306.0},
    ...
    {'pandemic': 432.0, 'year': 427.0, 'new': 422.0},
    {'election': 759.0, 'trump': 573.0, 'republican': 527.0},
    {'women': 91.0, 'heard': 84.0, 'depp': 76.0}
]
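
Since the return value is an ordinary list of {word: count} dictionaries, one per cluster, it can be inspected with plain Python, for example:

# Print a one-line summary per cluster from the output above
for i, words in enumerate(mgp.top_words(top_n=3)):
    print(f"Cluster {i}: " + ", ".join(words))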

Or use rich visualizations provided by pyLDAvis:

mgp.visualize()

PyLDAvis visualization

Note: You must install the optional dependencies if you intend to use pyLDAvis.
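
For example, pyLDAvis itself is available on PyPI (how the optional dependency is declared may vary between tweetopic versions):

pip install pyldavis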

API reference

Limitations

tweetopic is this efficient and fast because it exploits the fact that only short texts need to be clustered: the number of unique terms in any document MUST be less than 256, and any term may appear at most 255 times in a single document.
As of now, this requirement is not enforced by the package, so please be aware of it, as violating it might cause strange behaviour (a rough check is sketched after the note below).

Note: If there is demand for it, this restriction could be relaxed; please file an issue if you intend to use tweetopic for longer texts.
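
Until the restriction is enforced, a rough pre-flight check along these lines can help catch offending documents. It uses naive whitespace tokenization, which may differ from the vectorizer tweetopic uses internally, so treat it as an approximation:

from collections import Counter

def check_short_text_limits(texts):
    # Warn about documents that exceed tweetopic's short-text limits
    for i, text in enumerate(texts):
        counts = Counter(text.lower().split())
        if len(counts) >= 256:
            print(f"Document {i} has {len(counts)} unique terms (must be below 256)")
        if counts and max(counts.values()) > 255:
            print(f"Document {i} contains a term occurring more than 255 times")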

Differences from the gsdmm package

  • tweetopic is usually orders of magnitude faster than gsdmm thanks to its choice of data structures and Numba-accelerated loops. Surface-level benchmarking suggests that in many cases tweetopic performs the same task about 60 times faster.
  • tweetopic supports texts where a term occurs multiple times; gsdmm only handles single occurrences of each term.
  • gsdmm is no longer maintained.

🎓 References

  • Yin, J., & Wang, J. (2014). A Dirichlet Multinomial Mixture Model-Based Approach for Short Text Clustering. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 233–242). Association for Computing Machinery.
