
TopMost: A Topic Modeling System Toolkit



TopMost provides a complete lifecycle of topic modeling, including datasets, preprocessing, models, training, and evaluation. It covers the most popular topic modeling scenarios: basic, dynamic, hierarchical, and cross-lingual topic modeling.

Check our survey paper on neural topic models, accepted to Artificial Intelligence Review: A Survey on Neural Topic Models: Methods, Applications, and Challenges.

If you use TopMost, please cite it as:
@article{wu2023topmost,
    title={Towards the TopMost: A Topic Modeling System Toolkit},
    author={Wu, Xiaobao and Pan, Fengjun and Luu, Anh Tuan},
    journal={arXiv preprint arXiv:2309.06908},
    year={2023}
}

@article{wu2023survey,
    title={A Survey on Neural Topic Models: Methods, Applications, and Challenges},
    author={Wu, Xiaobao and Nguyen, Thong and Luu, Anh Tuan},
    journal={Artificial Intelligence Review},
    url={https://doi.org/10.1007/s10462-023-10661-7},
    year={2024},
    publisher={Springer}
}

Overview

TopMost offers the following topic modeling scenarios with models, evaluation metrics, and datasets:

https://github.com/BobXWu/TopMost/raw/main/docs/source/_static/architecture.svg

Scenario: Basic Topic Modeling
    Evaluation Metrics: TC, TD, Clustering, Classification
    Datasets: 20NG, IMDB, NeurIPS, ACL, NYT, Wikitext-103

Scenario: Hierarchical Topic Modeling
    Evaluation Metrics: TC over levels, TD over levels, Clustering over levels, Classification over levels
    Datasets: 20NG, IMDB, NeurIPS, ACL, NYT, Wikitext-103

Scenario: Dynamic Topic Modeling
    Evaluation Metrics: TC over time slices, TD over time slices, Clustering, Classification
    Datasets: NeurIPS, ACL, NYT

Scenario: Cross-lingual Topic Modeling
    Evaluation Metrics: TC (CNPMI), TD over languages, Classification (intra- and cross-lingual)
    Datasets: ECNews, Amazon Review, Rakuten

Quick Start

Install TopMost

Install TopMost with pip:

$ pip install topmost

Here we use FASTopic to get the top words of discovered topics (top_words) and the topic distributions of documents (doc_topic_dist). The preprocessing steps are configurable; see our documentation.

import topmost
from topmost.data import RawDataset
from topmost.preprocessing import Preprocessing
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))['data']
preprocessing = Preprocessing(vocab_size=10000, stopwords='English')

device = 'cuda' # or 'cpu'
dataset = RawDataset(docs, preprocessing, device=device)

trainer = topmost.trainers.FASTopicTrainer(dataset, verbose=True)
top_words, doc_topic_dist = trainer.train()

new_docs = [
    "This is a document about space, including words like space, satellite, launch, orbit.",
    "This is a document about Microsoft Windows, including words like windows, files, dos."
]

new_theta = trainer.test(new_docs)
print(new_theta.argmax(1))
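The trainer returns topic distributions as a documents-by-topics matrix of proportions, and calling .argmax(1) picks the most probable topic index for each document. A standalone toy illustration of this step (NumPy only, independent of TopMost):

```python
import numpy as np

# Toy document-topic distribution: 2 documents over 3 topics.
theta = np.array([
    [0.1, 0.7, 0.2],   # document 0 mostly belongs to topic 1
    [0.6, 0.3, 0.1],   # document 1 mostly belongs to topic 0
])

# argmax over the topic axis gives each document's most probable topic.
print(theta.argmax(1))  # → [1 0]
```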

Usage

Download a preprocessed dataset

import topmost
from topmost.data import download_dataset

download_dataset('20NG', cache_path='./datasets')

Train a model

device = "cuda" # or "cpu"

# load a preprocessed dataset
dataset = topmost.data.BasicDataset("./datasets/20NG", device=device, read_labels=True)
# create a model
model = topmost.models.ProdLDA(dataset.vocab_size)
model = model.to(device)

# create a trainer
trainer = topmost.trainers.BasicTrainer(model, dataset)

# train the model
top_words, train_theta = trainer.train()
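train_theta holds one row per training document, each a distribution over topics, so rows should be non-negative and sum to 1. A quick standalone sanity check on a toy matrix (NumPy only; not part of the TopMost API):

```python
import numpy as np

# Toy stand-in for train_theta: 2 documents over 3 topics.
theta = np.array([
    [0.2, 0.5, 0.3],
    [0.9, 0.05, 0.05],
])

assert np.all(theta >= 0), "topic proportions must be non-negative"
assert np.allclose(theta.sum(axis=1), 1.0), "each row must sum to 1"
print("valid document-topic distributions")
```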

Evaluate

# evaluate topic diversity
TD = topmost.evaluations.compute_topic_diversity(top_words)

# get doc-topic distributions of testing samples
test_theta = trainer.test(dataset.test_data)
# evaluate clustering
clustering_results = topmost.evaluations.evaluate_clustering(test_theta, dataset.test_labels)
# evaluate classification
classification_results = topmost.evaluations.evaluate_classification(train_theta, test_theta, dataset.train_labels, dataset.test_labels)
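Topic diversity (TD) measures how distinct the discovered topics are: the fraction of unique words among the top words of all topics, where 1.0 means no word is shared between topics. A minimal sketch of the idea, not TopMost's actual implementation, assuming top_words is a list of space-separated strings as in the examples above:

```python
def topic_diversity(top_words):
    """Fraction of unique words across all topics' top words."""
    all_words = [w for topic in top_words for w in topic.split()]
    return len(set(all_words)) / len(all_words)

toy_topics = [
    "space orbit launch",
    "windows files dos",
    "space files mail",   # overlaps with the first two topics
]
print(topic_diversity(toy_topics))  # 7 unique words out of 9
```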

Test new documents

import torch
from topmost.preprocessing import Preprocessing

new_docs = [
    "This is a new document about space, including words like space, satellite, launch, orbit.",
    "This is a new document about Microsoft Windows, including words like windows, files, dos."
]

preprocessing = Preprocessing()
new_parsed_docs, new_bow = preprocessing.parse(new_docs, vocab=dataset.vocab)
new_theta = trainer.test(torch.as_tensor(new_bow, device=device).float())
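preprocessing.parse vectorizes the new documents into a bag-of-words matrix aligned with the training vocabulary (dataset.vocab), which is what trainer.test expects. Conceptually, each document becomes a vector of word counts over that fixed vocabulary; a toy illustration of the idea (not TopMost's implementation):

```python
def to_bow(doc, vocab):
    """Count occurrences of each vocabulary word in one document."""
    tokens = doc.lower().split()
    return [tokens.count(word) for word in vocab]

vocab = ["space", "orbit", "windows", "files"]
print(to_bow("Space orbit space launch", vocab))  # [2, 1, 0, 0]
```

Words outside the vocabulary (like "launch" here) are simply dropped, which is why test documents should share vocabulary with the training data.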

Installation

Stable release

To install TopMost, run this command in the terminal:

$ pip install topmost

This is the preferred method to install TopMost, as it will always install the most recent stable release.

From sources

The sources for TopMost can be downloaded from the GitHub repository:

$ pip install git+https://github.com/bobxwu/TopMost.git

Tutorials

We provide tutorials for different usages, each available as a notebook on GitHub:

  • Quickstart

  • How to preprocess datasets

  • How to train and evaluate a basic topic model

  • How to train and evaluate a hierarchical topic model

  • How to train and evaluate a dynamic topic model

  • How to train and evaluate a cross-lingual topic model

Disclaimer

This library includes some datasets for demonstration. If you are a dataset owner who wants to exclude your dataset from this library, please contact Xiaobao Wu.

Authors

Xiaobao Wu

Fengjun Pan

Contributors


Acknowledgments

  • Icon by Flat-icons-com.

  • If you want to add any models to this package, we welcome your pull requests.

  • If you encounter any problem, please either directly contact Xiaobao Wu or leave an issue in the GitHub repo.
