Topic modeling using sentence-transformers
transformertopic
Topic Modeling using sentence embeddings. The procedure is:
- compute sentence embeddings
- compute dimension reduction of these
- cluster them
- compute a human-readable representation of each cluster/topic
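The four steps above can be sketched end to end. The package itself uses sentence-transformers embeddings, UMAP/PacMAP/t-SNE reduction, and HDBSCAN clustering; in this illustrative sketch, TF-IDF vectors, PCA, and KMeans from scikit-learn stand in for those components so the flow is runnable on its own.

```python
# Illustrative sketch of the four-step pipeline; TF-IDF, PCA and KMeans
# are stand-ins for sentence embeddings, UMAP and HDBSCAN.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

sentences = [
    "the cat sat on the mat", "dogs and cats are pets",
    "stocks fell on monday", "the market rallied today",
]

# 1. compute (stand-in) sentence embeddings
X = TfidfVectorizer().fit_transform(sentences).toarray()
# 2. reduce their dimension
X2 = PCA(n_components=2).fit_transform(X)
# 3. cluster the reduced vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)
# 4. represent each cluster by its most frequent words (toy representator)
for topic in sorted(set(labels)):
    words = Counter(
        w for s, l in zip(sentences, labels) if l == topic for w in s.split()
    )
    print(topic, words.most_common(3))
```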
This is inspired by the topic modeling procedure described by Maarten Grootendorst, who also makes his own implementation, BERTopic, available.
Usage
Choose a reducer
from transformertopic.dimensionReducers import PacmapEmbeddings, UmapEmbeddings, TsneEmbeddings
# reducer = PacmapEmbeddings()
# reducer = TsneEmbeddings()
reducer = UmapEmbeddings(umapNNeighbors=13)
Init and run the model
from transformertopic import TransformerTopic
tt = TransformerTopic(dimensionReducer=reducer, hdbscanMinClusterSize=20)
tt.train(documentsDataFrame=pandasDf, dateColumn='date', textColumn='coref_text', copyOtherColumns=True)
print(f"Found {tt.nTopics} topics")
print(tt.df.info())
Show sizes of largest topics
N = 10
topNtopics = tt.showTopicSizes(N)
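A topic's size is simply the number of documents assigned to that cluster label. A minimal stand-alone illustration with `collections.Counter` (the label list here is hypothetical, not output of the package):

```python
# Topic sizes = number of documents per cluster label.
from collections import Counter

topic_labels = [0, 2, 0, 1, 0, 2, 1, 0]  # hypothetical per-document topic labels
sizes = Counter(topic_labels)
top2 = sizes.most_common(2)
print(top2)  # → [(0, 4), (2, 2)]
```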
Choose a cluster representator and show wordclouds for the biggest topics
from transformertopic.clusterRepresentators import TextRank, Tfidf, KMaxoids
representator = Tfidf()
# representator = TextRank()
tt.showWordclouds(topNtopics, clusterRepresentator=representator)
Hashes for transformertopic-1.2.linux-x86_64.tar.gz
Algorithm | Hash digest
---|---
SHA256 | e5b33c2d1c3be8f17f9cb920f837f9fe5cdd8b7accfeb7d0d27825cb63e01124
MD5 | 10ef72384d040bc711404c853fdae84b
BLAKE2b-256 | 378f747ad45bed3edf8916cd1fb80f7da16c16b71880884980549cf91cb5d20a