Topic modeling using sentence_transformer
Project description
transformertopic
Topic Modeling using sentence embeddings. The procedure is:
- compute sentence embeddings
- reduce the dimensionality of these embeddings
- cluster them
- compute a human-readable representation of each cluster/topic
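The last two steps can be caricatured with a stdlib-only sketch, assuming hypothetical toy 2-D points that stand in for already-reduced sentence embeddings (the real pipeline uses a sentence-transformer for step 1, UMAP/t-SNE/PaCMAP for step 2, and HDBSCAN for step 3):

```python
from math import dist
from collections import Counter

# Hypothetical toy data: pretend these 2-D points are already-reduced
# sentence embeddings (the real pipeline computes them from the text).
docs = {
    "rates rise again": (0.1, 0.2),
    "bank holds rates": (0.2, 0.1),
    "derby ends in a goal": (5.0, 5.1),
}

# Step 3, toy stand-in for HDBSCAN: greedy distance-threshold clustering.
clusters = []
for text, p in docs.items():
    for c in clusters:
        if dist(p, docs[c[0]]) < 1.0:
            c.append(text)
            break
    else:
        clusters.append([text])

# Step 4: represent each cluster by its most frequent word.
for c in clusters:
    counts = Counter(w for text in c for w in text.split())
    print(counts.most_common(1)[0][0], c)
```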
This is inspired by the topic modeling procedure described by Maarten Grootendorst, who also provides his own implementation, BERTopic.
Usage
Choose a reducer
from transformertopic.dimensionReducers import PacmapEmbeddings, UmapEmbeddings, TsneEmbeddings
#reducer = PacmapEmbeddings()
#reducer = TsneEmbeddings()
reducer = UmapEmbeddings(umapNNeighbors=13)
Init and run the model
from transformertopic import TransformerTopic
tt = TransformerTopic(dimensionReducer=reducer, hdbscanMinClusterSize=20)
tt.train(documentsDataFrame=pandasDf, dateColumn='date', textColumn='coref_text', copyOtherColumns=True)
print(f"Found {tt.nTopics} topics")
print(tt.df.info())
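The train call above expects one document per row; a minimal sketch of the input frame, with hypothetical toy data (the column names 'date' and 'coref_text' match the call above):

```python
import pandas as pd

# Hypothetical toy corpus showing the input shape train() expects:
# one row per document, with a date column and a text column.
pandasDf = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03"]),
    "coref_text": [
        "The central bank raised interest rates.",
        "Inflation figures surprised analysts again.",
        "The football season opened with a derby win.",
    ],
})
print(pandasDf.shape)  # → (3, 2)
```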
Show sizes of largest topics
N = 10
topNtopics = tt.showTopicSizes(N)
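Topic sizes here presumably amount to counting documents per cluster; a stdlib sketch of that bookkeeping, with hypothetical HDBSCAN-style labels where -1 marks outlier documents:

```python
from collections import Counter

# Hypothetical cluster labels as HDBSCAN would assign them;
# -1 marks noise/outlier documents that belong to no topic.
labels = [0, 0, 1, 2, 1, 0, -1, 2, 0, -1, 1]

# Topic sizes: count documents per topic, ignoring noise (-1).
sizes = Counter(l for l in labels if l != -1)
top_n = sizes.most_common(2)  # the N largest topics
print(top_n)  # → [(0, 4), (1, 3)]
```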
Choose a cluster representator and show wordclouds for the biggest topics
from transformertopic.clusterRepresentators import TextRank, Tfidf, KMaxoids
representator = Tfidf()
# representator = TextRank()
tt.showWordclouds(topNtopics, clusterRepresentator=representator)
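A rough stdlib sketch of the idea behind the Tfidf representator (toy data and toy scoring, not the library's code): treat each cluster as one document and rank words by in-cluster frequency weighted by cross-cluster rarity.

```python
import math
from collections import Counter

# Hypothetical clusters, each flattened to one string of its documents.
clusters = {
    0: "bank rates bank inflation rates bank",
    1: "football derby season football goal",
}

def top_words(cluster_id, clusters, k=3):
    """Rank words in one cluster by a TF-IDF-style score."""
    docs = {cid: text.split() for cid, text in clusters.items()}
    n = len(docs)
    doc_freq = Counter()  # in how many clusters each word appears
    for words in docs.values():
        doc_freq.update(set(words))
    tf = Counter(docs[cluster_id])  # word counts within this cluster
    scores = {w: tf[w] * math.log((1 + n) / (1 + doc_freq[w])) for w in tf}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

print(top_words(0, clusters))  # → ['bank', 'rates', 'inflation']
```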
Download files
Download the file for your platform.
Source Distribution
transformertopic-1.0.tar.gz (10.7 kB)
Built Distribution
Hashes for transformertopic-1.0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | c71d7a8657bf66193025bde2174870dee8e17ad3b9190c042ce97fd7929b40fd
MD5 | 96dd47af197be613d901c763f0297bae
BLAKE2b-256 | 6434ef6e6f7f1176ede5b1b1bdea15497464b738b687de8c5748871ad5fe24b6