
Community Topic - Topic Modelling Method


Community Topic

Introduction

  • What is Community Topic?

In this repository we present our novel topic modelling method, Community Topic, as a PyPI library. The algorithm mines communities of terms from term co-occurrence networks extracted from the documents. In addition to providing interpretable collections of terms as topics, the network representation provides a natural topic structure: the topics themselves form a network, so topic similarity can be inferred from the weights of the edges between them. Super-topics can be found by iteratively applying community detection on the topic network, grouping similar topics together; sub-topics can be found by applying community detection within a single topic community. This can be done dynamically, with the user or a conversational agent moving up and down the topic hierarchy as desired.
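The core idea can be illustrated with a toy example. This is not the library's own implementation: the tiny corpus, the document-level co-occurrence window, and the use of greedy modularity community detection are all illustrative assumptions.

```python
# Sketch: build a term co-occurrence network from documents, then treat
# detected communities of terms as topics.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# A made-up corpus of pre-tokenized documents (two finance, two sport).
docs = [
    ["market", "shares", "bank", "profit"],
    ["bank", "profit", "market", "economy"],
    ["match", "goal", "team", "player"],
    ["team", "player", "goal", "league"],
]

# Weight each edge by how often the two terms co-occur in a document.
graph = nx.Graph()
for doc in docs:
    for u, v in combinations(set(doc), 2):
        weight = graph.get_edge_data(u, v, {"weight": 0})["weight"]
        graph.add_edge(u, v, weight=weight + 1)

# Communities of terms in this network act as topics.
communities = greedy_modularity_communities(graph, weight="weight")
topics = [sorted(c) for c in communities]
print(topics)
```

On this toy corpus the finance terms and the sport terms end up in separate communities, i.e. two interpretable topics. The same idea applied recursively to a topic's sub-network yields sub-topics.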

  • What problem does it solve? & Who is it for?

Unfortunately, the most popular topic models in use today do not provide a suitable topic structure for navigating topics in this way, and state-of-the-art models based on neural networks suffer from many of the same drawbacks while requiring specialized hardware and many hours to train. This makes Community Topic an ideal topic modelling algorithm for both applied research and practical applications such as conversational agents.

Requirement & Installation

  • System requirements

    Python >= 3.6
    commodity hardware
    setuptools~=67.6.0
    spacy~=3.5.0
    numpy~=1.21.5
    gensim~=4.2.0
    networkx~=2.8.4
    igraph~=0.10.4
    
  • Installation

The easiest way to install CommunityTopic is via pip:

  pip install communitytopic

Datasets and Evaluation Metrics Used

We have used the following datasets for our experiments.

Name of the Dataset | Source       | Language
--------------------|--------------|------------------------------------------
BBC                 | BBC          | English
20Newsgroups        | 20Newsgroups | English
Reuters21578        | Reuters21578 | English
Europarl            | Europarl     | English, Italian, French, German, Spanish

We have also used the following metrics for our evaluation:

1. Coherence Measures: To compare different topic models, we use two coherence measures: c_v and c_npmi. Both measures have been shown to correlate with human judgements of topic quality, with c_v having the strongest correlation.

2. Diversity Measures

  • Proportion of unique words (PWU): Computes the proportion of unique words across the topics.
  • Average Pairwise Jaccard Diversity (PJD): Computes the average pairwise Jaccard distance between the topics.
  • Inverted Rank-Biased Overlap (IRBO): Computes the score of the rank-biased overlap over the topics.

3. Hierarchical Measures

  • Topic Specialization: measures the distance of a topic’s probability distribution over terms from the general probability distribution of all terms in the corpus given by their occurrence frequency. We expect topics at higher levels in the hierarchy, closer to the root, to be more general and less specialized, and topics further down the hierarchy to be more specialized.
  • Topic Affinity: measures the similarity between a super-topic and a set of sub-topics. We expect higher affinity between a parent topic and its children and lower affinity between a parent topic and sub-topics which are not its children.
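The two set-based diversity measures above are straightforward to sketch in plain Python. These are minimal illustrative implementations written for this document, not the evaluation code used in our experiments; `topics` is assumed to be a list of top-word lists, one per topic.

```python
from itertools import combinations


def proportion_unique_words(topics):
    """PWU: fraction of word slots across all topics filled by distinct words."""
    all_words = [word for topic in topics for word in topic]
    return len(set(all_words)) / len(all_words)


def average_pairwise_jaccard_diversity(topics):
    """PJD: mean Jaccard distance (1 - |A & B| / |A | B|) over all topic pairs."""
    distances = [
        1 - len(set(a) & set(b)) / len(set(a) | set(b))
        for a, b in combinations(topics, 2)
    ]
    return sum(distances) / len(distances)


# Toy example: two topics sharing one word ("bank").
topics = [["market", "bank", "profit"], ["team", "goal", "bank"]]
print(proportion_unique_words(topics))             # 5 distinct words out of 6
print(average_pairwise_jaccard_diversity(topics))  # 1 shared word, 5 in union
```

Higher values of both measures indicate more diverse (less redundant) topic sets.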

Getting Started (Try it out)

This is an example tutorial which finds the topics of the BBC dataset using the best combination of pre-processing settings and the Community Topic algorithm.

Step 1: Import the necessary classes from the library

from communitytopic import CommunityTopic
from communitytopic import PreProcessing

Step 2: Load the raw corpus; here we are using the BBC dataset.

with open("<Path-To-Dataset>/bbc_train.txt", "r", encoding="utf-8") as f:
    bbc_train = f.read()

with open("<Path-To-Dataset>/bbc_test.txt", "r", encoding="utf-8") as f:
    bbc_test = f.read()

Step 3: Perform pre-processing on the training and testing corpora

tokenized_bbc_train_sents, tokenized_bbc_train_docs, tokenized_bbc_test_docs, dictionary = PreProcessing.do_preprocessing(
        train=bbc_train,
        test=bbc_test,
        ner=1,
        pos_filter=3,
        phrases="npmi",
        phrase_threshold=0.35,
        language="en")

Step 4: Initialise the Community Topic model with the pre-processed data

community_topic = CommunityTopic(train_corpus=tokenized_bbc_train_sents, dictionary=dictionary)

Step 5: Fit the model

community_topic.fit()
Step 6: Get the topic words found by the algorithm

topic_words = community_topic.get_topics_words_topn(10)
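Assuming `get_topics_words_topn` returns one list of top words per topic, the result can be inspected like this. The sample values below are made up for illustration; the real output depends on the fitted model.

```python
# Hypothetical result shape: a list of topics, each a list of top words.
topic_words = [
    ["market", "shares", "bank", "profit"],
    ["team", "goal", "player", "match"],
]

# Print each topic on its own line.
for i, words in enumerate(topic_words, start=1):
    print(f"Topic {i}: {', '.join(words)}")
```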

API Usage

Following are the API functions exposed by this library:

Method                                                                 | Code
-----------------------------------------------------------------------|-------------------------------------------------
Fit the flat topic model                                               | .fit()
Fit the hierarchical topic model                                       | .fit_hierarchical()
Get flat topic words                                                   | .get_topics_words()
Get top n flat topic words                                             | .get_topics_words_topn(n=10)
Get flat topics as dictionary ids                                      | .get_topics()
Get hierarchical topic words                                           | .get_topic_words_hierarchical()
Get hierarchical topics as dictionary ids and an ig_graph of the topic | .get_topics_hierarchical()
Get the first n levels in the hierarchy                                | .get_n_level_topic_words_hierarchical(n_level=2)
Get hierarchical topic words in a tree-like dictionary format          | .get_hierarchy_tree

