
Generates insights from text pieces such as documents or articles

Project description

InsiGEN

A state-of-the-art NLP model that generates insights from a piece of text

Features

  • Generating a distribution of generalized topics covered in a document or article
  • Extracting contextualized keywords from the text piece
  • Generating a summary of the text
  • Trained on a corpus of 6,000 Wikipedia articles for generalized topics
  • Can be trained on custom data for more specific topics

How to use the model

  • Clone this repository
  • Install the dependencies from requirements.txt
  • Basic Usage:

Get a topic distribution

from insigen import insigen
model = insigen()
topic_distribution = model.get_distribution(document)

Important parameters for insigen:

  • use_pretrained_embeds: Set this parameter to False to train your own embeddings. Further parameters must then be specified for training.

  • embed_file: This parameter should be used when you've trained your own embeddings. Specify the path to your sentence embeddings.

  • dataset_file: This parameter should be used when you've trained your own embeddings. Specify the path to your own dataset.

  • embedding_model: (Default: all-mpnet-base-v2) Insigen uses Sentence-BERT models to train its embeddings. Valid models are:

             all-distilroberta-v1
             all-mpnet-base-v2
             all-MiniLM-L12-v2
             all-MiniLM-L6-v2
    

Important parameters for get_distribution

  • document: The text for which the topic distribution is to be generated
  • metric: Defines how topics are selected. Set to 'threshold' to fetch all topics above a similarity threshold, or 'max' (the default) to fetch the top n topics.
  • max_count: Used with the 'max' metric; the number of top topics to fetch. Defaults to 1.
  • threshold: Used with the 'threshold' metric; the similarity score above which topics are fetched. Defaults to 0.5.
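The two selection rules can be illustrated with a toy similarity map in plain Python (the topic names and scores here are made up; get_distribution computes the real scores internally):

```python
# Toy topic-similarity scores (hypothetical values, not insigen output)
scores = {"sports": 0.72, "politics": 0.55, "science": 0.31}

def select_topics(scores, metric="max", max_count=1, threshold=0.5):
    """Mimic the 'max' and 'threshold' selection rules described above."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if metric == "threshold":
        return [topic for topic, score in ranked if score >= threshold]
    return [topic for topic, score in ranked[:max_count]]

print(select_topics(scores))                                     # ['sports']
print(select_topics(scores, metric="threshold", threshold=0.5))  # ['sports', 'politics']
```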

Get keyword frequency

frequency = model.get_keyword_frequency(document, min_len=2, max_len=3)

# Generate a wordcloud using the frequency
cloud = model.generate_wordcloud(frequency)

Important parameters for get_keyword_frequency

  • document: The text for which the keyword frequency is to be generated
  • frequency_threshold: The minimum frequency an n-gram must have to be considered a keyword (min_len and max_len adjust the length of the n-grams extracted from the text)

Generate Summary

summary = model.generate_summary(article, topic_match=relevant_topic)

# To get a list of available topics:
# print(model.unique_topics)

Important parameters for generate_summary

  • document: The text for which the summary is to be generated
  • topic_match: A topic to match against the text. Sentences more related to this topic are given additional weight. Use model.unique_topics to get the list of topics that can match. Defaults to None, in which case no topic weighting is applied.
  • topic_weight: Weight of the topic similarity score. Increasing this parameter results in a more topic-oriented summary. Defaults to 1.
  • similarity_weight: Weight of the sentence similarity score. Increasing this parameter results in extracting more closely related sentences. Defaults to 1.
  • position_weight: Weight of sentence position. Increasing this parameter results in a more position-oriented summary, i.e. sentences that appear early in the document are favoured. Defaults to 10.
  • num_sentences: The number of sentences to include in the summary. Defaults to 10.
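One way the three weights might combine into a single sentence score is sketched below. This is an illustration of the weighting idea, not the exact formula inside generate_summary:

```python
def sentence_score(topic_sim, sent_sim, position, n_sentences,
                   topic_weight=1, similarity_weight=1, position_weight=10):
    """Combine the three weighted signals described above.
    Earlier sentences get a larger position score (position 0 = first)."""
    position_score = (n_sentences - position) / n_sentences
    return (topic_weight * topic_sim
            + similarity_weight * sent_sim
            + position_weight * position_score)

# With the default position_weight of 10, an early sentence with modest
# similarity can outrank a later, more similar one:
early = sentence_score(topic_sim=0.2, sent_sim=0.3, position=0, n_sentences=10)
late = sentence_score(topic_sim=0.6, sent_sim=0.8, position=8, n_sentences=10)
print(early > late)  # True
```

Lowering position_weight (or raising topic_weight / similarity_weight) shifts the balance back toward content similarity.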

Train on your dataset

embeddings = model.train_embeds(dataset)

Important parameters for train_embeds

  • dataset: A pandas DataFrame containing the dataset to train on
  • batch_size: The batch size used to split the dataset during training. Defaults to 32.
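The batching behaviour can be sketched with plain Python (shown on a list of rows for simplicity; train_embeds itself works on a pandas DataFrame):

```python
def iter_batches(rows, batch_size=32):
    """Yield successive slices of the dataset, as a batched
    embedding-training loop might consume them."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

rows = [f"article {i}" for i in range(70)]
sizes = [len(batch) for batch in iter_batches(rows, batch_size=32)]
print(sizes)  # [32, 32, 6]
```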

How does the model work?

Topic Distribution

  • Create embedded vectors of labelled training articles

  • Find the mean embedding of each topic in the corpus to create topic vectors and form clusters of articles

  • Use KNN to place new articles in the topic vector clusters

  • Chunk each article and find the relevant topic from the topic vectors
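The mean-vector and nearest-topic steps can be sketched in plain Python with toy 2-D embeddings (insigen uses KNN over high-dimensional Sentence-BERT embeddings, so this single-nearest-topic version is a simplification):

```python
import math

def mean_vector(vectors):
    """Average a list of embedding vectors into one topic vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Two labelled "article embeddings" per topic (toy 2-D values)
topic_vectors = {
    "sports": mean_vector([[1.0, 0.1], [0.9, 0.0]]),
    "science": mean_vector([[0.1, 1.0], [0.0, 0.9]]),
}

new_article = [0.8, 0.2]  # embedding of an unseen article
best = max(topic_vectors, key=lambda t: cosine(new_article, topic_vectors[t]))
print(best)  # sports
```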

Keyword extraction

  • N-grams and keywords are filtered from the text
  • Contextually similar keywords to the article are given higher scoring
  • A threshold is applied to the filtered list of keywords to get the final list of keywords
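The n-gram filtering step (without the contextual scoring) can be sketched as follows; the tokenisation here is a naive whitespace split, purely for illustration:

```python
from collections import Counter

def ngram_frequency(text, min_len=2, max_len=3, frequency_threshold=2):
    """Count n-grams of length min_len..max_len and keep only those
    appearing at least frequency_threshold times."""
    words = text.lower().split()
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return {gram: c for gram, c in counts.items() if c >= frequency_threshold}

text = "topic model topic model topic model training"
print(ngram_frequency(text, min_len=2, max_len=2))
# {'topic model': 3, 'model topic': 2}
```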

Summary Extraction

  • A similarity matrix is built for the sentences in the text, and the PageRank algorithm is run over it to score the sentences
  • Additionally, sentences are scored based on their position in the text and their similarity to a relevant topic
  • Top N sentences from the similarity matrix are extracted to create a summary.
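The PageRank step can be sketched as a power iteration over a row-normalised sentence similarity matrix, TextRank-style (toy 3-sentence matrix; the real model scores real sentence embeddings):

```python
def pagerank(sim, damping=0.85, iters=50):
    """Power iteration over a row-normalised similarity matrix,
    as in TextRank-style extractive summarisation."""
    n = len(sim)
    # Normalise each row so outgoing weights sum to 1
    norm = [[v / (sum(row) or 1) for v in row] for row in sim]
    ranks = [1.0 / n] * n
    for _ in range(iters):
        ranks = [(1 - damping) / n
                 + damping * sum(norm[j][i] * ranks[j] for j in range(n))
                 for i in range(n)]
    return ranks

# Toy similarity matrix: sentence 0 is strongly similar to both others
sim = [[0.0, 0.9, 0.8],
       [0.9, 0.0, 0.1],
       [0.8, 0.1, 0.0]]
ranks = pagerank(sim)
top = sorted(range(len(sim)), key=lambda i: ranks[i], reverse=True)
print(top[0])  # 0  (the most connected sentence ranks highest)
```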

Project details


Download files

Source Distribution

insigen-0.1.1.tar.gz (22.9 kB)

Built Distribution

insigen-0.1.1-py3-none-any.whl (19.9 kB)

File details

Details for the file insigen-0.1.1.tar.gz.

File metadata

  • Download URL: insigen-0.1.1.tar.gz
  • Upload date:
  • Size: 22.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.4

File hashes

Hashes for insigen-0.1.1.tar.gz
Algorithm Hash digest
SHA256 21f89be398dfee9eb7ef5e4566337d5565f91a7d876d5e32602121a8e3202568
MD5 6c14d73c7a94622853b92bb6b87e37d7
BLAKE2b-256 028757539db97775157fd130ced91bb1b4fe80acd07c4eeb1932fa0b6059caa1


File details

Details for the file insigen-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: insigen-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 19.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.4

File hashes

Hashes for insigen-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 6805268043019c7eba4c8f6a02bf012c2d706c9d3fea9f8167062fa715f30d29
MD5 9bd72e53a0b2d22e6e8aa33b808c81a3
BLAKE2b-256 bb1b46c275021f79fa84ffdb9848d9f9b7c648823a6526fbaf3932e96f6d04b4

