
Unsupervised keyword extraction in Python

Jump to: Methods • Algorithm • Usage • Visuals • To-Do

kwx is a toolkit for unsupervised keyword extraction based on Latent Dirichlet Allocation and Google's BERT. It provides a multilingual suite of methods to process texts and then extract and analyze keywords from the resulting corpus. A unique focus is allowing users to decide which words to exclude from outputs, letting them apply their own intuitions to fine-tune the modeling process.

For a thorough overview of the process and techniques see the Google slides, and reference the documentation for explanations of the models and visualization methods. Also see kwx.languages for language dependencies.

Installation via PyPI

pip install kwx
import kwx

Methods

LDA

Latent Dirichlet Allocation is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. In the case of kwx, documents or text entries are posited to be a mixture of a given number of topics, and the presence of each word in a text body comes from its relation to these derived topics.

Although not as statistically strong as the following models, LDA provides quick results that are suitable for many applications.

BERT

Bidirectional Encoder Representations from Transformers derives representations of words by running NLP models over open-source Wikipedia data. These representations can then be leveraged to derive corpus topics.

LDA with BERT embeddings

LDA and BERT representations are combined via kwx.autoencoder.

Other

The user can also choose to simply query the most common words from a text corpus, or to compute TFIDF (Term Frequency Inverse Document Frequency) keywords: those that are unique in a text body in comparison to another. The former method is used in kwx as a baseline to check model efficacy, and the latter is a useful baseline when a user has another text or corpus to compare the target corpus against.

Algorithm

The basic structure of kwx's machine learning based keyword extraction algorithms is the following:

  • The user inputs a desired number of keywords
  • The user inputs a number of topics to use, or this is determined by optimizing topic coherence and overlap across a range of potential topic numbers
  • The texts are fully cleaned and tokenized
  • Topics are derived for the text corpus
  • The prevalence of topics in the text corpus is found
    • For example: topic 1 is 25% coherent to the texts, topic 2 45%, and topic 3 30%
    • These percentages come from averaging topic coherence across all texts
  • Words are selected from the derived topics based on their coherence to the text body
    • If a word has already been selected, then the next word in the topic will be chosen
    • From the above example: the best 25%, 45% and 30% of words from topics 1-3 are selected
    • Words are selected from less coherent topics first (common words come from weakly coherent topics, and unique words come from those with strong coherence)
  • The user is presented the extracted keywords and asked if they're appropriate
    • They can then indicate words to be removed and replaced
    • Keywords are finalized when the user indicates that no more words need to be removed
  • Optionally: the keywords are put into a text file, and this along with desired visuals is saved into a directory or zipped
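The proportional selection step above can be sketched in plain Python. The helper name `select_keywords`, the word lists, and the prevalence values are all hypothetical, chosen to mirror the 25%/45%/30% example: each topic contributes words in proportion to its prevalence, less coherent topics go first, and already-selected words are skipped.

```python
# A sketch of the proportional keyword-selection step; the data and the
# helper are hypothetical, made up to mirror the example in the text.
def select_keywords(topics, prevalence, num_keywords):
    """Pick words from each topic in proportion to its prevalence,
    skipping words that were already selected."""
    # Less coherent (lower prevalence) topics contribute first.
    order = sorted(range(len(topics)), key=lambda i: prevalence[i])
    quotas = [round(p * num_keywords) for p in prevalence]
    selected = []
    for i in order:
        taken = 0
        for word in topics[i]:
            if taken == quotas[i]:
                break
            if word not in selected:  # duplicate: move to the next word
                selected.append(word)
                taken += 1
    return selected[:num_keywords]

topics = [
    ["flight", "gate", "delay"],            # topic 1, 25% prevalent
    ["service", "crew", "flight", "food"],  # topic 2, 45% prevalent
    ["bag", "claim", "delay", "belt"],      # topic 3, 30% prevalent
]
keywords = select_keywords(topics, prevalence=[0.25, 0.45, 0.30], num_keywords=10)
print(keywords)
```

Note how "flight" and "delay" each appear in two topics but are selected only once, with the later topic moving on to its next word.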

Usage

Keyword extraction can be useful to analyze surveys, tweets, other kinds of social media posts, research papers, and further classes of texts. examples.extract_kws provides an example of how to use kwx by deriving keywords from tweets in the Kaggle Twitter US Airline Sentiment dataset.

The following pseudocode presents a brief outline of using kwx to derive keywords from a text corpus:

from kwx.model import extract_kws
from kwx.utils import prepare_data

input_language = "english"
num_keywords = 10
num_topics = 10
ignore_words = ["words", "user", "knows", "are", "not", "wanted"]

# Arguments from examples.extract_kws
text_corpus = prepare_data(
    data='df-or-csv/xlsx-path',
    target_cols='cols-where-texts-are',
    input_language=input_language,
    min_freq=2, # remove infrequent words
    min_word_len=4, # remove small words
    sample_size=1, # sample size for testing
)

bert_kws = extract_kws(
    method='BERT',
    text_corpus=text_corpus,
    clean_texts=None, # argument for BERT models
    input_language=input_language,
    output_language=None, # allows the output to be translated
    num_keywords=num_keywords,
    num_topics=num_topics,
    corpuses_to_compare=None, # for TFIDF
    return_topics=False, # to inspect topics rather than produce kws
    ignore_words=ignore_words,
    min_freq=2,
    min_word_len=4,
    sample_size=1
)

kwx.model.gen_files saves the results of the above keyword extraction process in a directory or zip file along with desired visuals from the following section.

Visuals

kwx.visuals includes functions for both presenting and analyzing the results of keyword extraction. Each can be called individually, or included in the final output folder or zip.

Topic Number Evaluation

A graph of topic coherence and overlap given a variable number of topics to derive keywords from. The return_ideal_metrics argument is further used to derive the ideal model and topic number in kwx.model.gen_files - allowing for a single run of models and topic numbers to derive optimal keywords.

from kwx.visuals import graph_topic_num_evals

graph_topic_num_evals(
    method=["lda", "bert", "lda_bert"],
    text_corpus=text_corpus,
    clean_texts=None,
    input_language=input_language,
    num_keywords=num_keywords,
    topic_nums_to_compare=list(range(5,15)),
    min_freq=2,
    min_word_len=4,
    sample_size=1,
    metrics=True, # stability and coherence
    fig_size=(20, 10),
    save_file=False,
    return_ideal_metrics=False, # selects ideal model given metrics
)

Word Cloud

Word clouds via wordcloud are included for a basic representation of the text corpus - specifically as a way to convey basic visual information to potential stakeholders. The following figure from examples.extract_kws shows a word cloud generated from tweets of US air carrier passengers:

from kwx.visuals import gen_word_cloud

ignore_words = []

gen_word_cloud(
    text_corpus=text_corpus,
    input_language=input_language,
    ignore_words=ignore_words,
    min_freq=2,
    min_word_len=4,
    sample_size=1,
    height=500,
    save_file=False,
)

pyLDAvis

pyLDAvis is included so that users can inspect LDA extracted topics, and further so that it can easily be generated for output files.

from kwx.visuals import pyLDAvis_topics

pyLDAvis_topics(
    method="lda",
    text_corpus=text_corpus,
    input_language=input_language,
    num_topics=10,
    min_freq=2,
    min_word_len=4,
    sample_size=1,
    save_file=False,
    display_ipython=False, # For Jupyter integration
)

t-SNE

t-SNE allows the user to visualize their topic distribution in both two and three dimensions. Currently available just for LDA, this technique provides another check for model suitability.

from kwx.visuals import t_sne

t_sne(
    dimension="both", # 2d and 3d are options
    text_corpus=text_corpus,
    num_topics=10,
    remove_3d_outliers=True,
    fig_size=(20, 10),
    save_file=False,
)

To-Do

  • Including more methods to extract keywords, as well as improving the current ones
  • Adding BERT sentence-transformers language models as an argument in kwx.model.extract_kws
  • Allowing key phrase extraction
  • Adding t-SNE and pyLDAvis style visualizations for BERT models
  • Including more options to fine-tune the cleaning process in kwx.utils
  • Updates to kwx.languages as lemmatization and other linguistic package dependencies evolve
  • Creating, improving and sharing examples
  • Updating and refining the documentation
