Unsupervised BERT and LDA based keyword extraction in Python

Jump to: Models • Usage • Visuals • To-Do

kwx is a toolkit for unsupervised keyword extraction based on Latent Dirichlet Allocation and Google's BERT. It provides a multilingual suite of methods to process texts and then extract and analyze keywords from the created corpus (see kwx.languages for all available languages). A unique focus is allowing users to decide which words not to include in outputs, so they can use their own intuitions to fine-tune the modeling process.

For a thorough overview of the process and techniques see the Google slides, and reference the documentation for explanations of the models and visualization methods.

Installation via PyPI

kwx can be downloaded from PyPI via pip or sourced directly from this repository:

pip install kwx

git clone https://github.com/andrewtavis/kwx.git
cd kwx
python setup.py install

kwx can then be imported into Python:

import kwx

Models

LDA

Latent Dirichlet Allocation is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. In the case of kwx, documents or text entries are posited to be a mixture of a given number of topics, and the presence of each word in a text body comes from its relation to these derived topics.

Although not as statistically powerful as the machine learning approaches that follow, LDA provides quick results that are suitable for many applications.

BERT

Bidirectional Encoder Representations from Transformers derives representations of words by running NLP models over open-source Wikipedia data. These representations can then be leveraged to derive corpus topics.
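The embed-then-cluster idea can be sketched as follows, with TF-IDF vectors standing in for BERT sentence embeddings to keep the example light; the corpus and clustering setup are illustrative assumptions, not kwx's implementation:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the flight was delayed for two hours",
    "another delayed flight and lost baggage",
    "wonderful service from the friendly crew",
    "the crew provided excellent service",
]

# Stand-in document vectors; in practice these would be BERT embeddings
embeddings = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Cluster the document vectors; each cluster approximates a topic,
# and keywords are then drawn from the documents in each cluster
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
print(kmeans.labels_)
```

Here the two delay-related documents land in one cluster and the two service-related documents in the other, which is the grouping a topic model should recover.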

LDA with BERT embeddings

LDA and BERT representations can also be combined via kwx.autoencoder to derive keywords.

Other

The user can also choose to simply query the most common words from a text corpus, or to compute TFIDF (Term Frequency Inverse Document Frequency) keywords - those that are unique to a text body in comparison to another corpus. The former method is used in kwx as a baseline to check model efficacy, and the latter is a useful baseline when the user has another text or text body to compare the target corpus against.
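A minimal stdlib sketch of these two baselines - word counts and a hand-rolled TF-IDF score against a comparison corpus; the corpora and the exact scoring formula are illustrative assumptions, and kwx's own implementation may differ:

```python
import math
from collections import Counter

target = "the flight was late and the flight was cancelled".split()
other = "the hotel room was clean and the staff was kind".split()

# Baseline 1: most common words in the target corpus
print(Counter(target).most_common(3))  # [('the', 2), ('flight', 2), ('was', 2)]

# Baseline 2: TF-IDF of each target word against the comparison corpus
corpora = [target, other]

def tfidf(word, corpus):
    tf = corpus.count(word) / len(corpus)
    df = sum(1 for c in corpora if word in c)
    idf = math.log(len(corpora) / df) + 1  # +1 keeps shared words from zeroing out
    return tf * idf

scores = {w: tfidf(w, target) for w in set(target)}
top = sorted(scores, key=scores.get, reverse=True)[:3]
# "flight" ranks first since it is frequent in the target and absent from "other"
print(top)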

Usage

Keyword extraction can be useful for analyzing surveys, tweets, other kinds of social media posts, research papers, and many other classes of texts. examples.kw_extraction provides an example of how to use kwx by deriving keywords from tweets in the Kaggle Twitter US Airline Sentiment dataset.

The following outlines using kwx to derive keywords from a text corpus with prompt_remove_words as True (the user will be asked if some of the extracted words need to be replaced):

from kwx.utils import prepare_data
from kwx.model import extract_kws

input_language = "english" # see kwx.languages for options
num_keywords = 15
num_topics = 10
ignore_words = ["words", "user", "knows", "they", "don't", "want"]

# Arguments from examples.kw_extraction
text_corpus = prepare_data(
    data='df-or-csv/xlsx-path',
    target_cols='cols-where-texts-are',
    input_language=input_language,
    min_freq=2,  # remove infrequent words
    min_word_len=3,  # remove small words
    sample_size=1,  # sample size (for testing)
)[0]

bert_kws = extract_kws(
    method='BERT', # 'LDA', 'BERT', or 'LDA_BERT'
    text_corpus=text_corpus,
    input_language=input_language,
    output_language=None,  # allows the output to be translated
    num_keywords=num_keywords,
    num_topics=num_topics,
    corpuses_to_compare=None,  # for TFIDF
    return_topics=False,  # True to inspect topics rather than produce kws
    ignore_words=ignore_words,
    prompt_remove_words=True,  # check words with user
)
The BERT keywords are:

['time', 'flight', 'plane', 'southwestair', 'ticket', 'cancel', 'united', 'baggage',
'love', 'virginamerica', 'service', 'customer', 'delay', 'late', 'hour']

Are there words that should be removed [y/n]? y
Type or copy word(s) to be removed: southwestair, united, virginamerica

The new BERT keywords are:

['late', 'baggage', 'service', 'flight', 'time', 'love', 'book', 'customer',
'response', 'hold', 'hour', 'cancel', 'cancelled_flighted', 'delay', 'plane']

Are there words that should be removed [y/n]? n

The model will be rerun until all words known to be unreasonable are removed and a suitable output is produced. kwx.model.gen_files could also be used as a run-all function that produces a directory with a keyword text file and visuals (for experienced users wanting quick results).

Visuals

kwx.visuals includes functions for both presenting and analyzing the results of keyword extraction.

Topic Number Evaluation

A graph of topic coherence and overlap given a variable number of topics to derive keywords from.

from kwx.visuals import graph_topic_num_evals
import matplotlib.pyplot as plt

graph_topic_num_evals(
    method=["lda", "bert", "lda_bert"],
    text_corpus=text_corpus,
    input_language=input_language,
    num_keywords=num_keywords,
    topic_nums_to_compare=list(range(5, 15)),
    sample_size=1,
    metrics=True, # stability and coherence
    return_ideal_metrics=False, # selects ideal model given metrics for kwx.model.gen_files
)
plt.show()

pyLDAvis

pyLDAvis is included so that users can inspect LDA-extracted topics, and further so that its visualizations can easily be generated for output files.

from kwx.visuals import pyLDAvis_topics

pyLDAvis_topics(
    method="lda",
    text_corpus=text_corpus,
    input_language=input_language,
    num_topics=10,
    display_ipython=False,  # True for display in Jupyter notebooks
)

Word Cloud

Word clouds via wordcloud are included as a basic representation of the text corpus - specifically a way to convey basic visual information to potential stakeholders. The following figure from examples.kw_extraction shows a word cloud generated from tweets of US air carrier passengers:

from kwx.visuals import gen_word_cloud

ignore_words = ["words", "user", "knows", "they", "don't", "want"]

gen_word_cloud(
    text_corpus=text_corpus,
    input_language=input_language,
    ignore_words=ignore_words,
    height=500,
)

t-SNE

t-SNE allows the user to visualize their topic distribution in both two and three dimensions. Currently available just for LDA, this technique provides another check for model suitability.

from kwx.visuals import t_sne
import matplotlib.pyplot as plt

t_sne(
    dimension="both",  # 2d and 3d are options
    text_corpus=text_corpus,
    num_topics=10,
    remove_3d_outliers=True,
)
plt.show()

To-Do

  • Including more methods to extract keywords, as well as improving the current ones
  • Adding BERT sentence-transformers language models as an argument in kwx.model.extract_kws
  • Splitting the keyword selection process from kwx.model.extract_kws into kwx.model.select_kws to allow for faster result iteration given user input
  • Allowing key phrase extraction
  • Adding t-SNE and pyLDAvis style visualizations for BERT models
  • Including more options to fine tune the cleaning process in kwx.utils
  • Updates to kwx.languages as lemmatization and other linguistic package dependencies evolve
  • Creating, improving and sharing examples
  • Improving tests for greater code coverage
  • Updating and refining the documentation


