
Fast Keyword Identification

Fast keyword identification with n-gram vector string matching.


Overview

This package provides a generic pipeline for fuzzy identification of keywords in large document collections. For example, if you wish to find all occurrences of the keyword "Walmart" in a large collection but expect typos or variations in spelling, this module lets you quickly identify all matches. The matcher is based on a character n-gram vector model rather than the slower string edit distance. The module was originally developed for brand monitoring applications.
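The core idea, comparing character n-gram count vectors instead of computing edit distances, can be sketched as follows. This is an illustrative sketch of the general technique, not the package's internal implementation; the function names are assumptions.

```python
from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    """Character n-grams of a string, with boundary padding so that
    short strings still produce at least one gram."""
    padded = f"#{text.lower()}#"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def ngram_similarity(a, b, n=3):
    """Cosine similarity between the n-gram count vectors of two strings.
    Returns a value in [0, 1]; identical strings score 1.0."""
    va, vb = ngrams(a, n), ngrams(b, n)
    dot = sum(va[g] * vb[g] for g in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0
```

A near-miss like "Wallmart" shares most of its trigrams with "Walmart" and scores high, while an unrelated word shares almost none; thresholding this score (the `-b` flag's bound) is what makes the matching fuzzy yet fast, since no quadratic edit-distance computation is needed.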


Installation

pip install fast-keywords

CLI

python -m fast_keywords --help

Usage

python -m fast_keywords -k keywords.txt -c corpus.csv -l english -b 0.75

Training Models for Additional Filtering

While the main script searches for keywords in the provided corpus, filtering matches by confidence, you can also train and use simple text classifiers as an additional filter to remove dubious matches. For example, if you are searching for the company "apple" but find that your searches frequently return references to the fruit, you can train a model that excludes those matches based on the text surrounding matched keywords. Instructions for model training and usage are provided below.

  1. After searching for keywords you will find a column "Match is Invalid" in the output.xlsx file.

  2. Modify this column, changing matches which should be filtered out to "1".

  3. Train a new model using the --train flag, providing the modified output.xlsx file and the original keywords file, as in the command below.

    1. python -m fast_keywords --train -d output.xlsx -k keywords.txt
      
  4. The train command will create a directory with several model.pb files which you can distribute and use for filtering. You should use the absolute path to this containing directory as the model path passed with the -m flag.

  5. Use your trained models when predicting, as in the command below. You can also pass previously trained models with the -m flag when running the train command to continue training on new data.

python -m fast_keywords -k keywords.txt -c corpus.csv -l english -b 0.75 -m model.pb
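The idea behind such a context filter can be sketched with a tiny Naive Bayes classifier over the words surrounding each match. This is purely illustrative: the class name, methods, and model are assumptions, not the package's actual implementation.

```python
from collections import Counter
import math

class ContextFilter:
    """Toy Naive Bayes over context words: label 1 = invalid match
    (e.g. "apple" the fruit), label 0 = valid match (the company)."""

    def __init__(self):
        self.counts = {0: Counter(), 1: Counter()}  # word counts per label
        self.totals = {0: 0, 1: 0}                  # total words per label
        self.docs = {0: 0, 1: 0}                    # training examples per label

    def train(self, context, invalid):
        words = context.lower().split()
        self.counts[invalid].update(words)
        self.totals[invalid] += len(words)
        self.docs[invalid] += 1

    def is_invalid(self, context):
        words = context.lower().split()
        vocab = len(set(self.counts[0]) | set(self.counts[1]))
        scores = {}
        for label in (0, 1):
            prior = math.log(self.docs[label] / sum(self.docs.values()))
            # Laplace-smoothed log-likelihood of the context words
            scores[label] = prior + sum(
                math.log((self.counts[label][w] + 1) / (self.totals[label] + vocab))
                for w in words)
        return scores[1] > scores[0]
```

After training on labeled contexts (the "Match is Invalid" column from step 2), a match whose surrounding text looks like the fruit sense is flagged and can be dropped from the output.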

Notes

  • Your input .csv must have a "text" column containing documents.
  • The main script will create a file output.xlsx summarizing identified keywords and their metadata.
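For example, a minimal input corpus with the required "text" column can be prepared with the standard library. The file names match the usage examples above; the one-keyword-per-line format for keywords.txt is an assumption.

```python
import csv

# Build a minimal corpus.csv with the required "text" column.
documents = [
    "Walmart announced new store openings this quarter.",
    "I shopped at Wallmart yesterday.",  # misspelling the matcher should catch
    "Unrelated document about groceries.",
]

with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    writer.writerows({"text": doc} for doc in documents)

# Assumed format: one keyword per line.
with open("keywords.txt", "w", encoding="utf-8") as f:
    f.write("Walmart\n")
```

With these two files in place, the usage command above should produce output.xlsx with one row per identified match.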

