Fast Keyword Identification
Fast keyword identification with n-gram vector string matching.
Overview
This package provides a generic pipeline for fuzzy identification of keywords in large document collections. For example, if you wish to find all occurrences of the keyword "Walmart" in a large collection but expect some typos or variations in spelling, this module lets you quickly identify all matches. The matcher is based on a character n-gram vector model rather than the slower string edit distance. The module was originally designed for brand-monitoring applications.
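The character n-gram approach can be sketched as follows: each string becomes a vector of n-gram counts, and two strings match when their cosine similarity clears a threshold. This is only a minimal illustration of the technique, not the package's actual implementation; the names `ngram_vector`, `cosine`, and `fuzzy_match` and the boundary-padding scheme are assumptions.

```python
from collections import Counter
import math

def ngram_vector(s, n=3):
    """Character n-gram counts for a string, padded at the boundaries."""
    s = f"#{s.lower()}#"
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fuzzy_match(keyword, token, bound=0.75):
    """Accept a token when its n-gram vector is close enough to the keyword's."""
    return cosine(ngram_vector(keyword), ngram_vector(token)) >= bound

# A misspelling shares most trigrams with the keyword, so it still matches,
# while an unrelated word shares none.
fuzzy_match("walmart", "wallmart")  # True
fuzzy_match("walmart", "target")    # False
```

Because the vectors can be precomputed for every keyword, comparing a candidate token against many keywords is much cheaper than running an edit-distance computation per pair.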
Installation
pip install fast-keywords
CLI
python -m fast_keywords --help
Usage
python -m fast_keywords -k keywords.txt -c corpus.csv -l english -b 0.75
Training Models for Additional Filtering
While the main script will search for keywords in the provided corpus, filtering according to the match confidence, you can also train and use simple text classifiers as an additional filter to remove dubious matches. For example, if you are searching for the company "apple," but find your searches frequently return references to fruit, you can train a model which will exclude those matches based on the surrounding text of matched keywords. Instructions for model training and usage are provided below.
- After searching for keywords you will find a column "Match is Invalid" in the output.xlsx file.
- Modify this column, changing matches which should be filtered out to "1".
- Train a new model using the --train flag, providing the modified output.xlsx file and the original keywords file, as in the command below.

python -m fast_keywords --train -d output.xlsx -k keywords.txt

- The train command will create a directory with several model.pb files which you can distribute and use for filtering. You should use the absolute path to this containing directory as the model path passed with the -m flag.
- You can use your models when predicting as in the command below. You can also pass previously-trained models with the -m flag to continue training on new data when running the train command.

python -m fast_keywords -k keywords.txt -c corpus.csv -l english -b 0.75 -m model.pb
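The context-based filtering described above can be approximated with a tiny bag-of-words Naive Bayes classifier over the text surrounding each match. This is an illustrative sketch of the idea only, not the package's classifier; the function names `train_context_filter` and `is_invalid` and the example data are hypothetical.

```python
from collections import Counter
import math

def train_context_filter(contexts, labels):
    """Count context words per class (0 = valid match, 1 = invalid match)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in zip(contexts, labels):
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def is_invalid(context, counts, totals):
    """True when the surrounding text looks more like an invalid match."""
    vocab = len(set(counts[0]) | set(counts[1])) or 1
    scores = {}
    for label in (0, 1):
        score = 0.0
        for w in context.lower().split():
            # Laplace-smoothed log-likelihood of each context word
            score += math.log((counts[label][w] + 1) / (totals[label] + vocab))
        scores[label] = score
    return scores[1] > scores[0]

# Hypothetical training data: "apple" matches about fruit are invalid (1),
# matches about the company are valid (0).
contexts = [
    "apple pie recipe with fresh fruit",
    "bought a fresh apple at the market",
    "apple announced a new iphone",
    "apple stock rose today",
]
labels = [1, 1, 0, 0]
counts, totals = train_context_filter(contexts, labels)
```

After training, a fruit-flavored context such as "apple pie with fruit" scores higher under the invalid class and gets filtered, while company-flavored contexts pass through.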
Notes
- Your input .csv must have a "text" column containing documents.
- The main script will create a file output.xlsx summarizing identified keywords and their metadata.
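For reference, a corpus with the required "text" column can be produced with the standard library. The file name and the documents below are just examples, not data shipped with the package.

```python
import csv

# Each row of the required "text" column holds one document.
rows = [
    {"text": "Shopped at Walmart yesterday."},
    {"text": "Wallmart opened a new store."},  # typo the fuzzy matcher should still catch
]
with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    writer.writerows(rows)
```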