InvertedIndex implementation using hash lists (dictionaries)
Project description
Fast and simple InvertedIndex implementation using hash lists (Python dictionaries).
Supports Python 3.5+
Free software: BSD license
Installing
The easiest way to install hashedindex is through PyPI:
pip install hashedindex
Features
hashedindex provides a simple-to-use inverted index structure that is flexible enough to work with a wide range of use cases.
Basic Usage:
>>> import hashedindex
>>> index = hashedindex.HashedIndex()
>>> index.add_term_occurrence('hello', 'document1.txt')
>>> index.add_term_occurrence('world', 'document1.txt')
>>> index.get_documents('hello')
Counter({'document1.txt': 1})
>>> index.items()
{'hello': Counter({'document1.txt': 1}),
 'world': Counter({'document1.txt': 1})}
>>> example = 'The Quick Brown Fox Jumps Over The Lazy Dog'
>>> for term in example.split():
...     index.add_term_occurrence(term, 'document2.txt')
hashedindex is not limited to strings: any hashable object can be indexed.
>>> index.add_term_occurrence('foo', 10)
>>> index.add_term_occurrence(('fire', 'fox'), 90.2)
>>> index.items()
{'foo': Counter({10: 1}), ('fire', 'fox'): Counter({90.2: 1})}
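The underlying idea is straightforward and can be sketched in plain Python. The snippet below is an illustration of the concept, not hashedindex's actual internals: an inverted index is essentially a dictionary mapping each term to a Counter of the documents it appears in.

```python
from collections import Counter, defaultdict

# Sketch of an inverted index: term -> Counter({document: occurrence_count}).
index = defaultdict(Counter)

def add_term_occurrence(term, document, count=1):
    # Any hashable term and document key works, mirroring hashedindex's API.
    index[term][document] += count

add_term_occurrence('hello', 'document1.txt')
add_term_occurrence('hello', 'document1.txt')
add_term_occurrence('world', 'document2.txt')

print(index['hello'])  # Counter({'document1.txt': 2})
```

Because dictionary and Counter lookups are O(1) on average, both indexing a term occurrence and retrieving a term's documents stay fast as the index grows.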
Text Parsing
The hashedindex module includes a powerful textparser module with methods for splitting text into tokens.
>>> from hashedindex import textparser
>>> list(textparser.word_tokenize("hello cruel world"))
[('hello',), ('cruel',), ('world',)]
Tokens are wrapped in tuples because you can request any number of n-grams:
>>> list(textparser.word_tokenize("Life is about making an impact, not making an income.", ngrams=2))
[('life', 'is'), ('is', 'about'), ('about', 'making'), ('making', 'an'), ('an', 'impact'),
 ('impact', 'not'), ('not', 'making'), ('making', 'an'), ('an', 'income')]
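The n-gram windowing shown above can be sketched with a simple sliding window. This is an illustration of the technique, not the library's implementation:

```python
def ngrams(tokens, n=2):
    # Slide a window of size n across the token list, yielding tuples;
    # zip over n staggered views of the list produces each window.
    return list(zip(*(tokens[i:] for i in range(n))))

tokens = 'life is about making an impact'.split()
print(ngrams(tokens, 2))
# [('life', 'is'), ('is', 'about'), ('about', 'making'), ('making', 'an'), ('an', 'impact')]
```

With `n=1` this degenerates to the single-token tuples shown earlier, which is why unigram output is tuple-wrapped too.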
Take a look at the function’s docstring for information on how to use stopwords, specify a min_length for tokens, and configure token output using the ignore_numeric, retain_casing and retain_punctuation parameters.
By default, word_tokenize omits whitespace from the output token stream; whitespace tokens are rarely useful to include in a document term index.
If you need to tokenize text and reassemble an output whose spacing matches the input, you can enable the tokenize_whitespace setting.
>>> list(textparser.word_tokenize('Conventions. May. Differ.', tokenize_whitespace=True))
[('conventions',), (' ',), ('may',), (' ',), ('differ',)]
Stemming
When building an inverted index, it can be useful to resolve related strings to a common root.
For example, in a corpus relating to animals it might be useful to derive a singular noun for each animal; as a result, documents containing either the word dog or dogs could be found under the index entry dog.
The hashedindex module’s text parser provides optional support for stemming by allowing the caller to specify a custom stemmer:
>>> class NaivePluralStemmer:
...     def stem(self, x):
...         return x.rstrip('s')
>>> list(textparser.word_tokenize('It was raining cats and dogs', stemmer=NaivePluralStemmer()))
[('it',), ('wa',), ('raining',), ('cat',), ('and',), ('dog',)]
Integration with Numpy and Pandas
The idea behind hashedindex is to provide a quick and easy way of generating feature matrices for machine learning, for use together with numpy, pandas and scikit-learn. For example:
from hashedindex import textparser
import hashedindex
import numpy as np

index = hashedindex.HashedIndex()

documents = ['spam1.txt', 'ham1.txt', 'spam2.txt']
for doc in documents:
    with open(doc, 'r') as fp:
        for term in textparser.word_tokenize(fp.read()):
            index.add_term_occurrence(term, doc)

# You *probably* want to use scipy.sparse.csr_matrix for better performance
X = np.asarray(index.generate_feature_matrix(mode='tfidf'))

y = []
for doc in index.documents():
    y.append(1 if 'spam' in doc else 0)
y = np.asarray(y)

from sklearn.svm import SVC
classifier = SVC(kernel='linear')
classifier.fit(X, y)
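The example above passes mode='tfidf' to generate_feature_matrix. As a rough illustration of what a tf-idf weighting computes (the exact formula hashedindex uses may differ), here is a toy calculation over a hand-built index of the same shape as index.items():

```python
import math
from collections import Counter

# Toy corpus: term -> Counter of documents, matching the index structure above.
index = {
    'spam': Counter({'spam1.txt': 3, 'spam2.txt': 1}),
    'ham':  Counter({'ham1.txt': 2}),
}
documents = ['spam1.txt', 'ham1.txt', 'spam2.txt']

def tfidf(term, doc):
    # Term frequency: occurrences of the term in doc, over total terms in doc.
    doc_length = sum(counts[doc] for counts in index.values())
    tf = index[term][doc] / doc_length
    # Inverse document frequency: discounts terms that appear in many documents.
    idf = math.log(len(documents) / len(index[term]))
    return tf * idf
```

Terms concentrated in few documents score highest, which is what makes the resulting matrix useful as classifier input.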
You can also extend your feature matrix to a more verbose pandas DataFrame:
import pandas as pd
X = index.generate_feature_matrix(mode='tfidf')
df = pd.DataFrame(X, columns=index.terms(), index=index.documents())
All methods in the codebase have high test coverage, so you can be confident everything works as expected.
Reporting Bugs
Found a bug? Nice, a bug found is a bug fixed. Open an issue or, better yet, open a pull request.
History
0.10.0 (2020-10-19)
Add optional count parameter to the add_term_occurrence method (@jayaddison)
0.9.0 (2020-07-14)
Support non-ASCII characters during tokenization (@jayaddison)
0.8.0 (2019-05-08)
Add option to retain punctuation in word_tokenize (@jayaddison)
Add option to include whitespace tokens in word_tokenize results (@jayaddison)
0.7.1 (2019-04-30)
Fix minor issue in history changelog
0.7.0 (2019-04-30)
Add support for retaining token casing in word_tokenize (Thanks @jayaddison)
0.6.0 (2019-12-11)
Add support for running stemming operations with word_tokenize (Thanks @jayaddison)
Add official support for python 3.8
0.5.0 (2019-07-21)
Drop support for python 2.7 and 3.4
0.1.0 (2015-01-11)
First release on PyPI.
Project details
File details
Details for the file hashedindex-0.10.0.tar.gz
File metadata
- Download URL: hashedindex-0.10.0.tar.gz
- Upload date:
- Size: 24.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.24.0 setuptools/44.1.1 requests-toolbelt/0.8.0 tqdm/4.43.0 CPython/3.8.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | cd44b900524976168f256323e0f9d2e7178cb48e9d6789ee0a79cc651988696e
MD5 | dca3c67c9b0e82eeac7f97c0fd39f5df
BLAKE2b-256 | ab8a3a20f889d6cb7cf05c327a536f6a021023eb43ca527376cf24c87546cb39
File details
Details for the file hashedindex-0.10.0-py2.py3-none-any.whl
File metadata
- Download URL: hashedindex-0.10.0-py2.py3-none-any.whl
- Upload date:
- Size: 9.3 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.24.0 setuptools/44.1.1 requests-toolbelt/0.8.0 tqdm/4.43.0 CPython/3.8.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | c03de1886f3f883d72579049cc8f00dc09c94aedc3ce176145e9558a6f48e607
MD5 | c2be11afe2093acf4aacfe733f898ca9
BLAKE2b-256 | 9b5b2dc35f7f451f2ae7b8f4b42e3cfbd211faa8966946e3e7d8d4d0b3c9f40e