Haystack 2.x In-memory Document Store with Enhanced Efficiency
Project description
Better BM25 In-Memory Document Store
An in-memory document store is a great starting point for prototyping and debugging before migrating to production-grade stores like Elasticsearch. However, the original implementation of BM25 retrieval recreates an inverted index for the entire document store on every new search. Furthermore, the tokenization method is primitive, only permitting regular-expression-based splitting, which makes localization and domain adaptation challenging. This implementation is therefore a slight upgrade to the default BM25 in-memory document store: it adds incremental index updates and incorporates SentencePiece statistical sub-word tokenization.
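To see why incremental updates matter, consider a toy inverted index that updates its postings as each document is written, so searches never trigger a full rebuild. This is an illustrative pure-Python sketch, not the package's actual implementation:

```python
from collections import defaultdict

class IncrementalInvertedIndex:
    """Toy inverted index updated per document at write time,
    so a search never has to rebuild the index from scratch."""

    def __init__(self):
        self.postings = defaultdict(dict)  # token -> {doc_id: term frequency}
        self.doc_len = {}                  # doc_id -> token count

    def add(self, doc_id, text):
        tokens = text.lower().split()
        self.doc_len[doc_id] = len(tokens)
        for tok in tokens:
            self.postings[tok][doc_id] = self.postings[tok].get(doc_id, 0) + 1

    def remove(self, doc_id):
        self.doc_len.pop(doc_id, None)
        for docs in self.postings.values():
            docs.pop(doc_id, None)

index = IncrementalInvertedIndex()
index.add("d1", "hello world")
index.add("d2", "hello again")
print(index.postings["hello"])  # {'d1': 1, 'd2': 1}
```

With the index maintained this way, a query only needs to read the postings of its own tokens; writing or deleting a document touches only that document's tokens.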
Installation
```shell
$ pip install bbm25-haystack
```
Alternatively, you can clone the repository and install from source, so that changes to the source code are reflected immediately:
```shell
$ git clone https://github.com/Guest400123064/bbm25-haystack.git
$ cd bbm25-haystack
$ pip install -e .
```
Usage
Quick Start
Below is an example of how you can build a minimal search engine with the `bbm25_haystack` components on their own. They are also compatible with Haystack pipelines.
```python
from haystack import Document
from bbm25_haystack import BetterBM25DocumentStore, BetterBM25Retriever

document_store = BetterBM25DocumentStore()
document_store.write_documents([
    Document(content="There are over 7,000 languages spoken around the world today."),
    Document(content="Elephants have been observed to behave in a way that indicates a high level of self-awareness, such as recognizing themselves in mirrors."),
    Document(content="In certain parts of the world, like the Maldives, Puerto Rico, and San Diego, you can witness the phenomenon of bio-luminescent waves.")
])

retriever = BetterBM25Retriever(document_store)
retriever.run(query="How many languages are spoken around the world today?")
```
API References
You can find the full API references here. In a hurry? Below are the most important document store parameters you may want to explore:

- `k`, `b`, `delta`: the three BM25+ hyperparameters.
- `sp_file`: a path to a trained SentencePiece tokenizer `.model` file. The default tokenizer is copied directly from the LLaMA-2-7B-32K tokenizer, with a vocabulary size of 32,000.
- `n_grams`: defaults to 1, meaning that text (both query and document) is tokenized into unigrams. If set to 2, the tokenizer also augments the list of unigrams with bigrams, and so on. If specified as a tuple, e.g., `(2, 3)`, the tokenizer produces only bigrams and trigrams, without any unigrams.
- `haystack_filter_logic`: see below.

The retriever parameters are largely the same as `InMemoryBM25Retriever`.
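The `n_grams` behavior can be illustrated with a hypothetical helper (not the package's actual tokenizer) that mirrors the rules described above:

```python
def augment_ngrams(tokens, n_grams=1):
    """Produce n-grams per the documented rules: an int N yields
    1-grams through N-grams; a tuple (lo, hi) yields only lo..hi-grams."""
    lo, hi = (1, n_grams) if isinstance(n_grams, int) else n_grams
    return [
        " ".join(tokens[i:i + n])
        for n in range(lo, hi + 1)
        for i in range(len(tokens) - n + 1)
    ]

tokens = ["new", "york", "city"]
print(augment_ngrams(tokens, 2))       # unigrams plus bigrams
print(augment_ngrams(tokens, (2, 3)))  # only bigrams and trigrams
```

Higher-order n-grams let BM25 reward phrase matches (e.g., "new york") rather than only individual word overlaps, at the cost of a larger vocabulary.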
Filtering Logic
The document store by default uses the `document_matches_filter` function shipped with Haystack to perform filtering, which is the same behavior as `InMemoryDocumentStore`.

However, this implementation also ships an alternative filtering logic (unstable at this point). To use it, initialize the document store with `haystack_filter_logic=False`. Please find comments and implementation details in `filters.py`. TL;DR:
- Any comparison involving `None`, i.e., a missing value, always returns `False`, regardless of whether it is the document attribute value or the filter value that is missing.
- Comparison with `pandas.DataFrame` is always prohibited, to reduce surprises.
- No implicit `datetime` conversion from string values.
- `in` and `not in` accept any `Iterable` as the filter value, without the `list` constraint.
In this case, the negation logic needs to be reconsidered, because `False` can now result from both the input nullity check and the actual comparison. For instance, `in` and `not in` both yield non-matching upon missing values. But separating input processing from the comparisons makes the filtering behavior more transparent.
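The nullity-first behavior can be sketched with a hypothetical comparison function (names are illustrative and not the actual `filters.py` API):

```python
from collections.abc import Iterable

def compare_in(doc_value, filter_value, negate=False):
    """Nullity-first comparison: a missing value never matches, even
    under negation; any non-string Iterable is accepted as filter value."""
    if doc_value is None or filter_value is None:
        # Both `in` and `not in` are non-matching when a value is missing:
        # negation applies to the comparison, not to the nullity check.
        return False
    if not isinstance(filter_value, Iterable) or isinstance(filter_value, str):
        raise TypeError("filter value must be a non-string Iterable")
    matched = doc_value in filter_value
    return not matched if negate else matched

print(compare_in("en", {"en", "fr"}))         # True: any Iterable is accepted
print(compare_in(None, {"en"}, negate=True))  # False: nullity wins over negation
```

The key design choice shown here is that the nullity check happens before, and independently of, negation, which is why `not in` still yields `False` on a missing value.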
Search Quality Evaluation
This repo includes a simple script to help evaluate search quality on the BEIR benchmark. You need to clone the repository (or manually download the script and place it under a folder named `scripts`), and you have to install additional dependencies to run it:
```shell
$ pip install beir
```
To run the script, you may want to specify the dataset name and BM25 hyperparameters. For example:
```shell
$ python scripts/benchmark_beir.py --datasets scifact arguana --bm25-k1 1.2 --n-grams 2 --output eval.csv
```
The script automatically downloads the benchmarking datasets to `benchmarks/beir`, where `benchmarks` is at the same level as `scripts`. You may also check the help page for more information:

```shell
$ python scripts/benchmark_beir.py --help
```
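BEIR evaluations commonly report ranking metrics such as nDCG@10. For reference, here is a minimal sketch of how nDCG@k is computed (illustrative only; the script's exact metrics and implementation may differ):

```python
import math

def ndcg_at_k(ranked_ids, relevance, k=10):
    """nDCG@k: discounted cumulative gain of the top-k retrieved
    documents, normalized by the gain of the ideal ordering."""
    dcg = sum(
        relevance.get(doc_id, 0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_ids[:k])
    )
    ideal = sum(
        rel / math.log2(rank + 2)
        for rank, rel in enumerate(sorted(relevance.values(), reverse=True)[:k])
    )
    return dcg / ideal if ideal > 0 else 0.0

# Ranking the more relevant document first achieves the ideal score.
print(ndcg_at_k(["d1", "d2"], {"d1": 2, "d2": 1}))  # → 1.0
```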
New benchmarking scripts are expected to be added in the future.
License
`bbm25-haystack` is distributed under the terms of the Apache-2.0 license.