Vajra BM25
Categorical BM25 search engine using pure category theory
Vajra (Sanskrit: वज्र, "thunderbolt") is a high-performance BM25 search engine. It uses Category Theory abstractions to reframe the BM25 algorithm, providing a well-structured API with vectorized implementations. Benchmarks show Vajra is faster than BM25S (one of the fastest Python BM25 libraries) at larger corpus sizes while achieving better recall on certain datasets.
What Makes Vajra Different
Vajra implements the standard BM25 ranking algorithm using rigorous mathematical abstractions:
- Morphisms: BM25 scoring as a mathematical arrow (Query, Document) → ℝ
- Coalgebras: Search as state unfolding QueryState → List[SearchResult]
- Functors: The List functor captures multiple-results semantics
While Vajra BM25 uses the same underlying mathematics as any standard BM25 implementation, it describes the search process in a different vocabulary, and its abstractions are more amenable to experimentation and improvement. The core BM25 formula is identical to other implementations; category theory provides the organizational structure.
Installation
# Basic installation (zero dependencies)
pip install vajra-bm25
# With optimizations (NumPy + SciPy for vectorized operations)
pip install vajra-bm25[optimized]
# With index persistence (save/load indices)
pip install vajra-bm25[persistence]
# Everything
pip install vajra-bm25[all]
Quick Start
The Python API is straightforward, and there is currently support for JSONL document corpora via the DocumentCorpus class.
from vajra_bm25 import VajraSearch, Document, DocumentCorpus
# Create documents
documents = [
    Document(id="1", title="Category Theory", content="Functors preserve structure"),
    Document(id="2", title="Coalgebras", content="Coalgebras model dynamics"),
    Document(id="3", title="Search Algorithms", content="BFS explores level by level"),
]
corpus = DocumentCorpus(documents)
# Create search engine
engine = VajraSearch(corpus)
# Search
results = engine.search("category functors", top_k=5)
for r in results:
    print(f"{r.rank}. {r.document.title} (score: {r.score:.3f})")
Optimized Usage
For larger corpora (1,000+ documents), use the optimized engine, which vectorizes scoring with NumPy and is substantially faster than the base implementation.
from vajra_bm25 import VajraSearchOptimized, DocumentCorpus
# Load corpus from JSONL
corpus = DocumentCorpus.load_jsonl("corpus.jsonl")
# Create optimized engine
# Automatically uses sparse matrices for >10K documents
engine = VajraSearchOptimized(corpus)
# Search (vectorized, cached)
results = engine.search("neural networks", top_k=10)
Parallel Batch Processing
For high-throughput scenarios, use the parallel batch engine, which processes multiple queries concurrently on a thread pool. Parallelism adds some per-query overhead, so a single query may run faster on VajraSearchOptimized; for large query batches, or under memory constraints, the parallel engine can come out ahead.
from vajra_bm25 import VajraSearchParallel
engine = VajraSearchParallel(corpus, max_workers=4)
# Process multiple queries in parallel
queries = ["machine learning", "deep learning", "neural networks"]
batch_results = engine.search_batch(queries, top_k=5)
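A likely way to consume the batch output, assuming search_batch returns one ranked result list per query in input order (an assumption, since the return shape is not documented above):

# Assumption: batch_results[i] holds the SearchResult list for queries[i]
for query, results in zip(queries, batch_results):
    print(f"Query: {query}")
    for r in results:
        print(f"  {r.rank}. {r.document.title} (score: {r.score:.3f})")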
Performance
Benchmarked on synthetic corpora with 10 queries per run, comparing against rank-bm25 and BM25S:
Speed Comparison
| Corpus Size | rank-bm25 (ms) | Vajra Optimized (ms) | Vajra Parallel (ms) | BM25S (ms) |
|---|---|---|---|---|
| 1,000 | 0.46 | 0.05 | 0.01 | 0.06 |
| 10,000 | 7.64 | 0.15 | 0.06 | 0.14 |
| 50,000 | 40.28 | 0.33 | 0.24 | 0.36 |
| 100,000 | 79.62 | 0.34 | 0.26 | 0.47 |
Speedup vs rank-bm25
| Corpus Size | Vajra Optimized | Vajra Parallel | BM25S |
|---|---|---|---|
| 1,000 | 10x | 31x | 8x |
| 10,000 | 52x | 119x | 54x |
| 50,000 | 122x | 167x | 113x |
| 100,000 | 234x | 307x | 168x |
Recall@10 (vs rank-bm25 baseline)
| Corpus Size | Vajra Optimized | Vajra Parallel | BM25S |
|---|---|---|---|
| 1,000 | 99% | 99% | 98% |
| 10,000 | 56% | 56% | 56% |
| 50,000 | 80% | 80% | 56% |
| 100,000 | 50% | 50% | 50% |
Key observations:
- Sub-millisecond query latency at all corpus sizes
- Up to 307x speedup over rank-bm25 at 100K documents with Vajra Parallel
- Vajra Parallel is faster than BM25S at every corpus size; Vajra Optimized is faster at all sizes except 10K documents
- Vajra achieves better recall at 50K docs (80% vs 56%)
- Recall varies by corpus characteristics (vocabulary overlap, document length distribution)
Vajra achieves these speedups through the following techniques (a schematic sketch follows the list):
- Vectorized NumPy operations
- Pre-computed IDF values
- Sparse matrix representation
- LRU query caching
- Partial sort for top-k
- Thread pool parallelism (VajraSearchParallel)
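As a rough illustration of how several of these techniques combine (a sketch, not Vajra's actual internals): vectorized BM25 scoring over a sparse term-document matrix with precomputed IDF, followed by an argpartition-based partial sort for top-k.

import numpy as np
from scipy import sparse

def bm25_scores(tf, idf, doc_len, k1=1.5, b=0.75):
    """tf: (n_docs, n_query_terms) sparse term frequencies; idf: per-term IDF."""
    avgdl = doc_len.mean()
    norm = k1 * (1 - b + b * doc_len / avgdl)        # per-document length normalizer
    tf = np.asarray(tf.todense())                    # densify for clarity
    sat = (tf * (k1 + 1)) / (tf + norm[:, None])     # term-frequency saturation
    return (sat * idf[None, :]).sum(axis=1)          # IDF-weighted sum over terms

# Toy data: 4 documents, 2 query terms
tf = sparse.csr_matrix([[3, 0], [0, 2], [1, 1], [0, 0]])
idf = np.array([1.2, 0.8])
doc_len = np.array([100.0, 80.0, 120.0, 90.0])

scores = bm25_scores(tf, idf, doc_len)
k = 2
top = np.argpartition(scores, -k)[-k:]               # O(n) partial sort
top = top[np.argsort(scores[top])[::-1]]             # order only the k winners
print(top, scores[top])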
For detailed benchmark methodology and results, see docs/benchmarks.md.
JSONL Format
Vajra uses JSONL for corpus persistence:
{"id": "doc1", "title": "First Document", "content": "Content here"}
{"id": "doc2", "title": "Second Document", "content": "More content"}
Load and save:
# Save
corpus.save_jsonl("corpus.jsonl")
# Load
corpus = DocumentCorpus.load_jsonl("corpus.jsonl")
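For illustration only, a hand-rolled loader using just the json module and the constructors shown earlier; DocumentCorpus.load_jsonl presumably does something similar, but this sketch is not the library's code:

import json
from vajra_bm25 import Document, DocumentCorpus

# Each JSONL line carries the id/title/content fields of one Document
with open("corpus.jsonl", encoding="utf-8") as f:
    docs = [Document(**json.loads(line)) for line in f if line.strip()]
corpus = DocumentCorpus(docs)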
BM25 Parameters
from vajra_bm25 import VajraSearch, BM25Parameters
# Custom BM25 parameters
params = BM25Parameters(
    k1=1.5,  # Term frequency saturation (default: 1.5)
    b=0.75   # Length normalization (default: 0.75)
)
engine = VajraSearch(corpus, params=params)
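For reference, k1 and b enter the standard BM25 formula (Robertson & Zaragoza), where f(t, D) is the frequency of term t in document D, |D| is the document length, and avgdl is the average document length over the corpus:

\mathrm{score}(Q, D) = \sum_{t \in Q} \mathrm{IDF}(t) \cdot \frac{f(t, D)\,(k_1 + 1)}{f(t, D) + k_1\left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)}

Larger k1 lets term frequency keep contributing before saturating; b interpolates between no length normalization (b = 0) and full normalization (b = 1).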
Categorical Abstractions (Advanced)
For users interested in the category theory foundations:
from vajra_bm25 import (
    Morphism, FunctionMorphism, IdentityMorphism,
    Coalgebra, SearchCoalgebra,
    Functor, ListFunctor,
)
# Morphism composition
f = FunctionMorphism(lambda x: x + 1)
g = FunctionMorphism(lambda x: x * 2)
h = f >> g # h(x) = (x + 1) * 2
# Identity laws
identity = IdentityMorphism()
assert (f >> identity).apply(5) == f.apply(5) # f . id = f
assert (identity >> f).apply(5) == f.apply(5) # id . f = f
For a more rigorous treatment of these category theory concepts, see Bartosz Milewski's writings and lectures on the subject.
API Reference
Core Classes
- Document(id, title, content, metadata=None) - Immutable document
- DocumentCorpus(documents) - Collection of documents
- VajraSearch(corpus, params=None) - Base search engine
- VajraSearchOptimized(corpus, k1=1.5, b=0.75) - Vectorized search
- VajraSearchParallel(corpus, max_workers=4) - Parallel batch search
Search Results
@dataclass
class SearchResult:
    document: Document  # The matched document
    score: float        # BM25 relevance score
    rank: int           # Position in results (1-indexed)
Why Category Theory?
Category theory provides:
- Unified abstractions - Same Coalgebra.structure_map() interface for graph search and document retrieval
- Explicit type signatures - BM25: (Query, Document) → ℝ makes inputs/outputs clear
- Composable pipelines - preprocess >> score >> rank as morphism composition (sketched below)
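A minimal sketch of that pipeline idea using only the FunctionMorphism composition shown earlier; the tokenizer and scorer here are toy stand-ins, not Vajra's internals:

from vajra_bm25 import FunctionMorphism

# Toy stages: lowercase tokenization, then counting hits on fixed query terms
preprocess = FunctionMorphism(lambda text: text.lower().split())
score = FunctionMorphism(lambda tokens: sum(t in {"category", "functors"} for t in tokens))

pipeline = preprocess >> score   # morphism composition, applied left to right
print(pipeline.apply("Category Theory uses Functors"))  # -> 2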
What it doesn't provide:
- Performance improvements (those come from NumPy/sparse matrices)
- Novel algorithms (BM25 is BM25)
- Runtime machinery (it's just well-organized code)
The honest summary: category theory is a design vocabulary, not a runtime mechanism.
Development
# Clone repository
git clone https://github.com/aiexplorations/vajra_bm25.git
cd vajra_bm25
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
# Run with coverage
pytest --cov=vajra_bm25 --cov-report=html
License
MIT License - see LICENSE for details.
Acknowledgments
- BM25 algorithm: Robertson & Zaragoza, "The Probabilistic Relevance Framework"
- Category theory foundations: Rutten, "Universal Coalgebra: A Theory of Systems"
- Built and explored in the State Dynamic Modeling project
- Inspired by Bartosz Milewski's category theory lecture series on YouTube