Vajra BM25

Categorical BM25 search engine using pure category theory

Python 3.8+ | License: MIT

Vajra (Sanskrit: वज्र, "thunderbolt/diamond") is a BM25 search engine built on pure category theory.

What Makes Vajra Different

Vajra implements the standard BM25 ranking algorithm using rigorous mathematical abstractions:

  • Morphisms: BM25 scoring as a mathematical arrow (Query, Document) → ℝ
  • Coalgebras: Search as state unfolding QueryState → List[SearchResult]
  • Functors: The List functor captures multiple-results semantics

The same math, different vocabulary. The core BM25 formula is identical to other implementations—category theory provides the organizational structure, not runtime magic.
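For reference, the standard BM25 formula the library implements can be sketched in plain, dependency-free Python (a minimal sketch of the Robertson–Zaragoza formulation, not the library's actual internals):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with standard BM25.

    corpus is a list of tokenized documents (lists of terms).
    """
    N = len(corpus)                              # number of documents
    avgdl = sum(len(d) for d in corpus) / N      # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)    # smoothed IDF
        tf = doc_terms.count(term)                         # term frequency
        denom = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * (k1 + 1) * tf / denom
    return score
```

The `k1` and `b` defaults match the `BM25Parameters` defaults shown later in this README.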

Installation

# Basic installation (zero dependencies)
pip install vajra-bm25

# With optimizations (NumPy + SciPy for vectorized operations)
pip install vajra-bm25[optimized]

# With index persistence (save/load indices)
pip install vajra-bm25[persistence]

# Everything
pip install vajra-bm25[all]

Quick Start

from vajra_bm25 import VajraSearch, Document, DocumentCorpus

# Create documents
documents = [
    Document(id="1", title="Category Theory", content="Functors preserve structure"),
    Document(id="2", title="Coalgebras", content="Coalgebras model dynamics"),
    Document(id="3", title="Search Algorithms", content="BFS explores level by level"),
]
corpus = DocumentCorpus(documents)

# Create search engine
engine = VajraSearch(corpus)

# Search
results = engine.search("category functors", top_k=5)

for r in results:
    print(f"{r.rank}. {r.document.title} (score: {r.score:.3f})")

Optimized Usage

For larger corpora (1000+ documents), use the optimized version:

from vajra_bm25 import VajraSearchOptimized, DocumentCorpus

# Load corpus from JSONL
corpus = DocumentCorpus.load_jsonl("corpus.jsonl")

# Create optimized engine
# Automatically uses sparse matrices for >10K documents
engine = VajraSearchOptimized(corpus)

# Search (vectorized, cached)
results = engine.search("neural networks", top_k=10)

Parallel Batch Processing

For high-throughput scenarios:

from vajra_bm25 import VajraSearchParallel

engine = VajraSearchParallel(corpus, max_workers=4)

# Process multiple queries in parallel
queries = ["machine learning", "deep learning", "neural networks"]
batch_results = engine.search_batch(queries, top_k=5)

Performance

At 100,000 documents:

Implementation       Query Latency   Recall@10
rank-bm25            133.54 ms       baseline
Vajra (base)         59.14 ms        65.0%
Vajra (optimized)    1.39 ms         66.5%

Vajra (optimized) achieves a 96x speedup over rank-bm25 through:

  • Vectorized NumPy operations
  • Pre-computed IDF values
  • Sparse matrix representation
  • LRU query caching
  • Partial sort for top-k
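The "partial sort for top-k" trick in the last bullet can be illustrated with NumPy's `argpartition`, which selects the k largest scores in O(n) and then sorts only those k (a sketch of the technique, not the library's actual code):

```python
import numpy as np

def top_k(scores: np.ndarray, k: int):
    """Return (indices, scores) of the k highest entries without a full sort."""
    k = min(k, len(scores))
    idx = np.argpartition(scores, -k)[-k:]    # unordered top-k candidates, O(n)
    idx = idx[np.argsort(scores[idx])[::-1]]  # sort only those k, descending
    return idx, scores[idx]
```

For a 100,000-document corpus with `top_k=10`, this avoids sorting 99,990 scores that will never be shown.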

JSONL Format

Vajra uses JSONL for corpus persistence:

{"id": "doc1", "title": "First Document", "content": "Content here"}
{"id": "doc2", "title": "Second Document", "content": "More content"}

Load and save:

# Save
corpus.save_jsonl("corpus.jsonl")

# Load
corpus = DocumentCorpus.load_jsonl("corpus.jsonl")
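JSONL is simply one JSON object per line, so parsing needs nothing beyond the standard library. A minimal reader equivalent in spirit to `load_jsonl` might look like this (a sketch, not the library's implementation):

```python
import json

def load_jsonl(path):
    """Parse one JSON object per non-blank line of a JSONL file."""
    docs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                docs.append(json.loads(line))
    return docs
```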

BM25 Parameters

from vajra_bm25 import VajraSearch, BM25Parameters

# Custom BM25 parameters
params = BM25Parameters(
    k1=1.5,  # Term frequency saturation (default: 1.5)
    b=0.75   # Length normalization (default: 0.75)
)

engine = VajraSearch(corpus, params=params)

Categorical Abstractions (Advanced)

For users interested in the category theory foundations:

from vajra_bm25 import (
    Morphism, FunctionMorphism, IdentityMorphism,
    Coalgebra, SearchCoalgebra,
    Functor, ListFunctor,
)

# Morphism composition
f = FunctionMorphism(lambda x: x + 1)
g = FunctionMorphism(lambda x: x * 2)
h = f >> g  # h(x) = (x + 1) * 2

# Identity laws
identity = IdentityMorphism()
assert (f >> identity).apply(5) == f.apply(5)  # id . f = f
assert (identity >> f).apply(5) == f.apply(5)  # f . id = f
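The coalgebra abstraction ("search as state unfolding") can be sketched in a few lines. This is a hypothetical illustration of the idea behind `SearchCoalgebra`; the real `vajra_bm25` classes may be shaped differently:

```python
from dataclasses import dataclass
from typing import Callable, Generic, List, TypeVar

S = TypeVar("S")  # state type, e.g. a query state
O = TypeVar("O")  # observation type, e.g. a search result

@dataclass
class Coalgebra(Generic[S, O]):
    """A coalgebra for the List functor: a structure map S -> List[O]."""
    structure_map: Callable[[S], List[O]]

    def unfold(self, state: S) -> List[O]:
        return self.structure_map(state)

# Search as unfolding: a query state unfolds into a ranked result list.
search = Coalgebra(structure_map=lambda q: sorted(q["candidates"], reverse=True))
search.unfold({"candidates": [0.2, 0.9, 0.5]})  # highest score first
```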

API Reference

Core Classes

  • Document(id, title, content, metadata=None) - Immutable document
  • DocumentCorpus(documents) - Collection of documents
  • VajraSearch(corpus, params=None) - Base search engine
  • VajraSearchOptimized(corpus, k1=1.5, b=0.75) - Vectorized search
  • VajraSearchParallel(corpus, max_workers=4) - Parallel batch search

Search Results

@dataclass
class SearchResult:
    document: Document  # The matched document
    score: float        # BM25 relevance score
    rank: int           # Position in results (1-indexed)

Why Category Theory?

Category theory provides:

  1. Unified abstractions - Same Coalgebra.structure_map() interface for graph search and document retrieval
  2. Explicit type signatures - BM25: (Query, Document) → ℝ makes inputs/outputs clear
  3. Composable pipelines - preprocess >> score >> rank as morphism composition
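The third point can be made concrete with a tiny composable wrapper supporting `>>` (a hypothetical sketch; the library's `FunctionMorphism` may differ, and the scorer here is a stand-in, not BM25):

```python
class Pipe:
    """Minimal morphism-style composition: (f >> g)(x) == g(f(x))."""
    def __init__(self, fn):
        self.fn = fn
    def __rshift__(self, other):
        return Pipe(lambda x: other.fn(self.fn(x)))
    def __call__(self, x):
        return self.fn(x)

preprocess = Pipe(lambda text: text.lower().split())
score = Pipe(lambda tokens: {t: len(t) for t in tokens})  # stand-in scorer
rank = Pipe(lambda scores: sorted(scores, key=scores.get, reverse=True))

pipeline = preprocess >> score >> rank
pipeline("Category Theory Rocks")  # tokens ranked by the stand-in score
```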

What it doesn't provide:

  • Performance improvements (those come from NumPy/sparse matrices)
  • Novel algorithms (BM25 is BM25)
  • Runtime machinery (it's just well-organized code)

The honest summary: category theory is a design vocabulary, not a runtime mechanism.

Development

# Clone repository
git clone https://github.com/aiexplorations/vajra_bm25.git
cd vajra_bm25

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run with coverage
pytest --cov=vajra_bm25 --cov-report=html

License

MIT License - see LICENSE for details.

Acknowledgments

  • BM25 algorithm: Robertson & Zaragoza, "The Probabilistic Relevance Framework"
  • Category theory foundations: Rutten, "Universal Coalgebra: A Theory of Systems"
  • Inspired by the State Dynamic Modeling project
