
pymuvera — MUVERA + EGGROLL: Fixed Dimensional Encodings for Multi-Vector Retrieval

Sub-linear ANN retrieval for ColBERT, ColPali, and ColQwen2.


A pure-Python port of Google's graph-mining MUVERA implementation, extended with low-rank SimHash factorisation inspired by the EGGROLL paper (Sarkar et al., 2025).

References

  • MUVERA: Dhulipala et al., 2024
  • EGGROLL: Sarkar et al., 2025
  • Original C++ implementation: google/graph-mining

What this library adds beyond the original paper

The MUVERA paper uses a full-rank Gaussian matrix for SimHash partitioning — a fresh random (d × k) draw for each repetition. This library adds LOW_RANK_GAUSSIAN, a new projection mode that factors the SimHash matrix as AB⊤ (where A ∈ ℝ^{d×r}, B ∈ ℝ^{k×r}, r ≪ k), cutting partition compute from O(N·d·k) to O(N·d·r + N·r·k).

The theoretical backing comes from EGGROLL (Sarkar et al., 2025, Theorem 4): the low-rank sign pattern converges to the full-rank Gaussian sign pattern at O(r⁻¹) — faster than the standard CLT rate of O(r⁻¹/²) — because symmetry cancels all odd cumulants in the Edgeworth expansion. At r=4 with ColQwen2 (d=128, k=8) that is ~1.9× faster partition assignment with only ~25% variance increase.
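A minimal numpy sketch of the factorisation (illustrative only; the variable names below are not this library's API):

import numpy as np

rng = np.random.default_rng(0)
N, d, k, r = 1000, 128, 8, 4          # tokens, embedding dim, SimHash bits, rank

X = rng.standard_normal((N, d)).astype(np.float32)

# Full-rank SimHash: one (d, k) Gaussian draw, O(N*d*k) to assign partitions
G = rng.standard_normal((d, k))
bits_full = (X @ G) > 0

# Low-rank variant: factor the matrix as A @ B.T with r << k.
# Computing (X @ A) first, then multiplying by B.T, costs O(N*d*r + N*r*k).
A = rng.standard_normal((d, r))
B = rng.standard_normal((k, r))
bits_low = ((X @ A) @ B.T) > 0

# Each row of k sign bits selects one of 2**k partitions
partition_ids = bits_low.astype(int) @ (1 << np.arange(k))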


What is MUVERA?

Late-interaction retrieval models like ColBERT, ColPali, and ColQwen2 represent each query and document as a variable-length set of token embeddings rather than a single vector. Scoring two sets requires the computationally expensive MaxSim (Chamfer Similarity) operation:

Chamfer(Q, D) = Σ_{q ∈ Q} max_{d ∈ D} cos(q, d)

This makes large-scale ANN retrieval impractical with standard indexes.
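For reference, the exact score is easy to compute by brute force (a standalone numpy sketch; the function name is ours, not part of this library):

import numpy as np

def chamfer(Q: np.ndarray, D: np.ndarray) -> float:
    """Exact Chamfer similarity: best-matching document token per query
    token, summed. Assumes rows are L2-normalised so the dot product
    equals cosine similarity."""
    sims = Q @ D.T                        # (num_q_tokens, num_d_tokens)
    return float(sims.max(axis=1).sum())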

MUVERA solves this by converting each multi-vector set into a single fixed-dimensional vector (FDE) such that:

fde_query(Q) · fde_doc(D)  ≈  Chamfer(Q, D)

Standard ANN libraries (FAISS, ScaNN, OpenSearch k-NN) can then index FDE vectors directly, restoring sub-linear retrieval for late-interaction models.


Installation

pip install pymuvera

Requires Python ≥ 3.12, NumPy ≥ 1.24, Pydantic ≥ 2.0.


Quick start

import numpy as np
from muvera_fde import MUVERAEncoder

# One encoder instance for both queries and documents — seed must match
enc = MUVERAEncoder(
    dimension=128,              # ColBERT / ColQwen2 token embedding dimension
    num_simhash_projections=4,  # 2^4 = 16 partitions per repetition
    num_repetitions=2,          # 2 independent repetitions
    seed=42,
)

print(enc)
# MUVERAEncoder(dimension=128, num_simhash_projections=4, num_repetitions=2,
#               projection_type=DEFAULT_IDENTITY, fde_dimension=4096)

query_tokens = np.random.randn(32,  128).astype(np.float32)   # 32 query tokens
doc_tokens   = np.random.randn(512, 128).astype(np.float32)   # 512 document tokens

q_fde = enc.encode_query(query_tokens)    # shape: (4096,)
d_fde = enc.encode_document(doc_tokens)   # shape: (4096,)

# Approximate Chamfer Similarity — drop into any ANN index as a float32 vector
score = float(q_fde @ d_fde)

API reference

MUVERAEncoder

The primary entry point. Initialise once and reuse for all queries and documents — the random partition structure (SimHash matrices, Count Sketch parameters) must be identical on both sides.

MUVERAEncoder(
    dimension: int = 128,
    num_simhash_projections: int = 4,
    num_repetitions: int = 1,
    seed: int = 1,
    projection_type: ProjectionType = ProjectionType.DEFAULT_IDENTITY,
    projection_dimension: int | None = None,
    simhash_rank: int = 1,
    fill_empty_partitions: bool = False,
    final_projection_dimension: int | None = None,
)
dimension (default 128)
    Token embedding dimension.
num_simhash_projections (default 4)
    SimHash bits k; partitions per repetition = 2^k.
num_repetitions (default 1)
    Number of independent repetitions; more repetitions give a better approximation.
seed (default 1)
    Shared RNG seed — must match on the query and document sides.
projection_type (default DEFAULT_IDENTITY)
    One of DEFAULT_IDENTITY, AMS_SKETCH (Count Sketch on token embeddings), or LOW_RANK_GAUSSIAN (low-rank factored SimHash).
projection_dimension (default None)
    Target dimension after Count Sketch; required for AMS_SKETCH.
simhash_rank (default 1)
    Rank r for LOW_RANK_GAUSSIAN; must satisfy 1 ≤ r < num_simhash_projections. r=4 is a practical sweet spot for ColQwen2 (d=128, k ≥ 8).
fill_empty_partitions (default False)
    Document side only: fill empty partition slots via Hamming-nearest-neighbour.
final_projection_dimension (default None)
    Post-accumulation Count Sketch compression of the final vector.

Property: fde_dimension — output vector length.
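For DEFAULT_IDENTITY this works out to num_repetitions × 2^num_simhash_projections × dimension, which a quick check against the quick-start encoder confirms:

enc = MUVERAEncoder(dimension=128, num_simhash_projections=4, num_repetitions=2)
assert enc.fde_dimension == 2 * (2 ** 4) * 128   # 2 reps × 16 partitions × 128 = 4096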


Encoding single inputs

enc = MUVERAEncoder(dimension=128, num_simhash_projections=4, num_repetitions=2)

# Query: SUM aggregation — token embeddings summed into their SimHash partition
q_fde = enc.encode_query(query_tokens)    # (num_tokens, 128) → (fde_dim,)

# Document: AVERAGE aggregation — centroid of tokens per partition
d_fde = enc.encode_document(doc_tokens)   # (num_tokens, 128) → (fde_dim,)

# Both also accept flat 1-D input (num_tokens * dimension,)
q_fde = enc.encode_query(query_tokens.flatten())

Batch encoding

queries   = [np.random.randn(32,  128).astype(np.float32) for _ in range(100)]
documents = [np.random.randn(512, 128).astype(np.float32) for _ in range(1000)]

Q = enc.encode_queries_batch(queries)     # shape: (100,  fde_dimension)
D = enc.encode_documents_batch(documents) # shape: (1000, fde_dimension)

# All-pairs approximate Chamfer Similarities in one matmul
scores = Q @ D.T   # shape: (100, 1000)
top_k  = np.argsort(scores, axis=1)[:, ::-1][:, :10]  # top-10 per query

Reducing FDE size

Two orthogonal compression knobs:

Option A — per-partition Count Sketch (reduces width before accumulation):

from muvera_fde import ProjectionType

enc = MUVERAEncoder(
    dimension=128,
    num_simhash_projections=4,
    num_repetitions=4,
    projection_type=ProjectionType.AMS_SKETCH,
    projection_dimension=32,   # 128 → 32 per partition slot
)
# fde_dimension = 4 reps × 16 partitions × 32 = 2048  (vs 8192 without)

Option B — post-accumulation Count Sketch (compresses the final vector):

enc = MUVERAEncoder(
    dimension=128,
    num_simhash_projections=4,
    num_repetitions=4,
    final_projection_dimension=512,   # 8192 → 512
)
# fde_dimension = 512

Both preserve dot products in expectation: E[⟨sketch(x), sketch(y)⟩] = ⟨x, y⟩.
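A quick empirical check of that unbiasedness property (a standalone sketch of a plain Count Sketch, not this library's internals):

import numpy as np

rng = np.random.default_rng(0)
d, m, trials = 128, 32, 2000             # input dim, sketch dim, Monte Carlo trials
x, y = rng.standard_normal(d), rng.standard_normal(d)

estimates = []
for _ in range(trials):
    h = rng.integers(0, m, size=d)       # hash each coordinate to one of m buckets
    s = rng.choice([-1.0, 1.0], size=d)  # independent random signs
    S = np.zeros((m, d))
    S[h, np.arange(d)] = s               # Count Sketch matrix: one ±1 per column
    estimates.append((S @ x) @ (S @ y))

print(np.mean(estimates), x @ y)         # sample mean ≈ true inner product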


Low-rank SimHash — faster partition assignment (EGGROLL)

Replaces the full (d × k) SimHash matrix with two smaller factors A ∈ ℝ^{d×r} and B ∈ ℝ^{k×r}, so the partition cost drops from O(N × d × k) to O(N × d × r + N × r × k).

from muvera_fde import ProjectionType

enc = MUVERAEncoder(
    dimension=128,
    num_simhash_projections=8,   # 2^8 = 256 partitions
    num_repetitions=4,
    projection_type=ProjectionType.LOW_RANK_GAUSSIAN,
    simhash_rank=4,              # r=4; cost: O(N×128×4 + N×4×8) = O(544N) vs O(1024N)
    seed=42,
)
# fde_dimension = 4 × 256 × 128 = 131072 (same formula as DEFAULT_IDENTITY)

q_fde = enc.encode_query(query_tokens)
d_fde = enc.encode_document(doc_tokens)
score = float(q_fde @ d_fde)

Convergence guarantee (EGGROLL, Sarkar et al. 2025, Theorem 4): the low-rank sign pattern converges to the full-rank Gaussian sign pattern at rate O(r⁻¹) — faster than the standard CLT rate O(r⁻¹/²) because symmetry cancels all odd cumulants in the Edgeworth expansion.

Practical targets for ColQwen2 (d=128):

simhash_rank | Variance vs full-rank | SimHash cost vs full-rank (k=8)
1            | ~100% increase        | 136N vs 1024N — 7.5× faster
4            | ~25% increase         | 544N vs 1024N — 1.9× faster
8            | ~12% increase         | 1088N vs 1024N — break-even

These figures track the O(r⁻¹) rate: doubling r roughly halves the excess variance.

Note: Sign assignments are scale-invariant (sign(αx) = sign(x) for any α > 0), so the 1/√r normalisation common in low-rank approximations is omitted — it has no effect on partition assignments.
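The invariance is easy to confirm directly (standalone sketch):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
A = rng.standard_normal((128, 4))      # rank r = 4 factors
B = rng.standard_normal((8, 4))
z = (x @ A) @ B.T
# Any positive scale factor (e.g. 1/sqrt(r)) leaves the sign bits unchanged
assert np.array_equal(np.sign(z), np.sign(z / np.sqrt(4)))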


Filling empty partition slots

With few document tokens and many partitions (large k), many slots will be empty (all-zero). Enabling fill_empty_partitions copies the projection of the nearest token by SimHash Hamming distance into each empty slot, improving recall for short documents:

enc = MUVERAEncoder(
    dimension=128,
    num_simhash_projections=4,
    num_repetitions=2,
    fill_empty_partitions=True,   # document side only; queries ignore this flag
)

short_doc_tokens = np.random.randn(8, 128).astype(np.float32)
d_fde = enc.encode_document(short_doc_tokens)   # no all-zero partition blocks
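Conceptually the fill step behaves like the following standalone sketch (illustrative; not the library's actual code): for each empty slot, find the token whose SimHash bit pattern is nearest in Hamming distance and copy its contribution in.

import numpy as np

def fill_empty(blocks: np.ndarray, token_bits: np.ndarray, tokens: np.ndarray) -> np.ndarray:
    """blocks: (2**k, d) per-partition accumulators; token_bits: (n, k) SimHash
    bits per token; tokens: (n, d) token embeddings. Fills all-zero rows from
    the Hamming-nearest token."""
    k = token_bits.shape[1]
    for p in range(blocks.shape[0]):
        if not blocks[p].any():                         # empty partition slot
            p_bits = (p >> np.arange(k)) & 1            # partition id -> bit pattern
            dists = (token_bits != p_bits).sum(axis=1)  # Hamming distance per token
            blocks[p] = tokens[np.argmin(dists)]        # copy the nearest token in
    return blocks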

Low-level functional API

Bypass the encoder class entirely when you need to manage parameters manually (e.g. distributed indexing where workers share pre-built parameters):

from muvera_fde import FDEConfig, generate_query_fde, generate_document_fde

config = FDEConfig(
    dimension=128,
    num_repetitions=2,
    num_simhash_projections=4,
    seed=42,
)

q_fde = generate_query_fde(query_tokens, config)
d_fde = generate_document_fde(doc_tokens, config)

# Pass pre-built RepParams to skip RNG sampling on every call
enc = MUVERAEncoder(dimension=128, num_repetitions=2, num_simhash_projections=4, seed=42)
q_fde = generate_query_fde(query_tokens, config, enc._rep_params)

FDEConfig serialization

FDEConfig is a frozen Pydantic model — save it alongside your ANN index so the encoder configuration is always recoverable:

import json
from muvera_fde import FDEConfig

config = FDEConfig(dimension=128, num_repetitions=4, num_simhash_projections=4, seed=42)

# Save
with open("fde_config.json", "w") as f:
    json.dump(config.model_dump(), f)

# Load
with open("fde_config.json") as f:
    config2 = FDEConfig(**json.load(f))

assert config == config2

Two-stage retrieval pipeline

The intended production pattern for ColQwen2 / ColBERT:

Offline:
  doc token embeddings  →  encode_document()  →  FDE vector  →  ANN index

Online:
  query token embeddings  →  encode_query()  →  FDE vector
                                                     │
                                              ANN search (fast, sub-linear)
                                                     │
                                            top-K candidate docs
                                                     │
                                       MaxSim re-rank on raw token embeddings
                                                     │
                                               final top-K results

Stage 1 (ANN on FDE vectors) cheaply eliminates 99%+ of the corpus. Stage 2 (exact MaxSim on raw token embeddings) re-ranks the small candidate set for full accuracy.

Minimal FAISS integration

import faiss
import numpy as np
from muvera_fde import MUVERAEncoder

enc = MUVERAEncoder(dimension=128, num_simhash_projections=4, num_repetitions=2, seed=42)
dim = enc.fde_dimension  # 4096

# Build index
index = faiss.IndexFlatIP(dim)   # inner product ≈ Chamfer Similarity

# Index documents (offline)
doc_embeddings = [...]   # list of (num_tokens, 128) float32 arrays
D = enc.encode_documents_batch(doc_embeddings)   # (N, 4096)
faiss.normalize_L2(D)    # optional: rank by cosine of FDEs rather than the raw inner product
index.add(D)

# Query (online)
query_tokens = np.random.randn(32, 128).astype(np.float32)
q_fde = enc.encode_query(query_tokens).reshape(1, -1)
faiss.normalize_L2(q_fde)   # keep query-side normalisation consistent with the index

_, candidate_ids = index.search(q_fde, k=100)   # stage 1: fast ANN
# stage 2: MaxSim re-rank candidate_ids with raw token embeddings ...
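Stage 2 can be a brute-force exact MaxSim over the shortlist. A minimal sketch continuing the example above, assuming the raw token embeddings in doc_embeddings are still at hand:

# Stage 2: exact Chamfer re-rank of the ANN shortlist (small, so brute force is fine)
def chamfer(Q, D):
    return float((Q @ D.T).max(axis=1).sum())

candidates = candidate_ids[0]                      # shortlist for the single query
exact = np.array([chamfer(query_tokens, doc_embeddings[i]) for i in candidates])
reranked = candidates[np.argsort(exact)[::-1]]     # best-first final ranking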

Attribution

Python port of the C++ implementation in Google's graph-mining project, licensed under Apache 2.0.

Low-rank SimHash extension inspired by EGGROLL: Evolution Strategies at the Hyperscale (Sarkar et al., 2025).

See NOTICE for the full upstream attribution.


License

Apache 2.0 — see LICENSE.
