
High-throughput BGE-M3 inference engine with dense + sparse embeddings

Project description

m3serve


Lightweight async inference engine for BAAI/bge-m3 that returns dense and sparse embeddings in a single call — enabling hybrid retrieval without the overhead of a full LLM framework.

Install

pip install m3serve

Usage

import asyncio

from m3serve import Engine

async def main():
    engine = Engine(model_name="BAAI/bge-m3", use_fp16=True)
    await engine.start()

    result = await engine.embed(["hello world"], return_sparse=True)
    # result.dense          -> list[list[float]]  (1024-dim)
    # result.sparse_indices -> list[list[int]]    (token ids with non-zero weight)
    # result.sparse_weights -> list[list[float]]  (corresponding weights)

    await engine.stop()

asyncio.run(main())
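
Each element of sparse_indices pairs positionally with the element of sparse_weights for the same document. For hybrid retrieval it is often convenient to fold the two into one mapping per document; a minimal helper (illustrative only, to_sparse_dicts is not part of the m3serve API) might look like:

def to_sparse_dicts(result):
    # One {token_id: weight} dict per input text, built by zipping each
    # document's token ids with its corresponding weights.
    return [
        dict(zip(ids, weights))
        for ids, weights in zip(result.sparse_indices, result.sparse_weights)
    ]

sparse_docs = to_sparse_dicts(result)  # feed into a lexical/sparse index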

How it works

Three background threads run in a pipeline so the GPU is never idle waiting for tokenisation or post-processing:

Thread 1  encode_pre    tokenise on CPU          ──►
Thread 2  encode_core   GPU forward pass         ──►
Thread 3  encode_post   convert to Python lists  ──►  resolved Future

Incoming requests are queued and batched by token length (shorter sequences first) to minimise padding waste. Each embed() call is a coroutine that returns as soon as its batch is processed — no polling, no callbacks.
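
The coalescing behaviour can be pictured with a small standalone sketch. The class below is illustrative only (Coalescer, submit, and batch_fn are hypothetical names, not m3serve internals), but it shows the same pattern: the first request opens a window, concurrent requests accumulate, the batch is sorted by length, and every caller's Future resolves when the batch completes.

import asyncio

class Coalescer:
    def __init__(self, batch_fn, batch_delay=0.005, max_batch_size=256):
        self.batch_fn = batch_fn              # processes a whole list at once
        self.batch_delay = batch_delay        # coalescing window in seconds
        self.max_batch_size = max_batch_size
        self.queue = asyncio.Queue()
        self._worker = asyncio.create_task(self._run())  # needs a running loop

    async def submit(self, item):
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((item, fut))
        return await fut                      # caller suspends; no polling

    async def _run(self):
        while True:
            pairs = [await self.queue.get()]          # block for first item
            await asyncio.sleep(self.batch_delay)     # let peers accumulate
            while not self.queue.empty() and len(pairs) < self.max_batch_size:
                pairs.append(self.queue.get_nowait())
            pairs.sort(key=lambda p: len(p[0]))       # shorter first: less padding
            results = self.batch_fn([item for item, _ in pairs])
            for (_, fut), res in zip(pairs, results):
                fut.set_result(res)

async def demo():
    c = Coalescer(lambda texts: [t.upper() for t in texts])
    print(await asyncio.gather(*(c.submit(s) for s in ["bge", "m3", "hello"])))

asyncio.run(demo())  # ['BGE', 'M3', 'HELLO']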

Options

Parameter        Default          Description
model_name       "BAAI/bge-m3"    Any bge-m3-compatible model
device           auto-detected    "cuda:0", "mps", or "cpu"
use_fp16         True             Half-precision inference (ignored on CPU)
torch_compile    False            torch.compile the backbone (CUDA only; adds warmup)
max_batch_size   256              Maximum sequences per GPU batch
batch_delay      0.005            Coalescing window in seconds: the engine sleeps after the
                                  first item arrives so concurrent requests can accumulate.
                                  Set to roughly half the GPU inference time for your batch size.
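
For example, assuming the constructor accepts these options as keyword arguments (the values below are illustrative, not tuned recommendations):

engine = Engine(
    model_name="BAAI/bge-m3",
    device="cuda:0",        # or "mps" / "cpu"
    use_fp16=True,          # ignored on CPU
    torch_compile=False,    # CUDA only; adds warmup time
    max_batch_size=128,
    batch_delay=0.003,      # aim for ~half the GPU inference time per batch
)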

Download files


Source Distribution

m3serve-0.1.3.tar.gz (6.2 kB)

Uploaded Source

Built Distribution


m3serve-0.1.3-py3-none-any.whl (8.1 kB)

Uploaded Python 3

File details

Details for the file m3serve-0.1.3.tar.gz.

File metadata

  • Download URL: m3serve-0.1.3.tar.gz
  • Upload date:
  • Size: 6.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.3.tar.gz
Algorithm     Hash digest
SHA256        9367b625f09c2540e912635b505cadd62943daf2b25d21eda12cd85b40670797
MD5           7f479ca4eb1d784ec843cd79474a7c8c
BLAKE2b-256   b70697a8e66003952fb624338511b9ddd6c42aa06608c48250a03cc7ca949461


Provenance

The following attestation bundles were made for m3serve-0.1.3.tar.gz:

Publisher: publish.yml on MauroCE/m3serve

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file m3serve-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: m3serve-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 8.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.3-py3-none-any.whl
Algorithm     Hash digest
SHA256        5b7396c3891d65c6fbb6f90efa3e27cf438888cd2e6205b2fe6b09e9007dfe31
MD5           7b67363018650b67e7b1dc6b7ed23099
BLAKE2b-256   5788c4a1f61ab79034038da3b76ed4ab7f47cb069bf098387ef6ce0537958fba


Provenance

The following attestation bundles were made for m3serve-0.1.3-py3-none-any.whl:

Publisher: publish.yml on MauroCE/m3serve

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
