
High-throughput BGE-M3 inference engine with dense + sparse embeddings

Project description

m3serve


Lightweight async inference engine for BAAI/bge-m3 that returns dense and sparse embeddings in a single call — enabling hybrid retrieval without the overhead of a full LLM framework.

Install

pip install m3serve

Usage

import asyncio

from m3serve import Engine

async def main() -> None:
    engine = Engine(model_name="BAAI/bge-m3", use_fp16=True)
    await engine.start()

    result = await engine.embed(["hello world"], return_sparse=True)
    # result.dense            -> list[list[float]]  (1024-dim)
    # result.sparse_indices   -> list[list[int]]    (token ids with non-zero weight)
    # result.sparse_weights   -> list[list[float]]  (corresponding weights)

    await engine.stop()

asyncio.run(main())
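
The sparse output pairs naturally into per-text {token_id: weight} maps for the lexical side of hybrid retrieval. A minimal sketch, continuing from the result object above:

# One {token_id: weight} dict per input text (result as in the example above).
sparse_vecs = [
    dict(zip(ids, weights))
    for ids, weights in zip(result.sparse_indices, result.sparse_weights)
]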

How it works

Three background threads run in a pipeline so the GPU is never idle waiting for tokenisation or post-processing:

Thread 1  encode_pre    tokenise on CPU           ──►
Thread 2  encode_core   GPU forward pass          ──►
Thread 3  encode_post   convert to Python lists   ──►  resolved Future
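
The pattern is a classic staged pipeline over thread-safe queues. A minimal, self-contained sketch of the idea (not m3serve's actual source; tokenise and forward are hypothetical stand-ins for the real stages):

import queue
import threading
from concurrent.futures import Future

# Hypothetical stand-ins for the real stages (illustration only).
def tokenise(text: str) -> list[int]:
    return [ord(c) for c in text]

def forward(tokens: list[int]) -> list[float]:
    return [t / 1000.0 for t in tokens]

pre_q: queue.Queue = queue.Queue()
core_q: queue.Queue = queue.Queue()
post_q: queue.Queue = queue.Queue()

def encode_pre():   # Thread 1: CPU tokenisation
    while True:
        fut, text = pre_q.get()
        core_q.put((fut, tokenise(text)))

def encode_core():  # Thread 2: forward pass (GPU in the real engine)
    while True:
        fut, tokens = core_q.get()
        post_q.put((fut, forward(tokens)))

def encode_post():  # Thread 3: resolve the caller's Future
    while True:
        fut, emb = post_q.get()
        fut.set_result(emb)

for fn in (encode_pre, encode_core, encode_post):
    threading.Thread(target=fn, daemon=True).start()

fut: Future = Future()
pre_q.put((fut, "hello world"))
print(fut.result()[:3])  # each stage ran on its own thread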

Incoming requests are queued and batched by token length (shorter sequences first) to minimise padding waste. Each embed() call is a coroutine that returns as soon as its batch is processed — no polling, no callbacks.
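
Concretely, concurrent callers just await their own embed() coroutine; calls that land within the same coalescing window are merged into one GPU batch. A sketch using the API shown above:

import asyncio

from m3serve import Engine

async def main() -> None:
    engine = Engine()  # defaults as documented below
    await engine.start()
    texts = [f"document {i}" for i in range(32)]
    # 32 concurrent embed() calls: those arriving within the coalescing
    # window are merged into a single GPU batch, and each coroutine
    # resumes as soon as its own results are ready.
    results = await asyncio.gather(*(engine.embed([t]) for t in texts))
    print(len(results))  # 32
    await engine.stop()

asyncio.run(main())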

Options

Parameter           Default         Description
model_name          "BAAI/bge-m3"   Any bge-m3 compatible model.
device              auto-detected   "cuda:0", "mps", or "cpu".
use_fp16            True            Half-precision inference (ignored on CPU).
torch_compile       False           torch.compile the backbone (CUDA only; adds warmup time).
max_batch_size      256             Maximum sequences per GPU batch.
batch_delay         0.005           Coalescing window in seconds: after the first item arrives, the engine sleeps to let concurrent requests accumulate. Set to roughly half the GPU inference time for your batch size.
tokenizer_threads   4               Threads dedicated to tokenisation (token_lengths). Each thread holds its own tokenizer copy; all are pre-warmed at start(), so no cold deepcopy happens during serving.
max_length          8192            Maximum token length per sequence; longer inputs are truncated. Lower values reduce memory usage and improve throughput for short-text workloads.
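
For example, a configuration tuned for a short-text workload on a single CUDA GPU might look like this (parameter names from the table above; values are illustrative, not recommendations):

from m3serve import Engine

engine = Engine(
    model_name="BAAI/bge-m3",
    device="cuda:0",       # or "mps" / "cpu"
    use_fp16=True,
    torch_compile=False,
    max_batch_size=256,
    batch_delay=0.005,     # 5 ms coalescing window
    tokenizer_threads=4,
    max_length=512,        # short-text workload: truncate early
)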

Tuning batch_delay

When the queue goes from empty to non-empty, the preprocess thread sleeps for batch_delay seconds before consuming it. Any requests that arrive during that window get merged into the same GPU batch.
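
In sketch form, the consuming side of that window looks roughly like this (an illustration of the described behaviour, not the engine's source; names are hypothetical):

import queue
import time

def take_batch(q: queue.Queue, batch_delay: float, max_batch_size: int) -> list:
    first = q.get()            # block until the queue becomes non-empty
    time.sleep(batch_delay)    # coalescing window: let concurrent requests land
    batch = [first]
    while len(batch) < max_batch_size:
        try:
            batch.append(q.get_nowait())  # drain what arrived in the window
        except queue.Empty:
            break
    return batch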

  • Low concurrency / latency-sensitive: use Engine(batch_delay=0). At c=1 the window is wasted because there is no one else to wait for.
  • High concurrency / throughput-focused: keep the default (0.005). Concurrent requests coalesce into larger batches, amortising the GPU's fixed per-forward-pass cost.

A good starting value is roughly half your typical GPU inference time. This heuristic is also used by Infinity-emb and mirrors Triton's max_queue_delay_microseconds.
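
As a concrete, hypothetical example: if a full max_batch_size forward pass takes about 10 ms on your GPU, half of that gives the default:

from m3serve import Engine

gpu_forward_ms = 10.0  # measured on your own hardware, not a given
engine = Engine(batch_delay=(gpu_forward_ms / 2) / 1000)  # 0.005 s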

Limitations

Single model, single GPU. m3serve runs one bge-m3 instance on one GPU. There is no replica support or multi-GPU sharding.

Coalescing window adds latency at low concurrency. The default batch_delay=0.005 sleeps 5 ms after the first request arrives to let concurrent requests accumulate into a larger batch. At c=1 this sleep is always wasted, adding ~5 ms to every request. Use Engine(batch_delay=0) for single-client or latency-sensitive workloads.

p99 latency can be spiky at medium concurrency. A request that just misses a coalescing window must wait for the next cycle. In practice this means p99 can be 5-10x higher than p50 at moderate concurrency levels (e.g. c=4 to c=8). If your workload has strict p99 SLAs, benchmark under your expected traffic pattern before deploying.

bge-m3 only. The engine is purpose-built for BAAI/bge-m3 and models with the same three-stage encode interface. It is not a general-purpose inference server.

Download files

Download the file for your platform.

Source Distribution

m3serve-0.1.6.tar.gz (7.5 kB)


Built Distribution


m3serve-0.1.6-py3-none-any.whl (9.5 kB)


File details

Details for the file m3serve-0.1.6.tar.gz.

File metadata

  • Download URL: m3serve-0.1.6.tar.gz
  • Size: 7.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.6.tar.gz
Algorithm Hash digest
SHA256 cb51569f013b6fbe85d692187893a850e365b4082741a6cde197b1c73cfc8014
MD5 e86dec5db73c6c9dfc9cfaf6f7511eae
BLAKE2b-256 eccb46bef635588e72dcbf794519cc38506cbaf8dbe201cd93acdafac1f811d0


Provenance

The following attestation bundles were made for m3serve-0.1.6.tar.gz:

Publisher: publish.yml on MauroCE/m3serve


File details

Details for the file m3serve-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: m3serve-0.1.6-py3-none-any.whl
  • Size: 9.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.6-py3-none-any.whl
Algorithm Hash digest
SHA256 27e563a0f7a3f59b163905eef1a38a9d0402ab756ddcc9c4d2a1d2496869cf33
MD5 66550e18b04ee39cf9fe89fe4e1e61f8
BLAKE2b-256 682526cfb00fc9245adaf33ead286ade5f30de0ff0bc560dbae8865d6646e2af


Provenance

The following attestation bundles were made for m3serve-0.1.6-py3-none-any.whl:

Publisher: publish.yml on MauroCE/m3serve

