
High-throughput BGE-M3 inference engine with dense + sparse embeddings

Project description

m3serve


Lightweight async inference engine for BAAI/bge-m3 that returns dense and sparse embeddings in a single call — enabling hybrid retrieval without the overhead of a full LLM framework.

Install

pip install m3serve

Usage

import asyncio

from m3serve import Engine


async def main() -> None:
    engine = Engine(model_name="BAAI/bge-m3", use_fp16=True)
    await engine.start()

    result = await engine.embed(["hello world"], return_sparse=True)
    # result.dense          -> list[list[float]]  (1024-dim)
    # result.sparse_indices -> list[list[int]]    (token ids with non-zero weight)
    # result.sparse_weights -> list[list[float]]  (corresponding weights)

    await engine.stop()


asyncio.run(main())
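
The dense and sparse outputs drop straight into hybrid retrieval. As a minimal sketch (the hybrid_score helper and the alpha weighting below are illustrative, not part of m3serve), a combined score for one query/document pair could look like:

import math

# Hypothetical helper, not part of m3serve: blend dense cosine similarity
# with sparse lexical overlap for one query/document pair.
def hybrid_score(q_dense, d_dense, q_idx, q_w, d_idx, d_w, alpha=0.7):
    dot = sum(a * b for a, b in zip(q_dense, d_dense))
    norm = math.sqrt(sum(a * a for a in q_dense)) * math.sqrt(sum(b * b for b in d_dense))
    dense = dot / norm if norm else 0.0
    # Sparse overlap: sum of weight products over token ids present in both.
    d_map = dict(zip(d_idx, d_w))
    sparse = sum(w * d_map[i] for i, w in zip(q_idx, q_w) if i in d_map)
    return alpha * dense + (1.0 - alpha) * sparse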

How it works

Three background threads run in a pipeline so the GPU is never idle waiting for tokenisation or post-processing:

Thread 1  encode_pre    tokenise on CPU           ──►
Thread 2  encode_core   GPU forward pass          ──►
Thread 3  encode_post   convert to Python lists   ──►  resolved Future
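
The pattern can be sketched with plain queue.Queue handoffs between the three stages. The stand-in work below is illustrative only, not the actual m3serve source:

import queue
import threading
import time

pre_q: queue.Queue = queue.Queue()    # raw texts         -> encode_pre
core_q: queue.Queue = queue.Queue()   # tokenised batches -> encode_core
post_q: queue.Queue = queue.Queue()   # raw model output  -> encode_post

def encode_pre():
    while True:
        texts = pre_q.get()
        core_q.put([t.split() for t in texts])                    # stand-in tokenisation

def encode_core():
    while True:
        batch = core_q.get()
        post_q.put([[len(tok) for tok in seq] for seq in batch])  # stand-in forward pass

def encode_post():
    while True:
        print(post_q.get())       # stand-in for list conversion / resolving the Future

for fn in (encode_pre, encode_core, encode_post):
    threading.Thread(target=fn, daemon=True).start()

pre_q.put(["hello world"])
time.sleep(0.1)                   # let the daemon threads drain the pipeline

Because each stage blocks only on its own input queue, the GPU stage keeps working while the other two threads handle CPU-bound steps.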

Incoming requests are queued and batched by token length (shorter sequences first) to minimise padding waste. Each embed() call is a coroutine that returns as soon as its batch is processed — no polling, no callbacks.
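
A rough sketch of that handshake, assuming a simple pending list (the names and queue layout here are assumptions, not m3serve internals): each embed() awaits a Future that a batching step later resolves.

import asyncio

pending: list[tuple[str, asyncio.Future]] = []

async def embed(text: str) -> list[float]:
    fut = asyncio.get_running_loop().create_future()
    pending.append((text, fut))
    return await fut                                      # no polling, no callbacks

async def batcher(max_batch_size: int = 256) -> None:
    await asyncio.sleep(0)                                # let callers enqueue first
    pending.sort(key=lambda item: len(item[0].split()))   # shorter sequences first
    while pending:
        batch = pending[:max_batch_size]
        del pending[:max_batch_size]
        for text, fut in batch:
            fut.set_result([float(len(text))])            # stand-in for a real embedding

async def main() -> None:
    long_vec, short_vec, _ = await asyncio.gather(
        embed("a much longer input sentence"),
        embed("short"),
        batcher(),
    )
    print(short_vec, long_vec)

asyncio.run(main())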

Options

Parameter        Default          Description
model_name       "BAAI/bge-m3"    Any bge-m3-compatible model
device           auto-detected    "cuda:0", "mps", or "cpu"
use_fp16         True             Half-precision inference (ignored on CPU)
torch_compile    False            torch.compile the backbone (CUDA only, adds warmup)
max_batch_size   256              Maximum sequences per GPU batch
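
For example, to force CPU inference (a minimal sketch using only the parameters listed above):

from m3serve import Engine

# Force CPU inference; use_fp16 is ignored on CPU, so disable it explicitly.
engine = Engine(model_name="BAAI/bge-m3", device="cpu", use_fp16=False)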

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

m3serve-0.1.2.tar.gz (5.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

m3serve-0.1.2-py3-none-any.whl (7.8 kB)

Uploaded Python 3

File details

Details for the file m3serve-0.1.2.tar.gz.

File metadata

  • Download URL: m3serve-0.1.2.tar.gz
  • Upload date:
  • Size: 5.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.2.tar.gz
Algorithm     Hash digest
SHA256        fd0918e3eba0106c7479ce08bc5d94fc0b6591e437cdc693404da61e8c0eaf0e
MD5           f4f286e7ea9d8c7622c49bfddf631c7e
BLAKE2b-256   75900f6744c4e0efde1131845443d06a54784b423fa358ab7201c53c13c665d4


Provenance

The following attestation bundles were made for m3serve-0.1.2.tar.gz:

Publisher: publish.yml on MauroCE/m3serve

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file m3serve-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: m3serve-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 7.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for m3serve-0.1.2-py3-none-any.whl
Algorithm     Hash digest
SHA256        0ce13c6dafaedfe65754fb21d2dab4fdb4d2f880e30a2e8e7f8de9104aff3303
MD5           ac3d9728cf78070de3fb7c8102a8688a
BLAKE2b-256   2b6674ab1bf73cac6400115e534d232ac5d1ab2319e947e173fad80d32473429


Provenance

The following attestation bundles were made for m3serve-0.1.2-py3-none-any.whl:

Publisher: publish.yml on MauroCE/m3serve

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
