Package for implementing service discovery in a lightweight way.

LiteRegistry

Lightweight service registry and discovery system for distributed model inference clusters. Built for deployment in HPC environments, with load balancing and automatic failover.

Installation

pip install literegistry

Quick Start

Complete workflow for deploying distributed model inference:

1. Start Redis Server

literegistry redis --port 6379

2. Launch vLLM/SGLang Instances (supports all standard vLLM/SGLang arguments)

literegistry vllm \
  --model "meta-llama/Llama-3.1-8B-Instruct" \
  --registry redis://login-node:6379 \
  --tensor-parallel-size 4

3. Start Gateway Server

literegistry gateway \
  --registry redis://login-node:6379 \
  --host 0.0.0.0 \
  --port 8080

4. Interact with Gateway

The gateway provides OpenAI-compatible HTTP endpoints that work with existing tools:

# Send completion request
curl -X POST http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello"}'

# List all available models
curl http://localhost:8080/v1/models

# Check gateway health
curl http://localhost:8080/health

The gateway automatically routes requests to the appropriate model server based on the model field.
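Because the gateway speaks the OpenAI-compatible HTTP API shown in the curl examples, any HTTP client works. The sketch below builds the same completion request with only the standard library; the endpoint path and payload shape are taken from the curl examples above, and the gateway URL is the one from step 3.

```python
import json
import urllib.request

def build_completion_request(base_url, model, prompt):
    """Build an HTTP request mirroring the curl example above.

    The gateway selects a backend server from the "model" field
    of the JSON body, so that field must always be present.
    """
    payload = json.dumps({"model": model, "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request(
    "http://localhost:8080",
    "meta-llama/Llama-3.1-8B-Instruct",
    "Hello",
)
# Once the gateway is running, sending it is one line:
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp))
```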

5. Monitor Cluster

# Summary view
literegistry summary --registry redis://login-node:6379

Using the Python API

Writing new servers

from literegistry import RegistryClient, get_kvstore
import asyncio

async def main():
    # Auto-detect backend (redis:// or file path)
    store = get_kvstore("redis://localhost:6379")
    client = RegistryClient(store, service_type="model_path")
    
    # Register a server
    await client.register(
        port=8000,
        metadata={"model_path": "meta-llama/Llama-3.1-8B-Instruct"}
    )
    
    # List available models
    models = await client.models()
    print(models)

asyncio.run(main())
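RegistryClient handles registration for you. As a mental model only (this is not literegistry's actual implementation), a registry entry is a key plus metadata with a last-seen timestamp, and entries whose servers stop checking in expire after a TTL, which is what enables automatic failover:

```python
import time

class ToyRegistry:
    """Illustrative in-memory registry: entries expire without heartbeats.

    A sketch of the general pattern, not literegistry's code.
    """
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.entries = {}  # key -> (metadata, last_seen)

    def register(self, key, metadata):
        self.entries[key] = (metadata, time.monotonic())

    def heartbeat(self, key):
        # A live server refreshes its timestamp periodically.
        meta, _ = self.entries[key]
        self.entries[key] = (meta, time.monotonic())

    def alive(self, now=None):
        # Only entries seen within the TTL count as available.
        now = time.monotonic() if now is None else now
        return {k: meta for k, (meta, seen) in self.entries.items()
                if now - seen <= self.ttl}

reg = ToyRegistry(ttl=30.0)
reg.register("node1:8000", {"model_path": "meta-llama/Llama-3.1-8B-Instruct"})
```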

HTTP Client with Automatic Failover

from literegistry import RegistryHTTPClient
import asyncio

async def main():
    # `client` is the RegistryClient from the previous example
    async with RegistryHTTPClient(client, "meta-llama/Llama-3.1-8B-Instruct") as http_client:
        result, _ = await http_client.request_with_rotation(
            "v1/completions",
            {"prompt": "Hello"},
            timeout=30,
            max_retries=3
        )

asyncio.run(main())
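The exact retry policy of request_with_rotation is not documented here, but the general rotate-on-failure pattern it names can be sketched with the standard library alone. The function and the fake `flaky_send` transport below are illustrative assumptions, not literegistry's API:

```python
def rotate_request(servers, send, max_retries=3):
    """Try servers in turn, rotating to the next on failure.

    `servers` is a list of base URLs; `send` is a callable
    (server -> response) that raises on failure. Illustrative only.
    """
    last_err = None
    for attempt in range(max_retries):
        server = servers[attempt % len(servers)]
        try:
            return send(server), server
        except Exception as err:
            last_err = err
    raise RuntimeError(f"all {max_retries} attempts failed") from last_err

calls = []
def flaky_send(server):
    # Fake transport: the first server is down, the second responds.
    calls.append(server)
    if server == "http://a:8000":
        raise ConnectionError("down")
    return {"ok": True}

result, server = rotate_request(["http://a:8000", "http://b:8000"], flaky_send)
```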

Storage Backends

LiteRegistry supports different backends depending on your deployment:

FileSystem - For single-node or shared filesystem environments

from literegistry import FileSystemKVStore
store = FileSystemKVStore("registry_data")

Use when: Running on a single machine, or when all nodes share a filesystem (common in HPC clusters with NFS). Note: can become a bottleneck under high concurrency.
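To see why a shared filesystem works as a registry backend, here is a toy key-value store (illustrative only, not FileSystemKVStore's implementation) that keeps one JSON file per key. On NFS every node sees the same directory, so a write by one server is visible to all readers, at the cost of a filesystem round-trip per operation:

```python
import json
import tempfile
from pathlib import Path

class ToyFileKVStore:
    """One JSON file per key in a shared directory (illustrative only)."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key, value):
        # Each key maps to its own file, so writers don't contend
        # on a single file -- only on the directory.
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def get(self, key):
        path = self.root / f"{key}.json"
        return json.loads(path.read_text()) if path.exists() else None

store = ToyFileKVStore(tempfile.mkdtemp())
store.put("node1", {"port": 8000})
```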

Redis - For distributed multi-node clusters

from literegistry import RedisKVStore
store = RedisKVStore("redis://localhost:6379")

Use when: Running across multiple nodes without shared storage, or when you need high-concurrency access. Recommended for production HPC deployments.
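The Quick Start's get_kvstore auto-detects which of these backends to use from its argument. A plausible sketch of that dispatch (the scheme check is an assumption about its behavior, not literegistry's code):

```python
def pick_backend(target):
    """Sketch of URL-based backend dispatch: a redis:// URL selects
    Redis, anything else is treated as a filesystem path (assumed)."""
    if target.startswith("redis://"):
        return ("redis", target)
    return ("filesystem", target)
```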

Citation

If you use LiteRegistry in your research, please cite:

@software{literegistry2025,
  title={literegistry: Lightweight Service Discovery for Distributed Model Inference},
  author={Faria, Gonçalo and Smith, Noah},
  year={2025},
  url={https://github.com/goncalorafaria/literegistry}
}

Contributing

Contributions welcome! Please submit a Pull Request.

License

MIT License - see LICENSE file for details
