Package for implementing service discovery in a lightweight way.

LiteRegistry

Lightweight service registry and discovery system for distributed model inference clusters. Built for deployment in HPC environments, with load balancing and automatic failover.

Installation

pip install literegistry

Components

Registry (Key-Value Store)

The registry stores service metadata and health information. Choose between:

  • FileSystem: Simple file-based storage for single-node setups
  • Redis: Distributed storage for multi-node HPC clusters (recommended for production)

The registry tracks which model servers are available, their endpoints, and performance metrics.

vLLM Module

Wraps vLLM servers with automatic registry integration. When you launch vLLM through LiteRegistry, it:

  • Auto-registers with the registry on startup
  • Sends heartbeats to maintain active status
  • Reports performance metrics
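
Conceptually, heartbeat-based liveness means each server periodically refreshes a timestamp in the registry, and readers treat stale entries as dead. A minimal stdlib sketch of the idea (illustrative only; `Registry`, `heartbeat`, and `alive_servers` are hypothetical names, not LiteRegistry's API):

```python
import time

class Registry:
    """Toy in-memory registry mapping server IDs to last-heartbeat times."""

    def __init__(self, ttl: float = 2.0):
        self.ttl = ttl                 # seconds before an entry counts as stale
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, server_id: str) -> None:
        """Record that a server is alive right now."""
        self.last_seen[server_id] = time.monotonic()

    def alive_servers(self) -> list[str]:
        """Return servers whose last heartbeat is within the TTL."""
        now = time.monotonic()
        return [s for s, t in self.last_seen.items() if now - t < self.ttl]

registry = Registry(ttl=0.5)
registry.heartbeat("vllm-node-1")
registry.heartbeat("vllm-node-2")
time.sleep(0.6)                        # both heartbeats go stale
registry.heartbeat("vllm-node-2")      # only node 2 refreshes
print(registry.alive_servers())        # → ['vllm-node-2']
```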

Gateway Server

HTTP reverse proxy that routes client requests to model servers. Features:

  • OpenAI-compatible API endpoints (/v1/completions, /v1/models, /classify)
  • Automatic load balancing based on server latency
  • Model routing based on the model parameter in requests
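
Latency-based balancing amounts to preferring the server with the lowest observed latency. A toy sketch of that selection rule (hypothetical names and data, not the gateway's internals):

```python
def pick_server(latencies: dict[str, float]) -> str:
    """Pick the server with the lowest recorded average latency (seconds)."""
    return min(latencies, key=latencies.get)

# Illustrative latency snapshot the gateway might hold for one model.
observed = {
    "http://node-a:8000": 0.42,
    "http://node-b:8000": 0.18,   # fastest → chosen
    "http://node-c:8000": 0.95,
}
print(pick_server(observed))      # → http://node-b:8000
```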

CLI Tool

Command-line interface for monitoring your cluster:

  • View registered models and server counts
  • Check server health and request statistics
  • Monitor latency metrics and request throughput

Client Library

Python API for programmatic interaction:

  • RegistryClient: Register servers and query available models
  • RegistryHTTPClient: Make requests with automatic failover and retry

How Components Work Together

1. vLLM servers register themselves:
   vLLM Instance → Registry (Redis/FS)
   
2. Client sends request to Gateway:
   Client → Gateway Server
   
3. Gateway queries Registry and routes to best server:
   Gateway → Registry (get available servers)
   Gateway → vLLM Instance (send request)
   
4. Gateway reports metrics back:
   Gateway → Registry (update latency/stats)

HPC Cluster Deployment

Complete workflow for deploying distributed model inference:

1. Start Redis Server

python -m literegistry.redis --port 6379

2. Launch vLLM Instances (supports all standard vLLM arguments)

python -m literegistry.vllm \
  --model "meta-llama/Llama-3.1-8B-Instruct" \
  --registry redis://login-node:6379 \
  --tensor-parallel-size 4

3. Start Gateway Server

python -m literegistry.gateway \
  --registry redis://login-node:6379 \
  --host 0.0.0.0 \
  --port 8080

4. Monitor Cluster

# Summary view
python -m literegistry.cli --mode summary --registry redis://login-node:6379

Quick Start

Basic Usage

from literegistry import RegistryClient, get_kvstore
import asyncio

async def main():
    # Auto-detect backend (redis:// or file path)
    store = get_kvstore("redis://localhost:6379")
    client = RegistryClient(store, service_type="model_path")
    
    # Register a server
    await client.register(
        port=8000,
        metadata={"model_path": "meta-llama/Llama-3.1-8B-Instruct"}
    )
    
    # List available models
    models = await client.models()
    print(models)

asyncio.run(main())

HTTP Client with Automatic Failover

from literegistry import RegistryHTTPClient

async with RegistryHTTPClient(client, "meta-llama/Llama-3.1-8B-Instruct") as http_client:
    result, _ = await http_client.request_with_rotation(
        "v1/completions",
        {"prompt": "Hello"},
        timeout=30,
        max_retries=3
    )
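
`request_with_rotation` retries failed calls while rotating across registered servers. A simplified synchronous sketch of that pattern (illustrative; `send` and the server list are stand-ins, not the library's internals):

```python
def request_with_rotation(servers, send, max_retries=3):
    """Try servers in rotation, up to max_retries attempts total.

    `send` is a callable taking a server URL; it raises on failure.
    Returns (result, server_used), or raises RuntimeError if all attempts fail.
    """
    last_error = None
    for attempt in range(max_retries):
        server = servers[attempt % len(servers)]   # rotate through the pool
        try:
            return send(server), server
        except Exception as exc:                   # failed → try the next server
            last_error = exc
    raise RuntimeError(f"all {max_retries} attempts failed") from last_error

# Simulate one dead server and one healthy one.
def fake_send(url):
    if "dead" in url:
        raise ConnectionError(url)
    return {"text": "Hello"}

result, used = request_with_rotation(["http://dead:8000", "http://ok:8000"], fake_send)
print(used)    # → http://ok:8000
```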

Storage Backends

LiteRegistry supports different backends depending on your deployment:

FileSystem - For single-node or shared filesystem environments

from literegistry import FileSystemKVStore
store = FileSystemKVStore("registry_data")

Use when: Running on a single machine or when all nodes share a filesystem (common in HPC clusters with NFS). Note: Can bottleneck with high concurrency.
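
Under the hood, a file-backed store simply maps keys to files in a directory, which is why a shared filesystem (e.g. NFS) makes it work across nodes. A minimal stdlib sketch of the idea (a toy illustration, not LiteRegistry's actual implementation):

```python
import json
import tempfile
from pathlib import Path

class ToyFileKVStore:
    """Toy key-value store: one JSON file per key inside a root directory."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def set(self, key: str, value: dict) -> None:
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def get(self, key: str) -> dict:
        return json.loads((self.root / f"{key}.json").read_text())

with tempfile.TemporaryDirectory() as tmp:
    store = ToyFileKVStore(tmp)
    store.set("server-1", {"model_path": "meta-llama/Llama-3.1-8B-Instruct"})
    print(store.get("server-1")["model_path"])   # → meta-llama/Llama-3.1-8B-Instruct
```

Because every read and write hits the filesystem, many concurrent clients contend on I/O, which is the bottleneck noted above.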

Redis - For distributed multi-node clusters

from literegistry import RedisKVStore
store = RedisKVStore("redis://localhost:6379")

Use when: Running across multiple nodes without shared storage, or when you need high-concurrency access. Recommended for production HPC deployments.

Advanced Usage

Gateway API

The gateway provides OpenAI-compatible HTTP endpoints that work with existing tools:

# Send completion request
curl -X POST http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello"}'

# List all available models
curl http://localhost:8080/v1/models

# Check gateway health
curl http://localhost:8080/health

The gateway automatically routes requests to the appropriate model server based on the model field.
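
Conceptually, that routing step pairs the request's `model` field with the registry's record of which servers serve that model. A toy sketch of the lookup (hypothetical table and model names, not the gateway's code):

```python
# Registry snapshot: model name → servers currently serving it (illustrative data).
routing_table = {
    "meta-llama/Llama-3.1-8B-Instruct": ["http://node-a:8000", "http://node-b:8000"],
    "mistralai/Mistral-7B-Instruct-v0.3": ["http://node-c:8000"],
}

def route(request: dict) -> str:
    """Return a target server for the request's model (first candidate here)."""
    candidates = routing_table[request["model"]]
    return candidates[0]   # a real gateway would apply load balancing here

req = {"model": "mistralai/Mistral-7B-Instruct-v0.3", "prompt": "Hello"}
print(route(req))          # → http://node-c:8000
```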

Batch Processing with Parallel Requests

Process multiple requests concurrently with automatic load balancing:

async with RegistryHTTPClient(client, model) as http_client:
    # Process 100 requests with max 5 concurrent
    results = await http_client.parallel_requests(
        "v1/completions",
        payloads_list,
        max_parallel_requests=5,
        timeout=30,
        max_retries=3
    )

This is useful for batch inference workloads. The client handles retry logic and server rotation automatically.
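
Bounded concurrency like `max_parallel_requests=5` is typically implemented with a semaphore. A self-contained asyncio sketch of the pattern (illustrative; `parallel_requests` here and `fake_worker` are stand-ins, not the library's code):

```python
import asyncio

async def parallel_requests(payloads, worker, max_parallel=5):
    """Run `worker` over all payloads, at most `max_parallel` at a time."""
    sem = asyncio.Semaphore(max_parallel)

    async def bounded(payload):
        async with sem:            # blocks while max_parallel tasks are in flight
            return await worker(payload)

    # gather preserves input order even though tasks finish out of order
    return await asyncio.gather(*(bounded(p) for p in payloads))

async def fake_worker(payload):
    await asyncio.sleep(0.01)      # stand-in for an HTTP call to a model server
    return {"prompt": payload, "text": "ok"}

results = asyncio.run(parallel_requests([f"p{i}" for i in range(10)], fake_worker))
print(len(results))                # → 10
```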

Contributing

Contributions welcome! Please submit a Pull Request.

License

MIT License - see LICENSE file for details

