A lightweight ("lite") package for service discovery in distributed inference clusters.
LiteRegistry
Lightweight service registry and discovery system for distributed model inference clusters. Built for deployment in HPC environments, with load balancing and automatic failover.
Installation
pip install literegistry
Components
Registry (Key-Value Store)
The registry stores service metadata and health information. Choose between:
- FileSystem: Simple file-based storage for single-node setups
- Redis: Distributed storage for multi-node HPC clusters (recommended for production)
The registry tracks which model servers are available, their endpoints, and performance metrics.
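To make the idea concrete, here is a stdlib-only sketch of what a registry entry and health check can look like. The field names and the 30-second TTL are illustrative assumptions, not LiteRegistry's actual storage schema:

```python
import time

# Illustrative sketch (not LiteRegistry's actual schema): a registry entry
# pairs a server's endpoint and metadata with a last-heartbeat timestamp,
# and the entry counts as alive only while the heartbeat is recent.
HEARTBEAT_TTL = 30  # seconds without a heartbeat before a server is stale

def make_entry(url, model_path):
    """Record a server's endpoint, model, and current heartbeat time."""
    return {
        "url": url,
        "metadata": {"model_path": model_path},
        "last_heartbeat": time.time(),
        "stats": {"requests": 0, "avg_latency_ms": 0.0},
    }

def is_alive(entry, now=None):
    """A server is available while its heartbeat is within the TTL."""
    now = time.time() if now is None else now
    return now - entry["last_heartbeat"] <= HEARTBEAT_TTL

entry = make_entry("http://node01:8000", "meta-llama/Llama-3.1-8B-Instruct")
print(is_alive(entry))                                    # fresh entry: True
print(is_alive(entry, now=entry["last_heartbeat"] + 60))  # stale entry: False
```

Servers that stop sending heartbeats simply age out, which is what makes automatic failover possible without explicit deregistration.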
vLLM Module
Wraps vLLM servers with automatic registry integration. When you launch vLLM through LiteRegistry, it:
- Auto-registers with the registry on startup
- Sends heartbeats to maintain active status
- Reports performance metrics
Gateway Server
HTTP reverse proxy that routes client requests to model servers. Features:
- OpenAI-compatible API endpoints (/v1/completions, /v1/models, /classify)
- Automatic load balancing based on server latency
- Model routing based on the model parameter in requests
CLI Tool
Command-line interface for monitoring your cluster:
- View registered models and server counts
- Check server health and request statistics
- Monitor latency metrics and request throughput
Client Library
Python API for programmatic interaction:
- RegistryClient: Register servers and query available models
- RegistryHTTPClient: Make requests with automatic failover and retry
How Components Work Together
1. vLLM servers register themselves:
vLLM Instance → Registry (Redis/FS)
2. Client sends request to Gateway:
Client → Gateway Server
3. Gateway queries Registry and routes to best server:
Gateway → Registry (get available servers)
Gateway → vLLM Instance (send request)
4. Gateway reports metrics back:
Gateway → Registry (update latency/stats)
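Step 3 boils down to a filter-then-pick decision. The following sketch shows one plausible version of it with illustrative field names; LiteRegistry's internal routing logic may differ:

```python
# Hypothetical sketch of the gateway's routing decision: keep only the
# servers hosting the requested model, then pick the one with the lowest
# observed latency. Field names are illustrative, not LiteRegistry's
# actual registry schema.
servers = [
    {"url": "http://node01:8000", "model_path": "llama-8b", "latency_ms": 120.0},
    {"url": "http://node02:8000", "model_path": "llama-8b", "latency_ms": 45.0},
    {"url": "http://node03:8000", "model_path": "llama-70b", "latency_ms": 200.0},
]

def route(servers, model):
    """Return the lowest-latency server that hosts `model`."""
    candidates = [s for s in servers if s["model_path"] == model]
    if not candidates:
        raise LookupError(f"no servers registered for {model}")
    return min(candidates, key=lambda s: s["latency_ms"])

print(route(servers, "llama-8b")["url"])  # → http://node02:8000
```

Because step 4 feeds fresh latency numbers back into the registry, this selection adapts as servers speed up, slow down, or disappear.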
HPC Cluster Deployment
Complete workflow for deploying distributed model inference:
1. Start Redis Server
literegistry redis --port 6379
2. Launch vLLM Instances (supports all standard vLLM arguments)
literegistry vllm \
--model "meta-llama/Llama-3.1-8B-Instruct" \
--registry redis://login-node:6379 \
--tensor-parallel-size 4
3. Start Gateway Server
literegistry gateway \
--registry redis://login-node:6379 \
--host 0.0.0.0 \
--port 8080
4. Monitor Cluster
# Summary view
literegistry summary --registry redis://login-node:6379
Quick Start
Basic Usage
from literegistry import RegistryClient, get_kvstore
import asyncio

async def main():
    # Auto-detect backend (redis:// or file path)
    store = get_kvstore("redis://localhost:6379")
    client = RegistryClient(store, service_type="model_path")

    # Register a server
    await client.register(
        port=8000,
        metadata={"model_path": "meta-llama/Llama-3.1-8B-Instruct"}
    )

    # List available models
    models = await client.models()
    print(models)

asyncio.run(main())
HTTP Client with Automatic Failover
from literegistry import RegistryHTTPClient

# Assumes `client` is the RegistryClient from the example above,
# and that this code runs inside an async function.
async with RegistryHTTPClient(client, "meta-llama/Llama-3.1-8B-Instruct") as http_client:
    result, _ = await http_client.request_with_rotation(
        "v1/completions",
        {"prompt": "Hello"},
        timeout=30,
        max_retries=3
    )
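The rotation pattern behind this failover can be sketched with the standard library alone. This is a simplified stand-in, not LiteRegistry's implementation: the real client makes HTTP calls, while here `send` is any callable that may raise:

```python
# Stdlib-only sketch of failover with server rotation: try each server in
# turn, moving to the next on failure, up to max_retries attempts overall.
def request_with_rotation(servers, send, max_retries=3):
    last_error = None
    for attempt in range(max_retries):
        server = servers[attempt % len(servers)]  # rotate through servers
        try:
            return send(server), server
        except Exception as exc:  # a sketch; real code would be pickier
            last_error = exc
    raise RuntimeError("all retries exhausted") from last_error

# Usage: the first server "fails", the second one answers.
def flaky_send(server):
    if server == "http://node01:8000":
        raise ConnectionError("node01 down")
    return {"text": "Hello!"}

result, server = request_with_rotation(
    ["http://node01:8000", "http://node02:8000"], flaky_send
)
print(server)  # → http://node02:8000
```

The key property is that a single dead server costs one failed attempt, not a failed request.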
Storage Backends
LiteRegistry supports different backends depending on your deployment:
FileSystem - For single-node or shared filesystem environments
from literegistry import FileSystemKVStore
store = FileSystemKVStore("registry_data")
Use when: Running on a single machine, or when all nodes share a filesystem (common in HPC clusters with NFS). Note: file-based storage can become a bottleneck under high concurrency.
Redis - For distributed multi-node clusters
from literegistry import RedisKVStore
store = RedisKVStore("redis://localhost:6379")
Use when: Running across multiple nodes without shared storage, or when you need high-concurrency access. Recommended for production HPC deployments.
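To illustrate why a shared filesystem is enough for the FileSystem backend, here is a minimal toy key-value store in the same spirit. It is a sketch, not FileSystemKVStore's actual interface: each key maps to one JSON file, so any node that mounts the same directory (e.g. over NFS) sees the same registry state:

```python
import json
import tempfile
from pathlib import Path

# Toy sketch of a file-backed KV store (not FileSystemKVStore's real API):
# one JSON file per key under a shared root directory.
class TinyFileKVStore:
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def set(self, key, value):
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def get(self, key):
        path = self.root / f"{key}.json"
        return json.loads(path.read_text()) if path.exists() else None

with tempfile.TemporaryDirectory() as d:
    store = TinyFileKVStore(d)
    store.set("server-8000", {"model_path": "llama-8b"})
    print(store.get("server-8000"))  # → {'model_path': 'llama-8b'}
```

The concurrency caveat follows directly from this design: every read and write is a filesystem operation, which Redis avoids by keeping the data in one in-memory server.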
Advanced Usage
Gateway API
The gateway provides OpenAI-compatible HTTP endpoints that work with existing tools:
# Send completion request
curl -X POST http://localhost:8080/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello"}'
# List all available models
curl http://localhost:8080/v1/models
# Check gateway health
curl http://localhost:8080/health
The gateway automatically routes requests to the appropriate model server based on the model field.
Batch Processing with Parallel Requests
Process multiple requests concurrently with automatic load balancing:
async with RegistryHTTPClient(client, model) as http_client:
    # Process the payloads with at most 5 requests in flight at once
    results = await http_client.parallel_requests(
        "v1/completions",
        payloads_list,
        max_parallel_requests=5,
        timeout=30,
        max_retries=3
    )
This is useful for batch inference workloads. The client handles retry logic and server rotation automatically.
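The bounded-concurrency pattern underneath can be sketched with asyncio alone. This is an illustration of the technique, not LiteRegistry's implementation: a semaphore caps how many requests are in flight, while asyncio.gather preserves result order; `fake_request` stands in for a real HTTP call:

```python
import asyncio

# Stdlib sketch of bounded-concurrency fan-out: at most
# max_parallel_requests coroutines hold the semaphore at any moment.
async def parallel_requests(payloads, max_parallel_requests=5):
    semaphore = asyncio.Semaphore(max_parallel_requests)

    async def fake_request(payload):
        async with semaphore:       # caps in-flight "requests"
            await asyncio.sleep(0)  # stand-in for network I/O
            return {"echo": payload}

    # gather returns results in payload order, regardless of finish order
    return await asyncio.gather(*(fake_request(p) for p in payloads))

results = asyncio.run(parallel_requests([{"prompt": f"q{i}"} for i in range(10)]))
print(len(results))  # → 10
```

Capping concurrency this way keeps a large batch from overwhelming the model servers while still saturating them with work.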
Contributing
Contributions welcome! Please submit a Pull Request.
License
MIT License - see LICENSE file for details
File details
Details for the file literegistry-1.0.1.tar.gz.
File metadata
- Download URL: literegistry-1.0.1.tar.gz
- Upload date:
- Size: 24.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4783b20fc58f337f0c2b7623c988a5b0334f9ea1e134947b537ea7d4b076554b |
| MD5 | b7d0776fc179ee6e643c706fde7f3e44 |
| BLAKE2b-256 | d67ff9af37ecdd0980eb2c16363105043b457bdb29c39f5ee95f50127871f50f |
File details
Details for the file literegistry-1.0.1-py3-none-any.whl.
File metadata
- Download URL: literegistry-1.0.1-py3-none-any.whl
- Upload date:
- Size: 28.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 56485a223f906bc18c8b13fd3787f0fc4bc208111cf706881336df8bbba918d5 |
| MD5 | 53aa7580031b974d51119d444ad42316 |
| BLAKE2b-256 | 49909cdf2536f1fb79e97c3b5d074296f746dd38f3e06543f0bc3835b3ec23d5 |