# vLLM Semantic Router

Intelligent router for Mixture-of-Models (MoM).

GitHub: https://github.com/vllm-project/semantic-router
## Quick Start

### Installation

```shell
# Install from PyPI
pip install vllm-sr

# Or install from source (development)
cd src/vllm-sr
pip install -e .
```
### Usage

```shell
# Initialize the vLLM Semantic Router configuration
vllm-sr init

# Start the router (includes the dashboard).
# HF_TOKEN is required to download the datasets used by the evaluation tests.
HF_TOKEN=hf_xxx vllm-sr serve

# Open the dashboard in a browser
vllm-sr dashboard

# View logs
vllm-sr logs router
vllm-sr logs envoy
vllm-sr logs dashboard

# Check status
vllm-sr status

# Stop the router
vllm-sr stop
```
## Features
- Router: Intelligent request routing based on intent classification
- Envoy Proxy: High-performance proxy with ext_proc integration
- Dashboard: Web UI for monitoring and testing (http://localhost:8700)
- Metrics: Prometheus metrics endpoint (http://localhost:9190/metrics)
## Endpoints

After running `vllm-sr serve`, the following endpoints are available:
| Endpoint | Port | Description |
|---|---|---|
| Dashboard | 8700 | Web UI for monitoring and Playground |
| API | 8888* | Chat completions API (configurable in config.yaml) |
| Metrics | 9190 | Prometheus metrics |
| gRPC | 50051 | Router gRPC (internal) |
| Jaeger UI | 16686 | Distributed tracing UI |
| Grafana (embedded) | 8700 | Dashboards at /embedded/grafana |
| Prometheus UI | 9090 | Metrics storage and querying |
*Default port, configurable via listeners in config.yaml
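Clients talk to the API endpoint like any OpenAI-compatible server. A minimal request sketch in Python, using only the standard library; the model name `"auto"` and the exact `/v1/chat/completions` schema are assumptions here — adjust them to your `config.yaml`:

```python
import json
import urllib.request

# Build a standard OpenAI-style chat completions request.
# Port 8888 is the default API listener from the table above.
payload = {
    "model": "auto",  # assumption: the router selects the backend model
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
}
req = urllib.request.Request(
    "http://localhost:8888/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the router is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```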
## Observability

`vllm-sr serve` automatically starts the observability stack:
- Jaeger: Distributed tracing embedded at http://localhost:8700/embedded/jaeger (also available directly at http://localhost:16686)
- Grafana: Pre-configured dashboards embedded at http://localhost:8700/embedded/grafana
- Prometheus: Metrics collection at http://localhost:9090
Note: Grafana is optimized for embedded access through the dashboard. For the best experience, use http://localhost:8700/embedded/grafana where anonymous authentication is pre-configured.
Tracing is enabled by default. Traces are visible in Jaeger under the vllm-sr service name.
## Configuration

### Plugin Configuration

The CLI supports configuring plugins on individual routing decisions. Plugins are per-decision behaviors that customize request handling (security, caching, customization, debugging).

Supported plugin types:

- semantic-cache - Cache similar requests for performance
- jailbreak - Detect and block adversarial prompts
- pii - Detect and enforce PII policies
- system_prompt - Inject custom system prompts
- header_mutation - Add/modify HTTP headers
- hallucination - Detect hallucinations in responses
- router_replay - Record routing decisions for debugging
Plugin Examples:

- semantic-cache - Cache similar requests:

  ```yaml
  plugins:
    - type: "semantic-cache"
      configuration:
        enabled: true
        similarity_threshold: 0.92  # 0.0-1.0, higher = more strict
        ttl_seconds: 3600           # Optional: cache TTL in seconds
  ```
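  The `similarity_threshold` compares an embedding of the incoming prompt against embeddings of cached prompts; a request is served from cache only when the similarity meets the threshold. A toy illustration of that decision with cosine similarity (the embedding vectors below are fabricated for the example; the router uses a real embedding model):

  ```python
  import math

  def cosine_similarity(a, b):
      """Cosine similarity between two equal-length vectors."""
      dot = sum(x * y for x, y in zip(a, b))
      norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
      return dot / norm

  THRESHOLD = 0.92  # matches similarity_threshold above

  cached = [0.1, 0.8, 0.59]     # embedding of a cached prompt (fabricated)
  incoming = [0.12, 0.79, 0.6]  # embedding of the new prompt (fabricated)

  # Near-identical prompts clear the threshold and hit the cache.
  hit = cosine_similarity(cached, incoming) >= THRESHOLD
  ```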
- jailbreak - Block adversarial prompts:

  ```yaml
  plugins:
    - type: "jailbreak"
      configuration:
        enabled: true
        threshold: 0.8  # Optional: detection sensitivity 0.0-1.0
  ```
- pii - Enforce PII policies:

  ```yaml
  plugins:
    - type: "pii"
      configuration:
        enabled: true
        threshold: 0.7  # Optional: detection sensitivity 0.0-1.0
        pii_types_allowed: ["EMAIL_ADDRESS"]  # Optional: list of allowed PII types
  ```
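  `pii_types_allowed` whitelists types that may pass through even when detected. A simplified sketch of that policy check, with regex detectors standing in for the router's actual PII model (the helper and patterns are illustrative only):

  ```python
  import re

  # Toy detectors standing in for the router's PII model.
  PII_PATTERNS = {
      "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
      "PHONE_NUMBER": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
  }

  def violates_pii_policy(text, pii_types_allowed=()):
      """Return the detected PII types that are NOT on the allow list."""
      detected = {name for name, pat in PII_PATTERNS.items() if pat.search(text)}
      return detected - set(pii_types_allowed)

  # An email alone passes when EMAIL_ADDRESS is allowed; a phone number does not.
  ```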
- system_prompt - Inject custom instructions:

  ```yaml
  plugins:
    - type: "system_prompt"
      configuration:
        enabled: true
        system_prompt: "You are a helpful assistant."
        mode: "replace"  # "replace" (default) or "insert" (prepend)
  ```
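  The two modes differ in how they treat an existing system message: `"replace"` overwrites it, while `"insert"` prepends the configured prompt to it. A sketch of that behavior on an OpenAI-style message list (a hypothetical helper, not the router's actual code):

  ```python
  def apply_system_prompt(messages, system_prompt, mode="replace"):
      """Inject a system prompt into an OpenAI-style message list."""
      messages = [dict(m) for m in messages]  # shallow copy, don't mutate input
      for m in messages:
          if m["role"] == "system":
              if mode == "replace":
                  m["content"] = system_prompt
              else:  # "insert": prepend to the existing system prompt
                  m["content"] = system_prompt + "\n" + m["content"]
              return messages
      # No system message yet: add one at the front.
      return [{"role": "system", "content": system_prompt}] + messages
  ```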
- header_mutation - Modify HTTP headers:

  ```yaml
  plugins:
    - type: "header_mutation"
      configuration:
        add:
          - name: "X-Custom-Header"
            value: "custom-value"
        update:
          - name: "User-Agent"
            value: "SemanticRouter/1.0"
        delete:
          - "X-Old-Header"
  ```
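  The three operations map directly onto a header map: `add` introduces new headers, `update` overwrites existing ones, and `delete` removes them. A sketch of that semantics (hypothetical helper; whether `add` also overwrites an existing header is an assumption — here it only sets absent keys):

  ```python
  def mutate_headers(headers, add=(), update=(), delete=()):
      """Apply header_mutation-style add/update/delete to a header dict."""
      headers = dict(headers)
      for h in add:
          headers.setdefault(h["name"], h["value"])  # add: only if absent
      for h in update:
          headers[h["name"]] = h["value"]            # update: overwrite
      for name in delete:
          headers.pop(name, None)                    # delete: drop if present
      return headers
  ```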
- hallucination - Detect hallucinations:

  ```yaml
  plugins:
    - type: "hallucination"
      configuration:
        enabled: true
        use_nli: false  # Optional: use NLI for detailed analysis
        hallucination_action: "header"  # "header", "body", or "none"
  ```
- router_replay - Record decisions for debugging:

  ```yaml
  plugins:
    - type: "router_replay"
      configuration:
        enabled: true
        max_records: 200              # Optional: max records in memory (default: 200)
        capture_request_body: false   # Optional: capture request payloads (default: false)
        capture_response_body: false  # Optional: capture response payloads (default: false)
        max_body_bytes: 4096          # Optional: max bytes to capture (default: 4096)
  ```
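  `max_records` bounds memory by keeping only the most recent decisions, and `max_body_bytes` truncates any captured payloads. A sketch of such a record buffer (hypothetical, using a `deque` as a ring buffer; not the router's actual implementation):

  ```python
  from collections import deque

  class ReplayBuffer:
      """Bounded in-memory log of routing decisions, oldest evicted first."""

      def __init__(self, max_records=200, max_body_bytes=4096):
          self.records = deque(maxlen=max_records)  # ring buffer
          self.max_body_bytes = max_body_bytes

      def record(self, decision, body=None):
          entry = {"decision": decision}
          if body is not None:
              entry["body"] = body[: self.max_body_bytes]  # truncate payload
          self.records.append(entry)

  buf = ReplayBuffer(max_records=2, max_body_bytes=8)
  for i in range(3):
      buf.record(f"decision-{i}", body=b"x" * 100)
  # Only the 2 newest records survive, each body truncated to 8 bytes.
  ```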
Validation Rules:

- type: must be one of semantic-cache, jailbreak, pii, system_prompt, header_mutation, hallucination, router_replay
- enabled: must be a boolean (required for most plugins)
- threshold / similarity_threshold: must be a float between 0.0 and 1.0
- max_records / max_body_bytes: must be a positive integer
- ttl_seconds: must be a non-negative integer
- pii_types_allowed: must be a list of strings (if provided)
- system_prompt: must be a string (if provided)
- mode: must be "replace" or "insert" (if provided)
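The rules above can be checked mechanically before handing a file to `vllm-sr validate`. A rough sketch of such a checker for a single plugin entry (illustrative only, not the CLI's actual validator):

```python
PLUGIN_TYPES = {"semantic-cache", "jailbreak", "pii", "system_prompt",
                "header_mutation", "hallucination", "router_replay"}

def validate_plugin(plugin):
    """Return a list of rule violations for one plugin entry."""
    errors = []
    if plugin.get("type") not in PLUGIN_TYPES:
        errors.append(f"unknown plugin type: {plugin.get('type')!r}")
    cfg = plugin.get("configuration", {})
    for key in ("threshold", "similarity_threshold"):
        if key in cfg and not (isinstance(cfg[key], float) and 0.0 <= cfg[key] <= 1.0):
            errors.append(f"{key} must be a float in [0.0, 1.0]")
    for key in ("max_records", "max_body_bytes"):
        if key in cfg and not (isinstance(cfg[key], int) and cfg[key] > 0):
            errors.append(f"{key} must be a positive integer")
    if "ttl_seconds" in cfg and not (isinstance(cfg["ttl_seconds"], int) and cfg["ttl_seconds"] >= 0):
        errors.append("ttl_seconds must be a non-negative integer")
    if "mode" in cfg and cfg["mode"] not in ("replace", "insert"):
        errors.append('mode must be "replace" or "insert"')
    return errors
```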
CLI Commands:

```shell
# Initialize config with plugin examples
vllm-sr init

# Validate configuration (including plugins)
vllm-sr validate config.yaml

# Generate router config with plugins
vllm-sr config router --config config.yaml
```
### File Descriptor Limits

The CLI automatically raises the file descriptor limit to 65,536 for the Envoy proxy. To customize:

```shell
export VLLM_SR_NOFILE_LIMIT=100000  # Optional (min: 8192)
vllm-sr serve
```
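The limit in question is `RLIMIT_NOFILE`. For reference, the equivalent operation in Python, clamped to the process hard limit so `setrlimit` cannot fail by exceeding it (a sketch of what such a CLI step does, not the tool's actual code):

```python
import resource

def raise_nofile_limit(target=65536):
    """Raise the soft RLIMIT_NOFILE toward target, capped at the hard limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
    if new_soft > soft:
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
```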
## License

Apache 2.0