# RabbitMQ MCP Server
Production-ready MCP server for RabbitMQ with semantic operation discovery and built-in CLI. Manage queues, exchanges, and bindings through natural language or command-line interface.
## Features

- **Semantic Discovery**: Find operations using natural language queries
- **Three MCP Tools**: `search-ids`, `get-id`, `call-id` pattern for unlimited operations
- **Essential Operations**: List, create, and delete queues, exchanges, and bindings
- **Safety First**: Built-in validations prevent data loss (queue message checks, binding validation)
- **Client-Side Pagination**: Handle 1000+ resources efficiently
- **Structured Logging**: Enterprise-grade audit logs with correlation IDs
- **High Performance**: < 100ms semantic search, < 2s list operations, < 1s CRUD operations
- **Well Tested**: 80%+ code coverage with unit, integration, and contract tests
- **OpenAPI-Driven**: All operations defined in an OpenAPI 3.0.3 specification
## Quick Start

### Using uvx (Recommended)

No installation needed! Run directly:

```bash
# List all queues
uvx rabbitmq-mcp-server queue list \
  --host localhost \
  --user guest \
  --password guest

# Create a durable queue
uvx rabbitmq-mcp-server queue create \
  --name orders-queue \
  --durable

# Create a topic exchange
uvx rabbitmq-mcp-server exchange create \
  --name order-events \
  --type topic \
  --durable

# Create a binding with wildcards
uvx rabbitmq-mcp-server binding create \
  --exchange order-events \
  --queue orders-queue \
  --routing-key "orders.*.created"
```

### Using pip

```bash
pip install rabbitmq-mcp-server
rabbitmq-mcp-server queue list --help
```

### Using uv (Development)

```bash
git clone https://github.com/guercheLE/rabbitmq-mcp-server.git
cd rabbitmq-mcp-server
uv pip install -e ".[dev]"
rabbitmq-mcp-server queue list
```
## Documentation

- **API Reference**: Complete operation documentation
- **Architecture**: System design and technical decisions
- **Examples**: Practical use cases and integration examples
- **Contributing**: How to contribute to the project
## Use Cases

### Monitoring Queue Depths

```bash
# List queues with message counts
uvx rabbitmq-mcp-server queue list --verbose --format json | \
  jq '.items[] | select(.messages > 0) | {name, messages, consumers}'
```

### Event-Driven Architecture Setup

```bash
# Create topic exchange for order events
uvx rabbitmq-mcp-server exchange create --name order-events --type topic --durable

# Create queues for different services
uvx rabbitmq-mcp-server queue create --name inventory-service --durable
uvx rabbitmq-mcp-server queue create --name shipping-service --durable

# Bind with routing patterns
uvx rabbitmq-mcp-server binding create \
  --exchange order-events \
  --queue inventory-service \
  --routing-key "orders.*.created"

uvx rabbitmq-mcp-server binding create \
  --exchange order-events \
  --queue shipping-service \
  --routing-key "orders.*.fulfilled"
```

### Dead Letter Queue Setup

```bash
# Create DLX and DLQ
uvx rabbitmq-mcp-server exchange create --name dlx --type direct --durable
uvx rabbitmq-mcp-server queue create --name dlq --durable
uvx rabbitmq-mcp-server binding create --exchange dlx --queue dlq --routing-key failed

# Create main queue with DLX
uvx rabbitmq-mcp-server queue create \
  --name orders-queue \
  --durable \
  --arguments '{
    "x-dead-letter-exchange": "dlx",
    "x-dead-letter-routing-key": "failed",
    "x-message-ttl": 300000
  }'
```
## Architecture

RabbitMQ MCP Server follows an OpenAPI-driven architecture with semantic discovery:

```
┌──────────────────────────────────────────────────┐
│               MCP Tools (3 Public)               │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  │
│  │ search-ids │  │   get-id   │  │  call-id   │  │
│  │ (< 100ms)  │  │  (schema)  │  │ (execute)  │  │
│  └────────────┘  └────────────┘  └────────────┘  │
└──────────────────────────────────────────────────┘
                        │
                        ▼
┌──────────────────────────────────────────────────┐
│           OpenAPI Operation Registry             │
│       (Generated from OpenAPI 3.0.3 Spec)        │
└──────────────────────────────────────────────────┘
                        │
                        ▼
┌──────────────────────────────────────────────────┐
│          Operation Executors (Queues,            │
│        Exchanges, Bindings) + Validators         │
└──────────────────────────────────────────────────┘
                        │
                        ▼
┌──────────────────────────────────────────────────┐
│          RabbitMQ Management API (HTTP)          │
└──────────────────────────────────────────────────┘
```

**Key Design Principles**:

- **Three-Tool Pattern**: Semantic discovery supports unlimited operations without tool explosion
- **OpenAPI as Source of Truth**: All operations defined in OpenAPI specs, code generated
- **Safety by Design**: Built-in validations prevent data loss (queue messages, bindings)
- **Performance-First**: Mandatory pagination, connection pooling, 60s vhost cache
- **Enterprise Logging**: Structured JSON logs with correlation IDs, credential sanitization
### MCP Tools

**search-ids**: Semantic search for operations using natural language

- Input: Natural language query (e.g., "list queues")
- Output: Ranked list of matching operation IDs with similarity scores
- Performance: < 100ms p95 latency
- Example: `{"query": "delete queue", "max_results": 5}`

**get-id**: Get complete operation documentation

- Input: Operation ID from search results
- Output: Full schema, parameters, examples, usage hints
- Performance: < 50ms p95 latency (O(1) lookup)
- Example: `{"operation_id": "queues.delete"}`

**call-id**: Execute RabbitMQ Management API operations

- Input: Operation ID + parameters
- Output: HTTP response with result, metadata, correlation_id
- Performance: < 200ms p95 latency for typical GET requests
- Features: Connection pooling, TLS/SSL support, error mapping
- Example: `{"operation_id": "queues.list", "parameters": {"vhost": "/"}}`

**connection.health**: Get RabbitMQ connection health status (Story 2.4)

- Input: None (no parameters required)
- Output: Health status with AMQP and HTTP connection states
- Performance: < 1 second (health checks run every 30 seconds by default)
- Fields: `amqp_connected`, `http_connected`, `consecutive_failures`, `last_check`, `latency_ms`
- Example: Returns `{"amqp_connected": true, "http_connected": true, "last_check": "2025-12-29T19:51:00Z", ...}`

**Workflow Example** (search → document → execute):

1. `search-ids`: `{"query": "list queues in vhost"}`
   → `["queues.list", "queues.get", ...]`
2. `get-id`: `{"operation_id": "queues.list"}`
   → `{schema, parameters: [{name: "vhost", required: true}], ...}`
3. `call-id`: `{"operation_id": "queues.list", "parameters": {"vhost": "/"}}`
   → `{status: "success", result: [...queues...], correlation_id: "uuid"}`
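The three-tool flow above can be illustrated with a toy in-process dispatcher. This is purely a sketch: the real server ranks matches with pre-computed vector embeddings, while the keyword overlap and handlers here are stand-ins for illustration.

```python
# Toy registry keyed by operation_id, mirroring the search -> get -> call flow
REGISTRY = {
    "queues.list": {
        "description": "list all queues in a virtual host",
        "parameters": [{"name": "vhost", "required": True}],
        "handler": lambda p: {"status": "success", "result": ["orders-queue"]},
    },
    "queues.delete": {
        "description": "delete a queue",
        "parameters": [{"name": "vhost", "required": True},
                       {"name": "name", "required": True}],
        "handler": lambda p: {"status": "success", "result": None},
    },
}

def search_ids(query: str, max_results: int = 5) -> list[str]:
    """Rank operations by naive keyword overlap (stand-in for embedding search)."""
    words = set(query.lower().split())
    scored = [
        (len(words & set(op["description"].split())), op_id)
        for op_id, op in REGISTRY.items()
    ]
    return [op_id for score, op_id in sorted(scored, reverse=True)[:max_results] if score]

def get_id(operation_id: str) -> dict:
    """O(1) schema lookup, omitting the executable handler."""
    return {k: v for k, v in REGISTRY[operation_id].items() if k != "handler"}

def call_id(operation_id: str, parameters: dict) -> dict:
    """Validate required parameters, then execute."""
    op = REGISTRY[operation_id]
    missing = [p["name"] for p in op["parameters"]
               if p["required"] and p["name"] not in parameters]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return op["handler"](parameters)

# search -> document -> execute
top = search_ids("list queues in vhost")
schema = get_id(top[0])
result = call_id(top[0], {"vhost": "/"})
```

Because operations live in a plain registry, adding a new one never adds a new MCP tool: only the registry grows.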
### Topology Operations (Story 3.1)

**queues.list**: List all queues in a virtual host with statistics

- Input Parameters:
  - `vhost` (required): Virtual host name (e.g., "/" for default)
  - `name_pattern` (optional): Regex pattern to filter queue names
  - `state` (optional): Filter by state: "running", "idle", or "flow"
  - `consumers` (optional): Filter by consumer count
  - `consumer_operator` (optional): Operator for the consumer filter: "=" (equals), ">" (greater than), "<" (less than). Default: "="
  - `sort_by` (optional): Sort field: "name", "messages", "consumers", or "memory"
  - `reverse` (optional): Sort in descending order (default: false)
  - `page` (optional): Page number, 0-indexed (default: 0)
  - `page_size` (optional): Items per page, 1-1000 (default: 100)
- Output: List of QueueDetails with:
  - Configuration: `name`, `vhost`, `durable`, `auto_delete`, `arguments`
  - Statistics: `messages`, `messages_ready`, `messages_unacknowledged`, `consumers`, `memory`
  - State: `running` (active), `idle` (no activity), `flow` (flow control active)
  - Formatted memory: Human-readable memory usage (e.g., "1.5 MB", "512 KB")
- Performance: < 200ms for up to 1000 queues
- Examples:

```
# List all queues in default vhost
{"operation_id": "queues.list", "parameters": {"vhost": "/"}}

# Filter queues by name pattern
{"operation_id": "queues.list", "parameters": {"vhost": "/", "name_pattern": "orders-.*"}}

# Sort by message count (descending)
{"operation_id": "queues.list", "parameters": {"vhost": "/", "sort_by": "messages", "reverse": true}}

# Find idle queues with no consumers
{"operation_id": "queues.list", "parameters": {"vhost": "/", "state": "idle", "consumers": 0}}

# Find queues with more than 5 consumers
{"operation_id": "queues.list", "parameters": {"vhost": "/", "consumers": 5, "consumer_operator": ">"}}

# Paginate results (page 2, 50 items per page)
{"operation_id": "queues.list", "parameters": {"vhost": "/", "page": 1, "page_size": 50}}
```

- Error Handling:
  - Non-existent vhost: Returns `ValueError: "Virtual host '{vhost}' does not exist"`
  - Empty vhost: Returns an empty list `[]` (not an error)
  - Invalid regex pattern: Returns `ValueError: "Invalid regex pattern"`
  - Invalid sort field: Returns `ValueError: "Unsupported sort field"`
- Troubleshooting:
  - "Virtual host not found": Verify the vhost exists with the `vhosts.list` operation
  - Timeout errors: Check network connectivity and RabbitMQ Management API availability
  - Slow response: Large queue counts may require pagination (use the `page_size` parameter)
**Configuration (Environment Variables)**:

```bash
# AMQP Connection (Story 2.7)
AMQP_HOST=localhost              # RabbitMQ AMQP host
AMQP_PORT=5672                   # AMQP port (5672 plain, 5671 TLS)
AMQP_USER=guest                  # AMQP username
AMQP_PASSWORD=guest              # AMQP password
AMQP_VHOST=/                     # Virtual host
AMQP_USE_TLS=false               # Enable TLS for AMQP (default: false)
AMQP_VERIFY_CERT=true            # Verify TLS certificate (default: true)
AMQP_CA_BUNDLE=/path/to/ca.pem   # Custom CA certificate bundle (optional)

# HTTP Management API Connection (Story 2.7)
HTTP_HOST=localhost              # RabbitMQ Management API host
HTTP_PORT=15672                  # HTTP port (15672 plain, 15671 TLS)
HTTP_USER=guest                  # HTTP username
HTTP_PASSWORD=guest              # HTTP password
HTTP_USE_TLS=false               # Enable TLS for HTTP (default: false)
HTTP_VERIFY_CERT=true            # Verify TLS certificate (default: true)
HTTP_CA_BUNDLE=/path/to/ca.pem   # Custom CA certificate bundle (optional)
HTTP_TIMEOUT=30                  # Request timeout (1-300s, default: 30s)
MAX_HTTP_CONNECTIONS=5           # Pool size (1-20, default: 5)

# Health Checks & Reconnection
HEALTH_CHECK_INTERVAL=30         # Health check interval (5-300s, default: 30s)
POOL_HEALTH_CHECK_INTERVAL=30    # HTTP pool health checks (5-300s, default: 30s)
RECONNECT_MAX_DELAY=60           # Max reconnection backoff (10-300s, default: 60s)

# Legacy Aliases (deprecated, use AMQP_* and HTTP_* instead)
RABBITMQ_BASE_URL=http://localhost:15672
RABBITMQ_USERNAME=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_VERIFY_SSL=true
RABBITMQ_TIMEOUT=30
RABBITMQ_POOL_SIZE=5
```
**TLS/SSL Security (Story 2.7)**:

The server supports TLS/SSL encryption for both AMQP and HTTP connections:

- **Protocol**: TLS 1.3 preferred, TLS 1.2+ minimum
- **Certificate Verification**: Enabled by default for security
- **Custom CA Certificates**: Support for custom certificate authorities
- **Self-Signed Certificates**: Allowed for development (logs a security warning)

**Production TLS Setup**:

```bash
# Enable TLS for both AMQP and HTTP
AMQP_USE_TLS=true
AMQP_PORT=5671                   # Standard AMQP TLS port
AMQP_VERIFY_CERT=true            # Verify server certificate

HTTP_USE_TLS=true
HTTP_PORT=15671                  # Standard Management API TLS port
HTTP_VERIFY_CERT=true            # Verify server certificate
```

**Custom CA Certificates** (for private/internal CAs):

```bash
# Point to your CA bundle (PEM format)
AMQP_CA_BUNDLE=/etc/ssl/certs/company-ca-bundle.pem
HTTP_CA_BUNDLE=/etc/ssl/certs/company-ca-bundle.pem
```

**Development with Self-Signed Certificates**:

```bash
# Generate a self-signed certificate for testing
openssl req -x509 -newkey rsa:4096 \
  -keyout key.pem -out cert.pem \
  -days 365 -nodes \
  -subj "/CN=localhost"

# Disable certificate verification (NOT for production!)
AMQP_USE_TLS=true
AMQP_VERIFY_CERT=false           # ⚠️ Security warning logged
HTTP_USE_TLS=true
HTTP_VERIFY_CERT=false           # ⚠️ Security warning logged
```
**Troubleshooting TLS Errors**:

- **Certificate Expired**: `SSL certificate verification failed: Certificate has expired`
  - Solution: Renew the certificate before expiration and rotate it in the RabbitMQ config
- **Hostname Mismatch**: `SSL certificate verification failed: Hostname mismatch in certificate`
  - Solution: Ensure the certificate CN/SAN matches the `AMQP_HOST`/`HTTP_HOST` value
- **Untrusted CA**: `SSL certificate verification failed: Untrusted or self-signed certificate`
  - Solution: Add the CA certificate to the system trust store, or use the `*_CA_BUNDLE` env vars
- **TLS Handshake Timeout**: `TLS handshake timeout after 30s`
  - Solution: Check network connectivity and firewall rules for the TLS ports (5671, 15671)

**Security Best Practices**:

- ✅ Always enable TLS in production (`*_USE_TLS=true`)
- ✅ Always verify certificates in production (`*_VERIFY_CERT=true`)
- ✅ Rotate certificates before expiration (monitor expiry dates)
- ✅ Use strong cipher suites (the TLS 1.3 default provides modern ciphers)
- ✅ Store passwords in environment variables, never in config files committed to Git
- ⚠️ Only disable verification for local development with self-signed certificates
- ⚠️ Monitor security warnings in logs (the `security_warning=true` field is available for alerting)
**HTTP Connection Pooling**:

The server uses connection pooling to improve performance and resource utilization:

- **Connection Reuse**: Up to 5 concurrent connections reused across requests
- **Pool Blocking**: When the pool is full, requests wait up to 10 seconds
- **Idle Timeout**: Unused connections are closed after 60 seconds
- **Health Monitoring**: Automatic detection and eviction of stale connections
- **Proactive Checks**: Background health validation every 30 seconds

Monitor pool health using the MCP tool:

```
# Via MCP protocol
{"method": "tools/call", "params": {"name": "connection.pool_status"}}

# Returns metrics:
# - active_connections, pool_size, health_score
# - evictions_last_hour, pool_exhaustion_count
# - total_requests, successful_requests, failed_requests
```

Troubleshooting pool exhaustion:

- Increase `MAX_HTTP_CONNECTIONS` (up to 20) for high-concurrency workloads
- Check for `health_score < 0.90` warnings in logs
- Review eviction events for connection issues

See [Architecture Documentation](docs/ARCHITECTURE.md) for details.
## Requirements
- **Python**: 3.12 or higher
- **RabbitMQ**: 3.8+ with Management plugin enabled
- **Docker**: Optional, for integration tests
## Supported RabbitMQ Versions
This MCP server supports multiple RabbitMQ Management API versions to ensure compatibility with different RabbitMQ installations:
- **RabbitMQ 3.11.x** (API version: 3.11)
- **RabbitMQ 3.12.x** (API version: 3.12)
- **RabbitMQ 3.13.x** (API version: 3.13) - **Default**
### Setting the API Version
The API version is controlled via the `RABBITMQ_API_VERSION` environment variable:
#### Docker Compose

```yaml
services:
  rabbitmq-mcp:
    environment:
      RABBITMQ_API_VERSION: "3.12"
```

#### Systemd Service

```ini
[Service]
Environment="RABBITMQ_API_VERSION=3.12"
```

#### Manual/CLI

```bash
export RABBITMQ_API_VERSION=3.12
rabbitmq-mcp-server queue list
```

#### .env File

```bash
# .env
RABBITMQ_API_VERSION=3.12
```

### Default Behavior

If `RABBITMQ_API_VERSION` is not set, the server defaults to version 3.13 (latest).

### Version-Specific Files

The server uses version-specific artifacts:

- OpenAPI specifications: `data/openapi-{version}.yaml` (committed to repository)
- Generated schemas: `data/schemas-{version}.py` (generated at build time)
- Operations registry: `data/operations-{version}.json` (generated at build time)
- Embeddings: `data/embeddings-{version}.json` (generated at build time)

### Changing Versions

⚠️ **Important**: API version selection happens at server startup. To change versions, you must restart the server.

### Troubleshooting

If you encounter errors related to missing OpenAPI files or artifacts:

1. Ensure the OpenAPI file exists: `data/openapi-{version}.yaml`
2. Regenerate artifacts for your version:

```bash
uv run python scripts/generate_schemas.py --api-version 3.12
uv run python scripts/extract_operations.py --api-version 3.12
uv run python scripts/generate_embeddings.py --api-version 3.12
```
## Installation Options

### Option 1: uvx (Recommended for end users)

```bash
# No installation needed, runs in an isolated environment
uvx rabbitmq-mcp-server queue list
```

### Option 2: pip (Traditional)

```bash
pip install rabbitmq-mcp-server
rabbitmq-mcp-server --version
```

### Option 3: uv (Recommended for developers)

```bash
# Clone repository
git clone https://github.com/guercheLE/rabbitmq-mcp-server.git
cd rabbitmq-mcp-server

# Install with uv
uv pip install -e ".[dev,vector]"

# Run tests
pytest --cov=src
```
## Testing

```bash
# Run all tests with coverage
pytest --cov=src --cov-report=html

# Run only unit tests (fast)
pytest tests/unit/

# Run integration tests (requires Docker)
pytest tests/integration/

# Run a specific test
pytest tests/unit/test_validation.py -v
```

**Test Coverage**: 80%+ minimum, 95%+ for critical paths
## Development Workflow

### Setting Up Your Development Environment

#### Option 1: Dev Container (Recommended)

The easiest way to get started: everything is pre-configured.

```bash
# Prerequisites: Docker Desktop + VS Code with Dev Containers extension

# 1. Clone the repository
git clone https://github.com/guercheLE/rabbitmq-mcp-server.git
cd rabbitmq-mcp-server

# 2. Open in VS Code
code .

# 3. Click "Reopen in Container" when prompted
# That's it! Python 3.12, uv, RabbitMQ, and all tools are ready to use
```

See `.devcontainer/README.md` for details.

#### Option 2: Manual Setup

```bash
# Clone the repository
git clone https://github.com/guercheLE/rabbitmq-mcp-server.git
cd rabbitmq-mcp-server

# Install dependencies with uv (recommended)
uv sync --all-extras

# Install pre-commit hooks
uv run pre-commit install

# Start RabbitMQ (via Docker)
docker run -d -p 5672:5672 -p 15672:15672 rabbitmq:3-management
```
### Code Quality Tools

This project uses automated code quality checks enforced via pre-commit hooks and CI/CD.

**Pre-commit Hooks** (run automatically on `git commit`):

- **black**: Code formatting (88 character line length)
- **isort**: Import sorting (black-compatible profile)
- **ruff**: Fast Python linting (replaces flake8/pylint)
- **mypy**: Static type checking (strict mode)

**Running Quality Checks Manually**:

```bash
# Format code with black
uv run black .

# Sort imports with isort
uv run isort .

# Run linting with ruff
uv run ruff check .

# Run type checking with mypy
uv run mypy src/

# Run all pre-commit hooks manually
uv run pre-commit run --all-files

# Run tests with coverage
uv run pytest
```
### CI/CD Pipeline

The GitHub Actions CI/CD pipeline runs automatically on:

- Pull requests to the `main` branch
- Pushes to the `main` branch

**Quality Gates** (all must pass):

- ✅ All tests pass (pytest with zero failures)
- ✅ Linting passes (ruff with zero errors)
- ✅ Type checking passes (mypy strict mode with zero errors)
- ✅ Code coverage > 80% (enforced via pytest-cov)

**Pipeline Features**:

- Tests run on Python 3.12 and 3.13
- Dependency caching for faster runs (< 5 minutes typical)
- Parallel job execution (tests, linting, and type checking run concurrently)
- Coverage reports uploaded to Codecov
### OpenAPI Specification

The RabbitMQ Management API OpenAPI specification serves as the single source of truth for all operations, schemas, and documentation in this project.

**Location**: `docs-bmad/rabbitmq-http-api-openapi.yaml`

**Features**:

- 127+ RabbitMQ Management API operations fully documented
- OpenAPI 3.0.3 compliant specification
- Unique operationId for every operation (format: `namespace.action`)
- Complete request/response schemas with validation rules
- All operations include descriptions, parameters, and response definitions

**Validation**:

The OpenAPI specification is automatically validated in the CI/CD pipeline to ensure:

- Structural compliance with the OpenAPI 3.0 schema
- All operations have unique operationId values
- No missing required fields or invalid schema references

Run validation locally:

```bash
# Validate the OpenAPI specification
uv run validate-openapi

# Or with a custom path
uv run python scripts/validate_openapi.py --spec-path docs-bmad/rabbitmq-http-api-openapi.yaml
```
### Schema Generation

Pydantic models are automatically generated from the OpenAPI specification to ensure type-safe validation synchronized with the API specification.

**Generated File**: `src/schemas/generated_schemas.py`

**Features**:

- Automatic generation from OpenAPI component schemas
- Type-safe Pydantic v2 models with complete type annotations
- Field validation with constraints (min/max length, patterns, enums)
- RabbitMQ-specific validators for queue names, vhosts, exchange types
- Change detection to skip regeneration if the OpenAPI spec is unchanged
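The change detection mentioned above amounts to comparing content digests of the spec between runs. A minimal sketch of the idea (the actual script's mechanism may differ):

```python
import hashlib
from pathlib import Path

def spec_digest(spec_path: Path) -> str:
    """SHA-256 of the OpenAPI spec contents."""
    return hashlib.sha256(spec_path.read_bytes()).hexdigest()

def needs_regeneration(spec_path: Path, stamp_path: Path) -> bool:
    """Regenerate only when the recorded digest differs from the current one."""
    digest = spec_digest(spec_path)
    if stamp_path.exists() and stamp_path.read_text().strip() == digest:
        return False                   # spec unchanged: skip regeneration
    stamp_path.write_text(digest)      # record the new digest for next time
    return True
```

A `--force` flag then simply bypasses the digest comparison.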
**Generate Schemas**:

```bash
# Generate with default paths
uv run python scripts/generate_schemas.py

# Generate with custom paths
uv run python scripts/generate_schemas.py \
  --spec-path docs-bmad/rabbitmq-http-api-openapi.yaml \
  --output-path src/schemas/generated_schemas.py

# Force regeneration (skip change detection)
uv run python scripts/generate_schemas.py --force
```

**When to Regenerate**:

- After modifying the OpenAPI specification
- When adding new component schemas
- When changing field types, constraints, or descriptions
- CI/CD automatically validates that generated schemas match the OpenAPI spec

**Notes**:

- The generated file includes a header comment with the timestamp and source path
- The file is committed to version control for type checking and IDE support
- Do not edit the generated file manually; changes will be overwritten
- All validations pass `mypy --strict` type checking

For more details, see:

- Pydantic v2 Documentation
- Architecture Decision Record: ADR-008 (Pydantic for All Validation)
Code Generation:
All Pydantic schemas, operation registries, and semantic embeddings are generated from this OpenAPI specification at build time. This ensures:
- Zero drift between documentation and implementation
- Type-safe operations with compile-time validation
- Consistent API surface across all tools
For more details, see:
- RabbitMQ Management API Documentation
- OpenAPI 3.0 Specification
- Architecture Decision Record: ADR-001 (OpenAPI-Driven Code Generation)
### Operation Registry

The Operation Registry is a JSON file containing metadata for all RabbitMQ Management API operations and AMQP protocol operations. It serves as the foundation for semantic discovery and dynamic operation execution.

**Generated File**: `data/operations.json`
Features:
- 132+ operations extracted from OpenAPI specification
- Complete operation metadata: operation_id, namespace, HTTP method, URL path, parameters, schemas
- AMQP protocol operations (publish, consume, ack, nack, reject) with message properties
- O(1) operation lookups by operation_id (dict structure)
- Metadata enrichment: deprecated flags, safety validation requirements, rate limit exemptions
- File size < 5MB for fast loading and distribution
- Load time < 100ms, operation lookup < 1ms (benchmarked)
**Generate Operation Registry**:

```bash
# Generate with default paths
uv run python scripts/extract_operations.py

# Generate with custom paths
uv run python scripts/extract_operations.py \
  --spec-path docs-bmad/rabbitmq-http-api-openapi.yaml \
  --output-path data/operations.json

# Exclude AMQP operations (HTTP only)
uv run python scripts/extract_operations.py --no-include-amqp
```
**Registry Structure**:

The registry is a JSON object keyed by operation_id for O(1) lookup performance:

```json
{
  "model_version": "1.0.0",
  "generated_at": "2025-12-26T21:48:27.686810+00:00",
  "openapi_source": "docs-bmad/rabbitmq-http-api-openapi.yaml",
  "total_operations": 132,
  "operations": {
    "queues.list": {
      "operation_id": "queues.list",
      "namespace": "queues",
      "http_method": "GET",
      "url_path": "/api/queues",
      "description": "List all queues",
      "parameters": [
        {
          "name": "page",
          "location": "query",
          "type": "integer",
          "required": false,
          "description": "Page number for pagination"
        }
      ],
      "request_schema": null,
      "response_schema": {"$ref": "#/components/schemas/Queue", "name": "Queue"},
      "tags": ["Queues"],
      "requires_auth": true,
      "protocol": "http",
      "deprecated": false,
      "rate_limit_exempt": false,
      "safety_validation_required": false
    },
    "amqp.publish": {
      "operation_id": "amqp.publish",
      "namespace": "amqp",
      "http_method": "",
      "url_path": "",
      "description": "Publish a message to an exchange using AMQP protocol",
      "parameters": [
        {
          "name": "exchange",
          "location": "amqp",
          "type": "string",
          "required": true,
          "description": "Exchange name to publish to"
        }
      ],
      "protocol": "amqp",
      "deprecated": false
    }
  }
}
```
**Using the Operation Registry**:

```python
import json

# Load registry (fast: < 100ms)
with open("data/operations.json") as f:
    registry = json.load(f)

# O(1) lookup by operation_id (fast: < 1ms)
operation = registry["operations"]["queues.list"]
print(operation["http_method"])  # GET
print(operation["url_path"])     # /api/queues

# Filter operations by namespace
queue_ops = [
    op for op in registry["operations"].values()
    if op["namespace"] == "queues"
]

# Filter by protocol (HTTP vs AMQP)
amqp_ops = [
    op for op in registry["operations"].values()
    if op["protocol"] == "amqp"
]
```
When to Regenerate:
- After modifying the OpenAPI specification
- When adding new operations or changing operation metadata
- When updating AMQP operation definitions
- CI/CD automatically validates registry synchronization with OpenAPI
### Semantic Embeddings

The Semantic Embeddings system enables natural language search over RabbitMQ operations using pre-computed vector embeddings. This powers the `search-ids` MCP tool for semantic discovery.

**Generated File**: `data/embeddings.json`

**Features**:

- 132+ pre-computed 384-dimensional vector embeddings
- Uses the sentence-transformers model `all-MiniLM-L6-v2` for an optimal speed/quality balance
- Normalized vectors (unit length) for efficient cosine similarity calculations
- File size < 2MB for fast loading and distribution
- Load time < 500ms, query time < 100ms (benchmarked)
- Supports multi-language queries (matches operation descriptions)
**Generate Embeddings**:

```bash
# Generate with default paths
uv run python scripts/generate_embeddings.py

# Generate with custom paths
uv run python scripts/generate_embeddings.py \
  --registry-path data/operations.json \
  --output-path data/embeddings.json \
  --model-name all-MiniLM-L6-v2
```

**First Run**: The sentence-transformers model (~90MB) downloads automatically to `~/.cache/torch/sentence_transformers/` on first run. Subsequent runs are faster using the cached model.
**Embedding Structure**:

The embeddings file contains metadata and pre-computed vectors for all operations:

```json
{
  "model_name": "sentence-transformers/all-MiniLM-L6-v2",
  "model_version": "2.6.0",
  "embedding_dimension": 384,
  "generation_timestamp": "2025-12-26T20:05:50.746566",
  "embeddings": {
    "queues.list": [0.123, -0.456, 0.789, ...],
    "exchanges.create": [-0.234, 0.567, -0.890, ...],
    ...
  }
}
```
**Testing Embedding Quality**:

```bash
# Run quality tests with semantic queries
uv run python scripts/test_embeddings.py

# Example output:
# Query: 'listar filas'
#   → 1. queues.list              1.0000
#     2. queues_detailed.list     0.7831
#     3. rebalance_queues.list    0.6435
```

**Performance Benchmarks**:

```bash
# Benchmark loading and query performance
uv run python scripts/benchmark_embeddings.py

# Example output:
# Load time: 11.86 ms (target: < 500ms)
# Query time: 9.72 ms (target: < 100ms)
# Embeddings count: 132
# File size: 1.36 MB (target: < 50MB)
```
**Using Embeddings in Code**:

```python
import json
import numpy as np
from sentence_transformers import SentenceTransformer

# Load embeddings (fast: < 500ms)
with open("data/embeddings.json") as f:
    data = json.load(f)

embeddings_dict = data["embeddings"]
op_ids = list(embeddings_dict.keys())
embeddings = np.array([embeddings_dict[op_id] for op_id in op_ids])

# Load model
model = SentenceTransformer("all-MiniLM-L6-v2")

# Semantic search (fast: < 100ms)
query = "list queues"
query_embedding = model.encode(query, normalize_embeddings=True)
similarities = np.dot(embeddings, query_embedding)

# Get top 5 results
top_indices = np.argsort(similarities)[::-1][:5]
results = [(op_ids[i], similarities[i]) for i in top_indices]
```
When to Regenerate:
- After modifying operation descriptions in operations.json
- When adding new operations to the registry
- When changing the embedding model (requires all embeddings to be regenerated)
- CI/CD automatically validates embeddings are synchronized with operations.json
**Model Selection Rationale**:

`all-MiniLM-L6-v2` was chosen for optimal performance:
- Speed: Fast inference (<10ms per query on CPU)
- Quality: High accuracy for short text similarity
- Size: Compact 384-dimensional vectors (vs 768 for larger models)
- Multi-language: Decent performance across languages including Portuguese
- Community: Well-maintained and widely used in production
For more details, see:
- Sentence Transformers Documentation
- Architecture Decision Record: ADR-004 (JSON-based Vector Storage)
- Architecture Decision Record: ADR-007 (Build-time vs Runtime Generation)
**Notes**:

- The registry file is committed to version control for distribution
- URL paths preserve parameter placeholders: `/api/queues/{vhost}/{name}`
- AMQP operations are marked with `protocol: "amqp"` (HTTP operations have `protocol: "http"`)
- Destructive operations (DELETE, purge, reset) are marked with `safety_validation_required: true`
- Operation IDs follow the format `{namespace}.{action}` (e.g., `queues.list`, `amqp.publish`)
For more details, see:
- Architecture Decision Record: ADR-007 (Build-Time vs Runtime Generation)
- OpenAPI 3.0 Paths Object
- OpenAPI 3.0 Parameter Object
## Contributing

We welcome contributions! Please see CONTRIBUTING.md for:

- Setting up the development environment
- Code style guidelines
- Testing requirements
- Pull request process

Quick contribution guide:

```bash
# 1. Fork and clone
git clone https://github.com/your-username/rabbitmq-mcp-server.git

# 2. Create a feature branch
git checkout -b feature/your-feature

# 3. Install dependencies
uv pip install -e ".[dev]"

# 4. Make changes and test
pytest --cov=src

# 5. Commit with conventional commit format
git commit -m "feat: add new feature"

# 6. Push and create a PR
git push origin feature/your-feature
```
## Roadmap
- Queue operations (list, create, delete)
- Exchange operations (list, create, delete)
- Binding operations (list, create, delete)
- Semantic discovery with vector search
- Client-side pagination
- Safety validations
- Message publishing/consuming
- Advanced monitoring (message rates, connection stats)
- Plugin management operations
- Cluster management operations
- User and permission management
## License

This project is licensed under the GNU Lesser General Public License v3.0 or later (LGPL-3.0-or-later).

See the LICENSE file for the full text.

What this means:

- ✅ Use in proprietary software
- ✅ Modify and distribute
- ✅ Commercial use
- ⚠️ Must share modifications to the library itself
- ⚠️ Must include the license notice
## Acknowledgments

- **RabbitMQ**: For the excellent message broker and Management API
- **MCP Protocol**: For the Model Context Protocol specification
- **ChromaDB**: For local vector database capabilities
- **sentence-transformers**: For efficient semantic embeddings
## Support

- **Documentation**: docs/
- **Issues**: GitHub Issues
- **Discussions**: GitHub Discussions
## Links

- **Repository**: https://github.com/guercheLE/rabbitmq-mcp-server
- **PyPI**: https://pypi.org/project/rabbitmq-mcp-server/ (coming soon)
- **Changelog**: CHANGELOG.md
- **MCP Protocol**: https://modelcontextprotocol.io/
- **RabbitMQ Docs**: https://www.rabbitmq.com/documentation.html

Built with ❤️ using Python 3.12+, the MCP Protocol, and OpenAPI 3.0.3