# Spindl
A Python framework for building Model Context Protocol (MCP) servers with hierarchical tool name prefixing, on-demand skills guides, and built-in response spooling for large data sets.
## The Problem

When an MCP client connects to multiple servers, tool name collisions cause confusion. If two servers both expose `list_devices`, the LLM doesn't know which one to call. Worse, tool descriptions and guides that reference other tools by name become ambiguous.
## The Solution

Spindl solves this with three capabilities:

- **Hierarchical prefix namespacing** -- Every tool name is automatically prefixed with a server identity (e.g. `secops_list_devices`), with an optional runtime instance prefix for multi-deployment scenarios (e.g. `prod_secops_list_devices`).
- **Skills guide tools** -- Instead of bloating tool descriptions with lengthy usage guides, spindl auto-registers `list_tools` and `describe_tool` tools that the LLM can call on demand.
- **Response spooling** -- Large array responses are automatically stored in an ephemeral SQLite database, with query, aggregate, and distinct tools for efficient data exploration.
## Installation

```bash
pip install cognisn-spindl
```

For HTTP/SSE transport support:

```bash
pip install cognisn-spindl[http]
```

For development:

```bash
pip install cognisn-spindl[dev]
```
## Quick Start

```python
import asyncio

from pydantic import BaseModel, Field

from spindl import BaseTool, MCPServer, SpoolerConfig


class ListDevices(BaseTool):
    name = "list_devices"
    description = "List all monitored devices"
    category = "inventory"
    spooler_auto_detect = True  # Auto-spool large responses

    class InputModel(BaseModel):
        limit: int = Field(default=50, ge=1, le=500, description="Max devices to return")

    def guide(self) -> str:
        return (
            "# @list_devices\n\n"
            "Returns all monitored devices. Large result sets are "
            "automatically spooled -- use @spooler_query to filter "
            "and paginate through the data.\n\n"
            "## Examples\n\n"
            '```json\n{"limit": 100}\n```\n'
        )

    async def execute(self, **params) -> dict:
        validated = self.InputModel(**params)
        # Your API logic here
        devices = [{"id": i, "name": f"device-{i}"} for i in range(validated.limit)]
        return {"success": True, "data": devices}


# Create server with prefix and optional spooler
server = MCPServer(
    prefix="secops",
    spooler=SpoolerConfig(),  # Enables response spooling
)
server.register(ListDevices())

# Run on stdio (for Claude Desktop, Cursor, etc.)
asyncio.run(server.run_stdio())
```
This server exposes the following tools to the MCP client:

| Wire Name | Source |
|---|---|
| `secops_list_devices` | Your custom tool |
| `secops_list_tools` | Auto-registered skills guide |
| `secops_describe_tool` | Auto-registered skills guide |
| `secops_spooler_list` | Auto-registered (spooler enabled) |
| `secops_spooler_query` | Auto-registered (spooler enabled) |
| `secops_spooler_aggregate` | Auto-registered (spooler enabled) |
| `secops_spooler_distinct` | Auto-registered (spooler enabled) |
## Architecture

```
               MCPServer
              /    |    \
 PrefixResolver    |     ToolRegistry
       |           |          |
       |    MCP SDK Server    Tools (bare names)
       |     /    |    \      |
       |  stdio  http  sse    Prefixed at boundary
```
### Core Components

| Component | Module | Role |
|---|---|---|
| `MCPServer` | `spindl.server` | Top-level orchestrator; owns everything |
| `PrefixResolver` | `spindl.prefix` | Hierarchical prefix engine with `@placeholder` resolution |
| `ToolRegistry` | `spindl.registry` | Stores tools by bare name, prefixes at the MCP boundary |
| `BaseTool` | `spindl.tool` | Clean base class for tool authors |
| `ResponseSpooler` | `spindl.spooler` | SQLite-backed large-response handler |
## Design Principles

- **Tools never know their prefix.** They are stored by bare name internally; prefixing happens at the MCP protocol boundary.
- **Guides use `@placeholder` syntax.** Write `@spooler_query` in your guide text; it resolves to `secops_spooler_query` (or `prod_secops_spooler_query`) at runtime.
- **The spooler core is stdlib-only.** No external dependencies beyond Python's built-in `sqlite3`, `json`, and `hashlib`.
- **Transports are swappable.** Call `run_stdio()`, `run_http()`, or `run_sse()` -- your tools don't change.
## Prefix System

### Level 1: Server Prefix (Code)

Set by the developer. Mandatory. Identifies the server.

```python
server = MCPServer(prefix="secops")
# All tools: secops_list_devices, secops_list_tools, ...
```
### Level 2: Instance Prefix (Runtime)

Optional. Identifies a deployment instance. Useful when the same server is deployed multiple times for different purposes.

Via environment variable:

```bash
export SPINDL_INSTANCE_PREFIX=prod
# All tools: prod_secops_list_devices, prod_secops_list_tools, ...
```

Via HTTP header (for HTTP/SSE transports):

```
X-Spindl-Prefix: prod
```

The header takes precedence over the environment variable. Each HTTP request can carry a different prefix (isolated via contextvars).
### Placeholder Resolution

Tool guides reference other tools using `@bare_name` syntax:

```python
def guide(self) -> str:
    return "Use @list_devices to get devices. Query results with @spooler_query."
```

At runtime, these resolve to fully prefixed wire names:

```
Use secops_list_devices to get devices. Query results with secops_spooler_query.
```

Only registered tool names are replaced; unknown `@references` pass through untouched.
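The resolution rule above (replace only registered names, leave everything else alone) can be sketched with a single regex substitution. This is an illustration of the behavior, not spindl's actual implementation.

```python
# Sketch of @placeholder resolution -- illustrative, not spindl's actual code.
import re


def resolve_placeholders(text: str, registered: set[str], prefix: str) -> str:
    """Replace @name with the prefixed wire name, but only for registered tools."""
    def repl(match: re.Match) -> str:
        bare = match.group(1)
        if bare in registered:
            return f"{prefix}_{bare}"
        return match.group(0)  # unknown references pass through untouched

    return re.sub(r"@([A-Za-z_][A-Za-z0-9_]*)", repl, text)


guide = "Use @list_devices, then @spooler_query. Email @alice for access."
resolved = resolve_placeholders(guide, {"list_devices", "spooler_query"}, "secops")
# "@alice" is not a registered tool, so it survives unchanged.
```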
## Building Tools

### Minimal Tool

```python
from spindl import BaseTool


class Ping(BaseTool):
    name = "ping"
    description = "Health check"
    category = "meta"

    async def execute(self, **params) -> dict:
        return {"success": True, "data": "pong"}
```
### Tool with Parameters

```python
from pydantic import BaseModel, Field

from spindl import BaseTool


class SearchVulns(BaseTool):
    name = "search_vulns"
    description = "Search vulnerability database"
    category = "security"
    spooler_array_paths = ["results"]  # Spool the 'results' array

    class InputModel(BaseModel):
        query: str = Field(description="Search query")
        severity: str | None = Field(default=None, description="Filter by severity")
        limit: int = Field(default=50, ge=1, le=500, description="Max results")

    def guide(self) -> str:
        return (
            "# @search_vulns\n\n"
            "Search for vulnerabilities by keyword. Results are "
            "automatically spooled when large.\n\n"
            "## Parameters\n\n"
            "- **query** (required): Search keywords\n"
            "- **severity** (optional): Filter by critical/high/medium/low\n"
            "- **limit** (optional): Max results (default 50)\n\n"
            "## Examples\n\n"
            '```json\n{"query": "CVE-2024", "severity": "critical"}\n```\n\n'
            "## Follow-up\n\n"
            "Use @spooler_query to filter results, @spooler_aggregate "
            "for severity breakdowns, or @spooler_distinct to see "
            "affected vendors.\n"
        )

    async def execute(self, **params) -> dict:
        validated = self.InputModel(**params)
        # Your search logic here
        results = [...]
        return {"success": True, "data": {"results": results}}
```
### Tool Attributes Reference

| Attribute | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | `""` | Bare tool name (required) |
| `description` | `str` | `""` | Short one-line description (required) |
| `category` | `str` | `""` | Grouping key for the skills guide (required) |
| `spooler_array_paths` | `list[str] \| None` | `None` | Dot-notation paths to arrays to spool |
| `spooler_auto_detect` | `bool` | `False` | Auto-detect large arrays in the response |
| `InputModel` | `type[BaseModel] \| None` | `None` | Pydantic model for input validation |
### Spooler Opt-In

Tools opt into response spooling via two attributes:

- `spooler_array_paths` -- explicit dot-notation paths: `["results", "data.items"]`
- `spooler_auto_detect` -- let the spooler find large arrays automatically

When a response array exceeds the configured thresholds (`max_inline_items` or `max_inline_tokens`), it is stored in SQLite and replaced with a summary containing the `spool_id`.
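A minimal sketch of how dot-notation path lookup and the item-count threshold could work, for intuition only -- the helper names and exact detection logic here are assumptions, not spindl's API.

```python
# Sketch of dot-path array extraction and the spooling decision -- illustrative only.

def get_by_path(response: dict, path: str):
    """Walk a dot-notation path like 'data.items' into a nested dict."""
    node = response
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node


def should_spool(array, max_inline_items: int = 10) -> bool:
    """Spool when the array exceeds the inline item threshold."""
    return isinstance(array, list) and len(array) > max_inline_items


response = {"success": True, "data": {"items": [{"id": i} for i in range(25)]}}
items = get_by_path(response, "data.items")  # the array named by spooler_array_paths
```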
## Response Spooler

### Configuration

```python
from spindl import MCPServer, SpoolerConfig

server = MCPServer(
    prefix="secops",
    spooler=SpoolerConfig(
        db_path="/tmp/spooler.db",  # SQLite file location
        max_inline_tokens=2000,     # Token threshold for spooling
        max_inline_items=10,        # Item count threshold
        default_page_size=20,       # Default records per query page
        max_page_size=50,           # Hard ceiling on page size
        summary_sample_size=3,      # Sample records in summary
        db_cleanup_on_exit=True,    # Delete DB on shutdown
    ),
)
```
All settings can also be configured via environment variables:

| Env Var | Default |
|---|---|
| `SPOOLER_DB_PATH` | `/tmp/mcp_spooler.db` |
| `SPOOLER_MAX_INLINE_TOKENS` | `2000` |
| `SPOOLER_MAX_INLINE_ITEMS` | `10` |
| `SPOOLER_DEFAULT_PAGE_SIZE` | `20` |
| `SPOOLER_MAX_PAGE_SIZE` | `50` |
| `SPOOLER_SUMMARY_SAMPLE_SIZE` | `3` |
| `SPOOLER_CLEANUP_ON_EXIT` | `true` |
### How It Works

1. A tool returns a response with a large array.
2. The spooler detects that the array exceeds the thresholds.
3. The array is stored in SQLite with a unique `spool_id`.
4. The LLM receives a summary with the record count, column names, statistics, sample records, and the `spool_id`.
5. The LLM uses the spooler query tools to explore the data.
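The store-and-summarize steps of this flow can be sketched with the stdlib modules the README says the spooler core uses (`sqlite3`, `json`, `hashlib`). The schema and summary shape below are assumptions for illustration, not spindl's actual format.

```python
# Sketch of spooling a large array and returning a summary -- illustrative only.
import hashlib
import json
import sqlite3


def spool(records: list[dict], db: sqlite3.Connection, sample_size: int = 3) -> dict:
    """Store records in SQLite and return a compact summary with a spool_id."""
    payload = json.dumps(records, sort_keys=True)
    spool_id = hashlib.sha256(payload.encode()).hexdigest()[:12]
    db.execute("CREATE TABLE IF NOT EXISTS spools (spool_id TEXT PRIMARY KEY, data TEXT)")
    db.execute("INSERT OR REPLACE INTO spools VALUES (?, ?)", (spool_id, payload))
    return {
        "spooled": True,
        "spool_id": spool_id,
        "record_count": len(records),
        "columns": sorted(records[0].keys()) if records else [],
        "sample": records[:sample_size],
    }


db = sqlite3.connect(":memory:")
summary = spool([{"id": i, "name": f"device-{i}"} for i in range(25)], db)
# The LLM sees only the summary; the full 25 records stay in SQLite.
```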
### Spooler Query Tools

| Tool | Purpose |
|---|---|
| `{prefix}_spooler_list` | List all available spooled data sets |
| `{prefix}_spooler_query` | Filter, sort, paginate, and search records |
| `{prefix}_spooler_aggregate` | Group-by with count/sum/avg/min/max |
| `{prefix}_spooler_distinct` | Unique values and frequency counts |
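To make the aggregate tool concrete, here is a generic group-by-and-count over spooled records. This illustrates the kind of result `spooler_aggregate` produces; the function signature and parameter names are assumptions, not spindl's tool schema.

```python
# Generic group-by aggregation sketch -- illustrates spooler_aggregate's behavior.
from collections import defaultdict


def aggregate(records: list[dict], group_by: str, op: str = "count") -> dict:
    """Group records by a field and apply a simple aggregation."""
    groups: dict = defaultdict(list)
    for record in records:
        groups[record.get(group_by)].append(record)
    if op == "count":
        return {key: len(rows) for key, rows in groups.items()}
    raise ValueError(f"unsupported op: {op}")


vulns = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "high"},
    {"id": 3, "severity": "critical"},
]
breakdown = aggregate(vulns, group_by="severity")  # severity -> record count
```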
## Skills Guide

Every spindl server auto-registers two tools:

### `{prefix}_list_tools`

Returns all registered tools grouped by category. No parameters needed.

```json
{
  "success": true,
  "data": {
    "total_tools": 7,
    "categories": {
      "inventory": [
        {"name": "secops_list_devices", "description": "List all monitored devices"}
      ],
      "skills": [
        {"name": "secops_list_tools", "description": "List all available tools..."},
        {"name": "secops_describe_tool", "description": "Get detailed usage guide..."}
      ],
      "spooler": [
        {"name": "secops_spooler_list", "description": "List all spooled data sets..."},
        ...
      ]
    }
  }
}
```
### `{prefix}_describe_tool`

Returns the detailed guide for a specific tool, with all `@placeholders` resolved to prefixed wire names.

```json
{"tool_name": "secops_list_devices"}
```
## Transports

### stdio

For local MCP clients (Claude Desktop, Cursor, VS Code, etc.):

```python
asyncio.run(server.run_stdio())
```

### HTTP Streamable

For networked deployments. Requires `pip install cognisn-spindl[http]`.

```python
asyncio.run(server.run_http(host="0.0.0.0", port=8000))
```

The server reads the `X-Spindl-Prefix` header from each request for per-request instance prefixing.

### SSE (Server-Sent Events)

For streaming connections. Requires `pip install cognisn-spindl[http]`.

```python
asyncio.run(server.run_sse(host="0.0.0.0", port=8000))
```
## Response Types

Spindl includes self-contained response types for consistent tool output:

```python
from spindl import ResponseEnvelope, ResponseMetadata, StructuredError, ErrorDetail

# Success response
return ResponseEnvelope(
    success=True,
    data={"items": results},
    metadata=ResponseMetadata(
        total_results=len(results),
        returned_results=len(results),
    ),
).to_dict()

# Error response
return StructuredError(
    error=ErrorDetail(
        error_code="AUTH_ERROR",
        error_message="Invalid API key",
        retry_eligible=False,
        suggestion="Check your API key configuration.",
    ),
).to_dict()
```
## API Reference

### MCPServer

```python
MCPServer(
    prefix: str,                     # Mandatory server prefix
    spooler: SpoolerConfig | None,   # Enable response spooling
    server_name: str | None = None,  # MCP server name (defaults to prefix)
)
```

Methods:

| Method | Description |
|---|---|
| `register(tool)` | Register a single tool |
| `register_all(tools)` | Register a list of tools |
| `run_stdio()` | Run on the stdio transport |
| `run_http(host, port)` | Run on the HTTP streamable transport |
| `run_sse(host, port)` | Run on the SSE transport |
### BaseTool

```python
class MyTool(BaseTool):
    name: str                                   # Bare name (e.g. "get_devices")
    description: str                            # Short description
    category: str                               # Grouping key
    spooler_array_paths: list[str] | None = None
    spooler_auto_detect: bool = False
    InputModel: type[BaseModel] | None = None

    def guide(self) -> str: ...                 # Usage guide with @placeholders

    async def execute(self, **params) -> dict: ...  # Tool logic
```
### PrefixResolver

```python
resolver = PrefixResolver("secops")
resolver.prefixed_name("get_devices")          # "secops_get_devices"
resolver.strip_prefix("secops_get_devices")    # "get_devices"
resolver.resolve_placeholders("@get_devices")  # "secops_get_devices"
resolver.set_instance_prefix("prod")           # Per-request override
```
## Requirements

- Python >= 3.12
- `mcp >= 1.25.0`
- `pydantic >= 2.0.0`
- `uvicorn >= 0.30.0` (optional, for HTTP/SSE transports)
## License
MIT