MCP Skill Server

Develop AI skills locally in your editor, deploy to production when ready.

Build agent skills where you work. Write a Python script, add a SKILL.md, and your agent can use it immediately. Iterate in real time as part of your daily workflow. When it's ready, deploy the same skill to production with no rewrite needed.

Most coding assistants now support skills natively, so an MCP server just for skill discovery isn't necessary. Where this package adds value is in making skill execution deterministic and deployable: with a fixed entry point and controlled execution, skills developed in your editor can run in non-sandboxed production environments. It also supports incremental loading, so agents discover skills on demand instead of loading everything upfront.
Why?
Most skill development looks like this: write code → deploy → test in a staging agent → realize it's wrong → redeploy → repeat. It's slow, and you never get to actually use the skill while building it.
MCP Skill Server flips this. It runs on your machine, inside your editor (Claude Code, Cursor, or Claude Desktop), so you develop a skill and use it in your real work at the same time. That tight feedback loop (edit → save → use) means you discover what's missing naturally, not through artificial test scenarios. The premise: if a skill doesn't work well with Claude Code, it's unlikely to work with a less sophisticated agent.
How skills mature to survive in the outside world
Claude skills can already have companion scripts, but there's no formalized entry point — the agent decides how to invoke them. That works for local use, but it's not deployable: a production MCP server can't reliably call a skill if the execution path isn't fixed.
MCP Skill Server enforces a declared `entry` field in your SKILL.md frontmatter (e.g. `entry: uv run python hello.py`). This gives you a single, fixed entry point that the server controls. Commands and parameters are discovered from the script's `--help` output; that is the source of truth, not the LLM's interpretation of your code.
1. Claude/coding agent skill → SKILL.md + scripts, but no fixed entry — agent decides how to run them
2. Local MCP skill (+ entry) → Fixed entry point, schema from --help, usable daily via this server
3. Production → Same skill, same entry — deployed to your enterprise MCP server
Sharpen locally, then harden for production
Every agent that connects to the MCP server gets the same interface — list_skills, get_skill, run_skill — so the skill's description, parameter names, and help text are identical regardless of which agent calls them. That said, different agents have different strengths — a skill that works locally still needs testing with your production agent.
- Use it yourself: build the skill, use it daily via Claude Code or Cursor. Fix descriptions and parameter names whenever the agent misuses the skill.
- Test with a weaker model: try a smaller model to surface interface ambiguity.
- Add a deterministic entry point: declare `entry` in SKILL.md for reliable, secure execution. Use `skill init` to scaffold it and `skill validate` to check readiness.
- Test with your production agent: verify end-to-end in your target environment, then deploy.
Install
Claude Desktop (one-click)
After installing, edit the skills path in your Claude Desktop config to point to your skills directory.
Claude Code
```shell
claude mcp add skills -- uvx mcp-skill-server serve /path/to/my/skills
```
Cursor
Add to .cursor/mcp.json in your project (or Settings → MCP → Add Server):
```json
{
  "mcpServers": {
    "skills": {
      "command": "uvx",
      "args": ["mcp-skill-server", "serve", "/path/to/my/skills"]
    }
  }
}
```
Manual install
```shell
# From PyPI (recommended)
uv pip install mcp-skill-server

# Or from source
git clone https://github.com/jcc-ne/mcp-skill-server
cd mcp-skill-server && uv sync

# Run the server
uvx mcp-skill-server serve /path/to/my/skills
```
Then add to your editor's MCP config:
```json
{
  "mcpServers": {
    "skills": {
      "command": "uvx",
      "args": ["mcp-skill-server", "serve", "/path/to/my/skills"]
    }
  }
}
```
Creating a Skill
Option A: Use skill init (recommended)
```shell
# Create a new skill
uv run mcp-skill-server init ./my_skills/hello -n "hello" -d "A friendly greeting"

# Or use the standalone command
uv run mcp-skill-init ./my_skills/hello -n "hello" -d "A friendly greeting"

# Promote an existing prompt-only Claude skill to a runnable MCP skill
uv run mcp-skill-init ./existing_claude_skill
```
Option B: Manual setup
1. Create a folder with your script
```text
my_skills/
└── hello/
    ├── SKILL.md
    └── hello.py
```
2. Add SKILL.md with frontmatter
```markdown
---
name: hello
description: A friendly greeting skill
entry: uv run python hello.py
---

# Hello Skill

Greets the user by name.
```
3. Write your script with argparse
```python
# hello.py
import argparse

parser = argparse.ArgumentParser(description="Greeting skill")
parser.add_argument("--name", default="World", help="Name to greet")
args = parser.parse_args()

print(f"Hello, {args.name}!")
```
That's it. The server auto-discovers commands and parameters from your `--help` output; no extra config is needed.
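Because `--help` is the contract, you can preview exactly what the server will see by rendering the parser's help text yourself. A minimal sketch mirroring the `hello.py` above (the precise help wording varies slightly across Python versions):

```python
import argparse

# Rebuild the same parser hello.py defines, then render its help text.
# This text is what the server parses to discover parameters.
parser = argparse.ArgumentParser(prog="hello.py", description="Greeting skill")
parser.add_argument("--name", default="World", help="Name to greet")

print(parser.format_help())
```

If the help text reads ambiguously to you, it will read ambiguously to the agent, so this is a cheap way to audit your interface before deploying.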
Validating for Deployment
When a skill is ready to graduate to production:
```shell
uv run mcp-skill-server validate ./my_skills/hello
# or
uv run mcp-skill-validate ./my_skills/hello
```
Checks:
- Required frontmatter fields (`name`, `description`, `entry`)
- Entry command uses an allowed runtime
- Script file exists
- Commands discoverable via `--help`
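Conceptually, the frontmatter check boils down to parsing the YAML block at the top of SKILL.md and confirming the required keys are present. A rough sketch of that idea (this is not the package's actual implementation):

```python
# Conceptual sketch: find missing required frontmatter fields in a SKILL.md.
REQUIRED = {"name", "description", "entry"}

def missing_frontmatter_fields(skill_md: str) -> set:
    """Return required fields absent from the frontmatter block."""
    fields = set()
    in_frontmatter = False
    for line in skill_md.splitlines():
        if line.strip() == "---":
            if in_frontmatter:
                break  # end of frontmatter block
            in_frontmatter = True
            continue
        if in_frontmatter and ":" in line:
            fields.add(line.split(":", 1)[0].strip())
    return REQUIRED - fields

sample = """---
name: hello
description: A friendly greeting skill
entry: uv run python hello.py
---
# Hello Skill
"""
print(missing_frontmatter_fields(sample))  # empty set: all required fields present
```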
How It Works
MCP Tools
The server exposes four tools to your agent:
| Tool | Description |
|---|---|
| `list_skills` | List all available skills |
| `get_skill` | Get details about a skill (commands, parameters) |
| `run_skill` | Execute a skill with parameters |
| `refresh_skills` | Reload skills after you make changes |
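For illustration, an agent invoking `run_skill` sends an MCP `tools/call` request roughly like the one below. The argument names inside `arguments` (`skill`, `params`) are assumptions for this sketch, not the package's documented schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "run_skill",
    "arguments": {
      "skill": "hello",
      "params": { "name": "Alice" }
    }
  }
}
```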
Schema Discovery
The server automatically discovers your skill's interface by parsing --help output:
```python
import argparse

parser = argparse.ArgumentParser(description="Analysis skill")

# Subcommands become separate commands
subparsers = parser.add_subparsers(dest='command')
analyze = subparsers.add_parser('analyze', help='Run analysis')

# Arguments become parameters with inferred types
analyze.add_argument('--year', type=int, required=True)  # int, required
analyze.add_argument('--file', type=str)  # string, optional
```
Output Files
Files saved to `output/` are automatically detected. Alternatively, print `OUTPUT_FILE:/path/to/file` to stdout.
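A minimal sketch of both conventions side by side (the file name `report.txt` is illustrative):

```python
from pathlib import Path

# Option 1: write into output/ -- files that appear there are detected.
out_dir = Path("output")
out_dir.mkdir(exist_ok=True)
report = out_dir / "report.txt"
report.write_text("analysis complete\n")

# Option 2: print an explicit OUTPUT_FILE: marker to stdout.
print(f"OUTPUT_FILE:{report.resolve()}")
```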
Plugins
Output Handlers
Process files generated by skills (upload, copy, transform, etc.):
```python
from mcp_skill_server.plugins import OutputHandler, LocalOutputHandler

# Default: tracks local file paths
handler = LocalOutputHandler()

# Optional GCS handler (requires `uv sync --extra gcs`)
from mcp_skill_server.plugins import GCSOutputHandler

handler = GCSOutputHandler(
    bucket_name="my-bucket",
    folder_prefix="skills/outputs/",
)
```
Response Formatters
Customize how execution results are formatted in MCP tool responses:
```python
from mcp_skill_server.plugins import ResponseFormatter

class CustomFormatter(ResponseFormatter):
    def format_execution_result(self, result, skill, command):
        return f"Result: {result.stdout}"

# Use with create_server()
from mcp_skill_server import create_server

server = create_server(
    "/path/to/skills",
    response_formatter=CustomFormatter(),
)
```
Development
```shell
git clone https://github.com/jcc-ne/mcp-skill-server
cd mcp-skill-server
uv sync --dev

uv run pytest
uv run mcp-skill-server serve examples/
```
License
MIT