mcp-skillset
Dynamic RAG-powered skills for code assistants via the Model Context Protocol (MCP)
mcp-skillset is a standalone Python application that provides intelligent, context-aware skills to code assistants through hybrid RAG (vector + knowledge graph). Unlike static skills that load at startup, mcp-skillset enables runtime skill discovery, automatic recommendations based on your project's toolchain, and dynamic loading optimized for your workflow.
Key Features
- 🚀 Zero Config: mcp-skillset setup handles everything automatically
- 🧠 Intelligent: Auto-detects your project's toolchain (Python, TypeScript, Rust, Go, etc.)
- 🔍 Dynamic Discovery: Vector similarity + knowledge graph for better skill finding
- 📦 Multi-Source: Pulls skills from multiple git repositories
- ⚡ On-Demand Loading: Skills loaded when needed, not all at startup
- 🔌 MCP Native: First-class Model Context Protocol integration
Installation
With pipx (Recommended)
pipx is the recommended way to install Python CLI applications:
pipx install mcp-skillset
With pip
If you prefer pip (not recommended for CLI tools):
pip install mcp-skillset
From Source
git clone https://github.com/bobmatnyc/mcp-skillset.git
cd mcp-skillset
pip install -e .
Local Development (Without Installation)
For development, you can run mcp-skillset directly from source without installing:
# Use the development script
./mcp-skillset-dev --help
./mcp-skillset-dev search "python testing"
./mcp-skillset-dev setup --auto
The mcp-skillset-dev script:
- Runs the package from source code (not installed version)
- Uses local virtual environment if available
- Sets up PYTHONPATH automatically
- Passes all arguments through to the CLI
This is useful for:
- Testing changes without reinstalling
- Developing new features
- Debugging with source code
- Contributing to the project
Note: For production use, install the package normally with pip install -e . or pip install mcp-skillset.
First-Run Requirements
Important: On first run, mcp-skillset will automatically download a ~90MB sentence-transformer model (all-MiniLM-L6-v2) for semantic search. This happens during the initial mcp-skillset setup or when you first run any command that requires indexing.
Requirements:
- ✅ Active internet connection
- ✅ ~100MB free disk space
- ✅ 2-5 minutes for initial download (depending on connection speed)
Model Caching:
- Models are cached in ~/.cache/huggingface/ for future use
- Subsequent runs use the cached model (no download required)
- The cache persists across mcp-skillset updates
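To check whether the model is already cached, you can look for the repo directory that huggingface_hub creates under the default cache root. This is a small illustrative sketch, not part of the mcp-skillset CLI; it assumes the standard `models--<org>--<name>` cache layout:

```python
# Sketch: locate the sentence-transformers model in the HuggingFace cache.
# Assumes the default cache location and standard huggingface_hub layout.
from pathlib import Path

def cached_model_dir(model_id: str = "sentence-transformers/all-MiniLM-L6-v2") -> Path:
    """Return the expected cache directory for a Hub model."""
    cache_root = Path.home() / ".cache" / "huggingface" / "hub"
    # huggingface_hub stores repos as models--<org>--<name>
    return cache_root / ("models--" + model_id.replace("/", "--"))

def model_is_cached(model_id: str = "sentence-transformers/all-MiniLM-L6-v2") -> bool:
    return cached_model_dir(model_id).is_dir()

print(cached_model_dir())
```

If `model_is_cached()` returns True, setup will skip the download entirely.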
Quick Start
1. Setup
Run the interactive setup wizard to configure mcp-skillset for your project:
mcp-skillset setup
Note: The first run will download the embedding model (~90MB) before proceeding with setup. Allow 2-5 minutes for this initial download. Subsequent runs will be much faster.
This will:
- Download embedding model (first run only)
- Detect your project's toolchain
- Clone relevant skill repositories
- Build vector + knowledge graph indices
- Configure MCP server integration
- Validate the setup
2. Start the MCP Server
mcp-skillset serve
The server will start and expose skills to your code assistant via MCP protocol.
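Most MCP clients register stdio servers through a JSON configuration entry. As an illustration only (the exact file name and location depend on your client; this follows the common `mcpServers` shape used by Claude Desktop-style configs):

```json
{
  "mcpServers": {
    "skillset": {
      "command": "mcp-skillset",
      "args": ["serve"]
    }
  }
}
```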
3. Use with Claude Code
Skills are automatically available in Claude Code. Try:
- "What testing skills are available for Python?"
- "Show me debugging skills"
- "Recommend skills for my project"
Project Structure
~/.mcp-skillset/
├── config.yaml # User configuration
├── repos/ # Cloned skill repositories
│ ├── anthropics/skills/
│ ├── obra/superpowers/
│ └── custom-repo/
├── indices/ # Vector + KG indices
│ ├── vector_store/
│ └── knowledge_graph/
└── metadata.db # SQLite metadata
Architecture
mcp-skillset uses a hybrid RAG approach combining:
Vector Store (ChromaDB):
- Fast semantic search over skill descriptions
- Embeddings generated with sentence-transformers
- Persistent local storage with minimal configuration
Knowledge Graph (NetworkX):
- Skill relationships and dependencies
- Category and toolchain associations
- Related skill discovery
Toolchain Detection:
- Automatic detection of programming languages
- Framework and build tool identification
- Intelligent skill recommendations
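Marker-file toolchain detection can be sketched in a few lines. The mapping below is illustrative and is not the project's actual detector:

```python
# Illustrative sketch: map well-known manifest files to languages.
from pathlib import Path

MARKERS = {
    "pyproject.toml": "Python",
    "setup.py": "Python",
    "package.json": "TypeScript/JavaScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
}

def detect_toolchains(project_dir: str) -> list[str]:
    """Return the languages whose marker files exist in project_dir."""
    root = Path(project_dir)
    found = [lang for marker, lang in MARKERS.items() if (root / marker).exists()]
    # De-duplicate while preserving order (e.g. pyproject.toml + setup.py)
    return list(dict.fromkeys(found))
```

A real detector would also inspect file contents (e.g. `package.json` dependencies) to distinguish frameworks, which is how framework recommendations become possible.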
Configuration
Global Configuration (~/.mcp-skillset/config.yaml)
# Hybrid Search Configuration
# Controls weighting between vector similarity and knowledge graph relationships
hybrid_search:
  # Option 1: Use a preset (recommended)
  preset: current  # current, semantic_focused, graph_focused, or balanced

  # Option 2: Specify custom weights (must sum to 1.0)
  # vector_weight: 0.7  # Weight for vector similarity (0.0-1.0)
  # graph_weight: 0.3   # Weight for knowledge graph (0.0-1.0)

repositories:
  - url: https://github.com/anthropics/skills.git
    priority: 100
    auto_update: true

vector_store:
  backend: chromadb
  embedding_model: all-MiniLM-L6-v2

server:
  transport: stdio
  log_level: info
Hybrid Search Modes
The hybrid search system combines vector similarity (semantic search) with knowledge graph relationships (dependency traversal) to find relevant skills. You can tune the weighting to optimize for different use cases:
Available Presets:
| Preset | Vector | Graph | Best For | Use Case |
|---|---|---|---|---|
| current | 70% | 30% | General purpose (default) | Balanced skill discovery with slight semantic emphasis |
| semantic_focused | 90% | 10% | Natural language queries | "help me debug async code" → emphasizes semantic understanding |
| graph_focused | 30% | 70% | Related skill discovery | Starting from "pytest" → discovers pytest-fixtures, pytest-mock |
| balanced | 50% | 50% | Equal weighting | General purpose when unsure which approach is better |
When to use each mode:
- current (default): Best for most users. Proven through testing to work well for typical skill discovery patterns.
- semantic_focused: Use when you have vague requirements or want fuzzy semantic matching. Good for concept-based searches like "help me with error handling" or "testing strategies".
- graph_focused: Use when you want to explore skill ecosystems and dependencies. Perfect for "what else works with X?" queries.
- balanced: Use when you want equal emphasis on both approaches, or as a starting point for experimentation.
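The weighting reduces to a convex combination of the two scores. This sketch uses the preset values from the table above; the scoring function itself is illustrative, not the project's implementation:

```python
# Preset weights mirror the table above: (vector_weight, graph_weight).
PRESETS = {
    "current": (0.7, 0.3),
    "semantic_focused": (0.9, 0.1),
    "graph_focused": (0.3, 0.7),
    "balanced": (0.5, 0.5),
}

def hybrid_score(vector_score: float, graph_score: float, preset: str = "current") -> float:
    """Combine vector-similarity and graph-relationship scores per preset."""
    vw, gw = PRESETS[preset]
    assert abs(vw + gw - 1.0) < 1e-9, "weights must sum to 1.0"
    return vw * vector_score + gw * graph_score

print(hybrid_score(0.8, 0.2))                   # 0.7*0.8 + 0.3*0.2 ≈ 0.62
print(hybrid_score(0.8, 0.2, "graph_focused"))  # 0.3*0.8 + 0.7*0.2 ≈ 0.38
```

Note how the same skill (strong semantic match, weak graph connectivity) ranks very differently under graph_focused, which is why preset choice matters for "what else works with X?" queries.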
Configuration Examples:
# Use preset (recommended)
hybrid_search:
  preset: current

# OR specify custom weights
hybrid_search:
  vector_weight: 0.8
  graph_weight: 0.2
CLI Override:
You can override the config file setting using the --search-mode flag:
# Use semantic-focused mode for this search
mcp-skillset search "python testing" --search-mode semantic_focused
# Use graph-focused mode for recommendations
mcp-skillset recommend --search-mode graph_focused
# Available modes: semantic_focused, graph_focused, balanced, current
Project Configuration (.mcp-skillset.yaml)
project:
  name: my-project
  toolchain:
    primary: Python
    frameworks: [Flask, SQLAlchemy]
  auto_load:
    - systematic-debugging
    - test-driven-development
CLI Commands
# Setup and Configuration
mcp-skillset setup # Interactive setup wizard
mcp-skillset config # Show configuration
# Server
mcp-skillset serve # Start MCP server (stdio)
mcp-skillset serve --http # Start HTTP server
mcp-skillset serve --dev # Development mode (auto-reload)
# Skills Management
mcp-skillset search "testing" # Search skills
mcp-skillset list # List all skills
mcp-skillset info pytest-skill # Show skill details
mcp-skillset recommend # Get recommendations
# Repositories
mcp-skillset repo add <url> # Add repository
mcp-skillset repo list # List repositories
mcp-skillset repo update # Update all repositories
# Indexing
mcp-skillset index # Rebuild indices
mcp-skillset index --incremental # Index only new skills
# Utilities
mcp-skillset doctor # System health check
mcp-skillset stats # Usage statistics
Shell Completions
Enable tab completion for the mcp-skillset command to speed up your workflow:
Quick Install
Bash (requires Bash 4.4+):
echo 'eval "$(_MCP_SKILLS_COMPLETE=bash_source mcp-skillset)"' >> ~/.bashrc
source ~/.bashrc
Zsh (macOS default):
echo 'eval "$(_MCP_SKILLS_COMPLETE=zsh_source mcp-skillset)"' >> ~/.zshrc
source ~/.zshrc
Fish:
echo 'eval (env _MCP_SKILLS_COMPLETE=fish_source mcp-skillset)' >> ~/.config/fish/config.fish
source ~/.config/fish/config.fish
Features
- ✅ Complete all commands and subcommands
- ✅ Complete option flags (--help, --limit, etc.)
- ✅ Works with mcp-skillset, mcp-skillset repo, and all other commands
Verification
Test completions are working:
mcp-skillset <TAB> # Shows: config health index info list mcp recommend repo search setup stats
mcp-skillset repo <TAB> # Shows: add list update
mcp-skillset search --<TAB> # Shows: --category --help --limit
Documentation
For detailed installation instructions, troubleshooting, and advanced usage, see docs/SHELL_COMPLETIONS.md.
MCP Tools
mcp-skillset exposes these tools to code assistants:
- search_skills: Natural language skill search
- get_skill: Load full skill instructions by ID
- recommend_skills: Get recommendations for current project
- list_categories: List all skill categories
- update_repositories: Pull latest skills from git
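Under the hood, an MCP client invokes these tools with JSON-RPC `tools/call` requests. A hypothetical call to search_skills might look like the following (the argument names are illustrative assumptions, not documented parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_skills",
    "arguments": { "query": "python testing", "limit": 5 }
  }
}
```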
Development
Requirements
- Python 3.11+
- Git
Setup Development Environment
git clone https://github.com/bobmatnyc/mcp-skillset.git
cd mcp-skillset
pip install -e ".[dev]"
Running from Source (Development Mode)
Use the ./mcp-skillset-dev script to run commands directly from source without installation:
# Run any CLI command
./mcp-skillset-dev --version
./mcp-skillset-dev search "debugging"
./mcp-skillset-dev serve --dev
# All arguments pass through
./mcp-skillset-dev info systematic-debugging
How it works:
- Sets PYTHONPATH to include the src/ directory
- Activates the local .venv if present
- Runs python -m mcp_skills.cli.main with all arguments
When to use:
- ✅ Rapid iteration during development
- ✅ Testing changes without reinstalling
- ✅ Debugging with source code modifications
- ❌ Production deployments (use pip install instead)
Installed vs. Source:
# Installed version (from pip install -e .)
mcp-skillset search "testing"
# Source version (no installation required)
./mcp-skillset-dev search "testing"
Run Tests
make quality
Performance Benchmarks
mcp-skillset includes comprehensive performance benchmarks to track and prevent regressions:
# Run all benchmarks (includes slow tests)
make benchmark
# Run fast benchmarks only (skip 10k skill tests)
make benchmark-fast
# Compare current performance with baseline
make benchmark-compare
Benchmark Categories:
- Indexing Performance: Measure time to index 100, 1000, and 10000 skills
- Search Performance: Track query latency (p50, p95, p99) for vector and hybrid search
- Database Performance: Benchmark SQLite operations (lookup, query, batch insert)
- Memory Usage: Monitor memory consumption during large-scale operations
Baseline Thresholds:
- Index 100 skills: < 10 seconds
- Index 1000 skills: < 100 seconds
- Search query (p50): < 100ms
- Search query (p95): < 500ms
- SQLite lookup by ID: < 1ms
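The p50/p95/p99 latency figures above can be reproduced with a small timing harness. This sketch uses only the standard library; `search` stands in for whatever query function you are measuring:

```python
# Sketch: measure query latency percentiles (p50/p95/p99) for a callable.
import statistics
import time

def measure_latencies(search, queries, repeats: int = 50) -> dict[str, float]:
    """Time repeated calls to search(q) and report latency percentiles in ms."""
    samples_ms = []
    for _ in range(repeats):
        for q in queries:
            start = time.perf_counter()
            search(q)
            samples_ms.append((time.perf_counter() - start) * 1000.0)
    # quantiles(n=100) yields the 1st..99th percentile cut points
    pct = statistics.quantiles(samples_ms, n=100)
    return {"p50": pct[49], "p95": pct[94], "p99": pct[98]}
```

In practice the project uses pytest-benchmark (via `make benchmark`) rather than a hand-rolled harness, but the percentile arithmetic is the same.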
Benchmark Results:
- Results are saved to the .benchmarks/ directory (git-ignored)
- Use make benchmark-compare to detect performance regressions
- CI/CD can be configured to fail on significant performance degradation
Example Output:
-------------------------- benchmark: 15 tests --------------------------
Name (time in ms) Min Max Mean StdDev
---------------------------------------------------------------------
test_vector_search_latency_100 45.2 52.1 47.8 2.1
test_lookup_by_id_single 0.3 0.8 0.4 0.1
test_hybrid_search_end_to_end 89.5 105.2 94.3 5.2
---------------------------------------------------------------------
Linting and Formatting
make lint-fix
Security Scanning
mcp-skillset includes comprehensive security scanning to identify vulnerabilities in dependencies and code:
Automated Security (Dependabot + GitHub Actions)
Dependabot automatically:
- Scans dependencies weekly for vulnerabilities
- Creates pull requests for security updates
- Groups minor/patch updates for easier review
GitHub Actions runs security scans on every push:
- Safety: Python dependency vulnerability scanner
- pip-audit: PyPI package vulnerability auditor
- Bandit: Python code security linter
- detect-secrets: Secret detection scanner
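A workflow wiring these scanners together might look like the following sketch. The job and step layout here are assumptions, not the project's actual workflow file:

```yaml
name: security
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install safety pip-audit bandit
      - run: pip-audit
      - run: bandit -r src/
      - run: safety check
```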
Manual Security Scanning
# Basic security scan (Safety + pip-audit)
make security-check
# Comprehensive security audit with reports
make security-check-full
# Install security scanning tools
make security-install
# Pre-publish with security checks
make pre-publish
Security Reports
After running make security-check-full, reports are saved to .security-reports/:
- safety-report.json - Dependency vulnerabilities
- pip-audit-report.json - Package vulnerabilities
- bandit-report.json - Code security issues
Security Policy
For vulnerability reporting and security best practices, see .github/SECURITY.md.
Key security features:
- Automated dependency scanning (Dependabot)
- Weekly security scans (GitHub Actions)
- Pre-publish security gate
- Secret detection (detect-secrets)
- Code security linting (Bandit)
Documentation
Architecture
See docs/architecture/README.md for detailed architecture design.
Skills Collections
See docs/skills/RESOURCES.md for a comprehensive index of skill repositories compatible with mcp-skillset, including:
- Official Anthropic skills
- Community collections (obra/superpowers, claude-mpm-skills, etc.)
- Toolchain-specific skills (Python, TypeScript, Rust, Go, Java)
- Operations & DevOps skills
- MCP servers that provide skill-like capabilities
Troubleshooting
Model Download Issues
If you encounter problems downloading the embedding model on first run:
1. Check Internet Connection
The model is downloaded from HuggingFace Hub. Verify you can reach:
curl -I https://huggingface.co
2. Manual Model Download
Pre-download the model manually if automatic download fails:
python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')"
This downloads the model to ~/.cache/huggingface/ and verifies it works.
3. Proxy Configuration
If behind a corporate proxy, configure environment variables:
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
export HF_ENDPOINT=https://huggingface.co # Or your mirror
4. Offline/Air-Gapped Installation
For environments without internet access:
On a machine with internet:

1. Download the model:

   python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')"

2. Package the model cache:

   cd ~/.cache/huggingface
   tar -czf sentence-transformers-model.tar.gz hub/

On the air-gapped machine:

1. Transfer sentence-transformers-model.tar.gz to the target machine

2. Extract it into the HuggingFace cache directory:

   mkdir -p ~/.cache/huggingface
   cd ~/.cache/huggingface
   tar -xzf /path/to/sentence-transformers-model.tar.gz

3. Install mcp-skillset (transfer the wheel if needed):

   pip install mcp-skillset  # Or install from a wheel

4. Verify the setup:

   mcp-skillset doctor
5. Custom Cache Location
If you need to use a different cache directory:
export HF_HOME=/custom/path/to/cache
export TRANSFORMERS_CACHE=/custom/path/to/cache
mcp-skillset setup
6. Disk Space Issues
Check available space in the cache directory:
df -h ~/.cache/huggingface
The model requires ~90MB, but allow ~100MB for temporary files during download.
7. Permission Issues
Ensure the cache directory is writable:
mkdir -p ~/.cache/huggingface
chmod 755 ~/.cache/huggingface
Common Issues
"Connection timeout" during model download
- Check internet connection and firewall settings
- Try manual download (see step 2 above)
- Configure proxy if behind corporate network (see step 3 above)
"No space left on device"
- Check disk space: df -h ~/.cache
- Clear the old HuggingFace cache: rm -rf ~/.cache/huggingface/*
- Use a custom cache location (see step 5 above)
"Permission denied" on cache directory
- Fix permissions: chmod 755 ~/.cache/huggingface
- Or use a custom cache location with proper permissions
Slow initial setup
- First run downloads ~90MB and builds indices
- Expected time: 2-10 minutes depending on connection speed and number of skills
- Subsequent runs use cached model and are much faster
Getting Help
If you encounter issues not covered here:
- Check GitHub Issues
- Review logs: ~/.mcp-skillset/logs/
- Run a health check: mcp-skillset doctor
- Open a new issue with:
  - Error message and stack trace
  - Output of mcp-skillset --version
  - Operating system and Python version
  - Steps to reproduce
Contributing
Contributions welcome! Please read our contributing guidelines first.
- Fork the repository
- Create a feature branch
- Make your changes
- Run make quality to ensure tests pass
- Submit a pull request
License
MIT License - see LICENSE for details.
Acknowledgments
- Built on the Model Context Protocol
- Inspired by Claude Skills
- Uses ChromaDB for vector search
- Embeddings via sentence-transformers
Links
- PyPI Package: mcp-skillset on PyPI
- Documentation: GitHub Wiki
- Issue Tracker: GitHub Issues
- MCP Registry: MCP Servers
- Publishing Guide: docs/publishing.md
Status: ✅ v0.1.0 - Production Ready | Test Coverage: 85-96% | Tests: 48 passing
File details

mcp_skillset-0.5.0.tar.gz
- Size: 91.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9932e0f502ba0c805a9949039ff419e948a5b3a2cc6945e5bb011befb3a6b937 |
| MD5 | 5eef41435eb1e047fc891bf84575d462 |
| BLAKE2b-256 | 31e639789134f4e2b71499eea60a5a272647392913b9b20b9717ff0cda957eef |

mcp_skillset-0.5.0-py3-none-any.whl
- Size: 96.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | fd591708c6c1c1af57dedc6c7652cbc7e9e3ef4647bf181d5acdea1147cc2e0e |
| MD5 | 647096651131a19b450541255a737b34 |
| BLAKE2b-256 | 23126bdaa2ec83fa9987456add3c50cd743cd105b736eefb4e6b5880fce1a622 |