Nexus MCP
Model Context Protocol server for AI CLI agents
An MCP server that enables AI models to invoke AI CLI agents (Gemini CLI, Codex, Claude Code) as tools. Provides parallel execution, automatic retries with exponential backoff, JSON-first response parsing, and structured output through three MCP tools.
Use Cases
Nexus MCP is useful whenever a task benefits from querying multiple AI agents in parallel rather than sequentially:
- Research & summarization — fan out a topic to multiple agents, then synthesize their responses into a single summary with diverse perspectives
- Code review — send different files or review angles (security, correctness, style) to separate agents simultaneously
- Multi-model comparison — prompt the same question to different models and compare outputs side-by-side for quality or consistency
- Bulk content generation — generate multiple test cases, translations, or documentation pages concurrently instead of one at a time
- Second-opinion workflows — get independent answers from separate agents before making a decision, reducing single-model bias
Features
- Parallel execution — `batch_prompt` fans out tasks with `asyncio.gather` and a configurable semaphore (default concurrency: 3)
- Automatic retries — exponential backoff with full jitter for transient errors (HTTP 429/503)
- Output handling — JSON-first parsing, brace-depth fallback for noisy stdout, temp-file spillover for outputs exceeding 50 KB
- Execution modes — `default` (safe), `sandbox` (restricted), `yolo` (full auto-approve)
- CLI detection — auto-detects binary path, version, and JSON output capability at startup
- Session preferences — set defaults for execution mode and model once per session; subsequent calls inherit them without repeating parameters
- Tool timeouts — configurable safety timeout (default 15 min) cancels long-running tool calls to prevent the server from blocking indefinitely
- Extensible — implement `build_command` + `parse_output` and register the runner in `RunnerFactory`
| Agent | Status |
|---|---|
| Gemini CLI | Supported |
| Codex | Supported |
| Claude Code | Planned |
Usage
Note: Currently `gemini` and `codex` are supported; `claude` runner support is planned.
Once nexus-mcp is configured in your MCP client, your AI assistant automatically sees its tools.
The reliable trigger is explicitly asking for output from an external AI agent (e.g. Gemini, Codex).
Generic "do this in parallel" prompts may be handled by the host AI's own capabilities instead.
Because agent is a required parameter, the assistant typically calls list_agents first to discover
what's available, then fans out your request accordingly.
Parameter Reference
batch_prompt
| Parameter | Required | Default | Description |
|---|---|---|---|
| `tasks` | Yes | — | List of task objects (see below) |
| `max_concurrency` | No | `3` | Max parallel agent invocations |
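The semaphore-bounded fan-out that `max_concurrency` controls follows a standard asyncio pattern; a minimal sketch with a stand-in agent call (`bounded_gather` and `fake_agent` are illustrative names, not part of the package API):

```python
import asyncio


async def bounded_gather(coro_fns, max_concurrency: int = 3):
    """Fan tasks out with asyncio.gather, but let at most
    max_concurrency of them run at once via a semaphore."""
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(fn):
        async with sem:
            return await fn()

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run_one(fn) for fn in coro_fns))


async def fake_agent(n: int) -> str:
    await asyncio.sleep(0)  # stand-in for a real subprocess invocation
    return f"result-{n}"
```

Passing callables (rather than already-started coroutines) lets the semaphore delay the start of each task, not just its completion.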
Task object fields:
| Field | Required | Default | Description |
|---|---|---|---|
| `agent` | Yes | — | Agent name (e.g. `"gemini"`) |
| `prompt` | Yes | — | Prompt text |
| `label` | No | auto | Display label for results (auto-assigned from agent name if omitted) |
| `context` | No | `{}` | Optional context metadata dict |
| `execution_mode` | No | session pref or `"default"` | `"default"`, `"sandbox"`, or `"yolo"` |
| `model` | No | session pref or CLI default | Model name override |
| `max_retries` | No | env default | Max retry attempts for transient errors |
prompt
| Parameter | Required | Default | Description |
|---|---|---|---|
| `agent` | Yes | — | Agent name |
| `prompt` | Yes | — | Prompt text |
| `context` | No | `{}` | Optional context metadata dict |
| `execution_mode` | No | session pref or `"default"` | `"default"`, `"sandbox"`, or `"yolo"` |
| `model` | No | session pref or CLI default | Model name override |
| `max_retries` | No | env default | Max retry attempts for transient errors |
list_agents
No parameters.
set_preferences
| Parameter | Required | Default | Description |
|---|---|---|---|
| `execution_mode` | No | — | `"default"`, `"sandbox"`, or `"yolo"` |
| `model` | No | — | Model name (e.g. `"gemini-2.5-flash"`) |
| `clear_execution_mode` | No | `false` | Clear execution mode (takes precedence if `execution_mode` is also provided) |
| `clear_model` | No | `false` | Clear model (takes precedence if `model` is also provided) |
get_preferences
No parameters. Returns a dict with execution_mode and model keys (null when unset).
clear_preferences
No parameters. Resets all session preferences.
Fan out a research question (batch_prompt)
You say to your AI assistant:
"Get Gemini's perspective on transformer architectures — I want its summary of the Attention Is All You Need paper, its view on the main limitations, and its list of real-world applications beyond NLP."
Your AI assistant first calls list_agents to discover available agents:
```json
{}
```
Response: `["gemini"]`
Then calls batch_prompt with the discovered agent:
```json
{
  "tasks": [
    { "agent": "gemini", "prompt": "Summarize the key findings of the Attention Is All You Need paper", "label": "summary" },
    { "agent": "gemini", "prompt": "What are the main limitations of transformer architectures?", "label": "limitations" },
    { "agent": "gemini", "prompt": "List 3 real-world applications of transformers beyond NLP", "label": "applications" }
  ]
}
```
Agent discovery happens once per session; subsequent examples skip the list_agents step.
Code review from multiple angles (batch_prompt)
You say to your AI assistant:
"Have Gemini review this diff from three angles in parallel: security vulnerabilities, logic errors, and style issues."
Your AI assistant calls batch_prompt:
```json
{
  "tasks": [
    { "agent": "gemini", "prompt": "Review this diff for security vulnerabilities:\n\n<paste diff>", "label": "security" },
    { "agent": "gemini", "prompt": "Review this diff for correctness and logic errors:\n\n<paste diff>", "label": "correctness" },
    { "agent": "gemini", "prompt": "Review this diff for style and maintainability:\n\n<paste diff>", "label": "style" }
  ]
}
```
Single-agent prompt (prompt)
You say to your AI assistant:
"Ask Gemini Flash to explain the difference between TCP and UDP in simple terms."
Your AI assistant calls prompt:
```json
{
  "agent": "gemini",
  "prompt": "Explain the difference between TCP and UDP in simple terms",
  "model": "gemini-2.5-flash"
}
```
Session preferences (set_preferences)
You say to your AI assistant:
"For the rest of this session, use YOLO mode with Gemini Flash — I don't want to repeat those settings on every call."
Your AI assistant calls set_preferences once:
```json
{
  "execution_mode": "yolo",
  "model": "gemini-2.5-flash"
}
```
Response:
```text
Preferences set: {"execution_mode": "yolo", "model": "gemini-2.5-flash"}
```
Subsequent prompt and batch_prompt calls omit those fields — they inherit from the session:
```json
{
  "agent": "gemini",
  "prompt": "Summarize the latest developments in Rust's async ecosystem"
}
```
The fallback chain is: explicit parameter → session preference → system default.
To override for one call, pass the parameter directly — it takes precedence without changing the session.
To clear a single preference, use set_preferences with the corresponding clear_* flag (e.g. clear_model: true).
Managing Session Preferences
| Operation | Tool | Notes |
|---|---|---|
| Set one or both fields | `set_preferences` | Pass only the fields you want to change |
| Read current values | `get_preferences` | Returns `{execution_mode, model}` with `null` for unset fields |
| Clear all fields | `clear_preferences` | Reverts to per-call defaults |
| Clear one preference | `set_preferences` with `clear_model: true` or `clear_execution_mode: true` | Other preference is preserved |
MCP Tools
All prompt tools run as background tasks — they return a task ID immediately so the client can poll for results, preventing MCP timeouts for long operations (e.g. YOLO mode: 2–5 minutes).
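The return-a-task-ID-then-poll pattern behind this is roughly the following generic asyncio sketch (not the server's actual implementation; `start` and `poll` are hypothetical names):

```python
import asyncio
import uuid

_tasks: dict[str, asyncio.Task] = {}


async def start(coro) -> str:
    """Schedule the (possibly long) operation and return a task ID immediately."""
    task_id = uuid.uuid4().hex
    _tasks[task_id] = asyncio.create_task(coro)
    return task_id


async def poll(task_id: str):
    """None while the task is still running; its result once finished."""
    task = _tasks[task_id]
    return task.result() if task.done() else None
```

Because the tool call returns right away, the MCP client's own request timeout never races against a multi-minute agent run.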
| Tool | Task? | Description |
|---|---|---|
| `batch_prompt` | Yes | Fan out prompts to multiple agents in parallel; returns `MultiPromptResponse` |
| `prompt` | Yes | Single-agent convenience wrapper; routes to `batch_prompt` |
| `list_agents` | No | Returns list of supported agent names |
| `set_preferences` | No | Set or selectively clear session defaults for execution mode and model |
| `get_preferences` | No | Retrieve current session preferences |
| `clear_preferences` | No | Reset all session preferences |
Installation
Run with uvx (recommended)
```shell
uvx nexus-mcp
```
uvx installs the package in an ephemeral virtual environment and runs it — no cloning required.
MCP Client Configuration
Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
```json
{
  "mcpServers": {
    "nexus-mcp": {
      "command": "uvx",
      "args": ["nexus-mcp"]
    }
  }
}
```
Cursor (.cursor/mcp.json in your project or ~/.cursor/mcp.json globally):
```json
{
  "mcpServers": {
    "nexus-mcp": {
      "command": "uvx",
      "args": ["nexus-mcp"]
    }
  }
}
```
Claude Code (CLI):
```shell
claude mcp add nexus-mcp uvx nexus-mcp
```
Generic stdio config (any MCP-compatible client):
```json
{
  "command": "uvx",
  "args": ["nexus-mcp"],
  "transport": "stdio"
}
```
Tip: Pass environment variables (e.g. `NEXUS_GEMINI_MODEL`) via your client's `env` key.
Quick Start
Prerequisites
Required:
- Python 3.13+
- uv dependency manager
```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
uv --version  # Verify installation
```
Optional (for integration tests):
- Gemini CLI v0.6.0+ — `npm install -g @google/gemini-cli`
- Codex — check with `codex --version`
- Claude Code — check with `claude --version`
Note: Integration tests are optional. Unit tests run without CLI dependencies via subprocess mocking.
Setup for Development
```shell
# 1. Clone the repository
git clone <repository-url>
cd nexus-mcp

# 2. Install dependencies
uv sync

# 3. Install pre-commit hooks (runs linting/formatting on commit)
uv run pre-commit install

# 4. Verify installation
uv run pytest               # Run tests
uv run mypy src/nexus_mcp   # Type checking
uv run ruff check .         # Linting

# 5. Run the MCP server
uv run python -m nexus_mcp
```
Configuration
Global Environment Variables
| Variable | Default | Description |
|---|---|---|
| `NEXUS_OUTPUT_LIMIT_BYTES` | `50000` | Max output size in bytes before temp-file spillover |
| `NEXUS_TIMEOUT_SECONDS` | `600` | Subprocess timeout in seconds (10 minutes) |
| `NEXUS_TOOL_TIMEOUT_SECONDS` | `900` | Tool-level timeout in seconds (15 minutes); set to `0` to disable |
| `NEXUS_RETRY_MAX_ATTEMPTS` | `3` | Max attempts including the first (set to `1` to disable retries) |
| `NEXUS_RETRY_BASE_DELAY` | `2.0` | Base seconds for exponential backoff |
| `NEXUS_RETRY_MAX_DELAY` | `60.0` | Maximum seconds to wait between retries |
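Under these settings, exponential backoff with full jitter reduces to drawing each delay uniformly from a capped, doubling window. A sketch (`retry_delay` is an illustrative name, not part of the package API):

```python
import random


def retry_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Full jitter: sleep a uniform random amount in [0, min(cap, base * 2**attempt)].

    attempt is 0-based, so the windows are [0,2], [0,4], [0,8], ... capped at 60s.
    """
    return random.uniform(0, min(cap, base * (2**attempt)))
```

Full jitter (rather than sleeping the full exponential value) spreads retries out so concurrent tasks hitting the same 429/503 don't retry in lockstep.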
Agent-Specific Environment Variables
Pattern: NEXUS_{AGENT}_{KEY} (agent name uppercased)
| Variable | Description |
|---|---|
| `NEXUS_GEMINI_PATH` | Override Gemini CLI binary path |
| `NEXUS_GEMINI_MODEL` | Default Gemini model (e.g. `gemini-2.5-flash`) |
| `NEXUS_CODEX_PATH` | Override Codex CLI binary path |
| `NEXUS_CODEX_MODEL` | Default Codex model |
Development Workflow
Adding Dependencies
```shell
# Production dependencies
uv add fastmcp pydantic

# Development dependencies
uv add --dev pytest pytest-asyncio mypy ruff

# Sync environment after changes
uv sync
```
Code Quality
All quality checks run automatically via pre-commit hooks. Run manually:
```shell
# Lint and format
uv run ruff check .        # Check for issues
uv run ruff check --fix .  # Auto-fix issues
uv run ruff format .       # Format code

# Type checking (strict mode)
uv run mypy src/nexus_mcp

# Run all pre-commit hooks manually
uv run pre-commit run --all-files
```
Testing
This project follows Test-Driven Development (TDD) with strict Red→Green→Refactor cycles.
```shell
# Run all tests
uv run pytest

# Run with coverage report
uv run pytest --cov=nexus_mcp --cov-report=term-missing

# Run specific test types
uv run pytest -m integration        # Integration tests (requires CLIs)
uv run pytest -m "not integration"  # Unit tests only
uv run pytest -m "not slow"         # Skip slow tests

# Run specific test file
uv run pytest tests/unit/runners/test_gemini.py
```
Test markers:
- `@pytest.mark.integration` — requires real CLI installations
- `@pytest.mark.slow` — tests taking >1 second
Project Structure
```text
nexus-mcp/
├── src/nexus_mcp/
│   ├── __main__.py        # Entry point
│   ├── server.py          # FastMCP server + tools
│   ├── types.py           # Pydantic models
│   ├── exceptions.py      # Exception hierarchy
│   ├── config.py          # Environment variable config
│   ├── process.py         # Subprocess wrapper
│   ├── parser.py          # JSON→text fallback parsing
│   ├── cli_detector.py    # CLI binary detection + version checks
│   └── runners/
│       ├── base.py        # Protocol + ABC
│       ├── factory.py     # RunnerFactory
│       └── gemini.py      # GeminiRunner
├── tests/
│   ├── unit/              # Fast, mocked tests
│   ├── integration/       # Real CLI tests
│   └── fixtures.py        # Shared test utilities
├── .github/
│   └── workflows/         # CI, security, dependabot
├── pyproject.toml         # Dependencies + tool config
└── .pre-commit-config.yaml  # Git hooks configuration
```
Common Commands
```shell
# Start MCP server
uvx nexus-mcp               # Recommended (no clone needed)
uv run python -m nexus_mcp  # Development (from cloned repo)

# Run TDD cycle
uv run pytest --cov=nexus_mcp -v

# Code quality checks
uv run ruff check . && uv run ruff format .
uv run mypy src/nexus_mcp

# Pre-commit hooks
uv run pre-commit run --all-files
```
Python Requirements
- Python 3.13+ required for modern syntax:
  - `type` keyword for type aliases: `type AgentName = str`
  - Union syntax: `str | None` (not `Optional[str]`)
  - `match` statements for complex conditionals
  - No `from __future__ import annotations`
Tool Configuration
- Ruff: line length 100, 17 rule sets (E/F/I/W + UP/FA/B/C4/SIM/RET/ICN/TID/TC/ISC/PTH/TD/NPY) — `pyproject.toml → [tool.ruff]`
- Mypy: strict mode, all type annotations required — `pyproject.toml → [tool.mypy]`
- Pytest: `asyncio_mode = "auto"`, no `@pytest.mark.asyncio` needed — `pyproject.toml → [tool.pytest.ini_options]`
- Pre-commit: ruff-check, ruff-format, mypy, trailing-whitespace, end-of-file-fixer — `.pre-commit-config.yaml`
License
MIT
File details
Details for the file nexus_mcp-0.4.0.tar.gz.
File metadata
- Size: 152.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `9d79a813aac340e865409a363949ac532373beb15ad000aa3d153f54cc935dca` |
| MD5 | `ccb632296b78348dfee0d40248c414e3` |
| BLAKE2b-256 | `a16bbe66c5ae653a127bdf7f427c0e5f1470e0f7102861c01f2fa228f33b1d00` |
Provenance
The following attestation bundles were made for nexus_mcp-0.4.0.tar.gz:
Publisher: release.yml on j7an/nexus-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nexus_mcp-0.4.0.tar.gz
- Subject digest: 9d79a813aac340e865409a363949ac532373beb15ad000aa3d153f54cc935dca
- Sigstore transparency entry: 1054311673
- Permalink: j7an/nexus-mcp@96fa661c972cd11a95c896fb5b136df0ee44d5e8
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/j7an
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@96fa661c972cd11a95c896fb5b136df0ee44d5e8
- Trigger Event: push
File details
Details for the file nexus_mcp-0.4.0-py3-none-any.whl.
File metadata
- Size: 31.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `55689d18fa04c124efd9ca049c8f23d0ed4db46ff31e614a25fdc95ddc17d8fc` |
| MD5 | `50916d53b84084c8e311b00115dcc2d7` |
| BLAKE2b-256 | `a62996194a970a9031376d38b6e7b9617a90a645c3b220059debbee204e4259b` |
Provenance
The following attestation bundles were made for nexus_mcp-0.4.0-py3-none-any.whl:
Publisher: release.yml on j7an/nexus-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nexus_mcp-0.4.0-py3-none-any.whl
- Subject digest: 55689d18fa04c124efd9ca049c8f23d0ed4db46ff31e614a25fdc95ddc17d8fc
- Sigstore transparency entry: 1054311788
- Permalink: j7an/nexus-mcp@96fa661c972cd11a95c896fb5b136df0ee44d5e8
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/j7an
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@96fa661c972cd11a95c896fb5b136df0ee44d5e8
- Trigger Event: push