
Model Context Protocol (MCP) Manager - a tool for managing MCP servers

Project description

MCPMan (MCP Manager)

MCPMan orchestrates interactions between LLMs and Model Context Protocol (MCP) servers, making it easy to create powerful agentic workflows.

Quick Start

Run MCPMan instantly, without installing it, using uvx:

# Use the calculator server to perform math operations
uvx mcpman -c server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 1567 * 329 and then divide by 58?"

# Use the datetime server to check time in different timezones
uvx mcpman -c server_configs/datetime_server_mcp.json -i gemini -m gemini-2.0-flash-001 -p "What time is it right now in Tokyo, London, and New York?"

# Use the filesystem server with Ollama for file operations
uvx mcpman -c server_configs/filesystem_server_mcp.json -i ollama -m llama3:8b -p "Create a file called example.txt with a sample Python function, then read it back to me"

# Use the filesystem server with LMStudio's local models
uvx mcpman -c server_configs/filesystem_server_mcp.json -i lmstudio -m qwen2.5-7b-instruct-1m -p "Create a simple JSON file with sample data and read it back to me"

You can also run a one-off execution directly from GitHub with uvx:

uvx --from git+https://github.com/ericflo/mcpman.git mcpman -c server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 256 * 432?"

Core Features

  • One-command setup: Manage and launch MCP servers directly
  • Tool orchestration: Automatically connect LLMs to any MCP-compatible tool
  • Detailed logging: JSON structured logs for every interaction
  • Multiple LLM support: Works with OpenAI, Google Gemini, Ollama, LM Studio, and more
  • Flexible configuration: Supports stdio and SSE server communication

Installation

# Install with pip
pip install mcpman

# Install with uv
uv pip install mcpman

# Install from GitHub
uv pip install git+https://github.com/ericflo/mcpman.git

Basic Usage

mcpman -c <CONFIG_FILE> -i <IMPLEMENTATION> -m <MODEL> -p "<PROMPT>"

Examples:

# Use local models with Ollama for filesystem operations
mcpman -c ./server_configs/filesystem_server_mcp.json \
       -i ollama \
       -m codellama:13b \
       -p "Create a simple bash script that counts files in the current directory and save it as count.sh"

# Use OpenAI with multi-server config
mcpman -c ./server_configs/multi_server_mcp.json \
       -i openai \
       -m gpt-4.1-mini \
       -s "You are a helpful assistant. Use tools effectively." \
       -p "Calculate 753 * 219 and tell me what time it is in Sydney, Australia"

Server Configuration

MCPMan uses JSON configuration files to define the MCP servers. Examples:

Calculator Server:

{
  "mcpServers": {
    "calculator": {
      "command": "python",
      "args": ["-m", "mcp_servers.calculator"],
      "env": {}
    }
  }
}
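
For reference, a server module like mcp_servers.calculator could be written with the FastMCP helper from the official MCP Python SDK. This is a minimal hypothetical sketch, not the package's actual code:

# calculator_sketch.py - hypothetical minimal MCP calculator server
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calculator")

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

@mcp.tool()
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, matching the config above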

DateTime Server:

{
  "mcpServers": {
    "datetime": {
      "command": "python",
      "args": ["-m", "mcp_servers.datetime_utils"],
      "env": {}
    }
  }
}

Filesystem Server:

{
  "mcpServers": {
    "filesystem": {
      "command": "python",
      "args": ["-m", "mcp_servers.filesystem_ops"],
      "env": {}
    }
  }
}
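
Multi-Server Config (sketch):

The multi_server_mcp.json used in the Basic Usage examples is not reproduced here; assuming it follows the same schema, combining servers is a matter of adding more entries under mcpServers:

{
  "mcpServers": {
    "calculator": {
      "command": "python",
      "args": ["-m", "mcp_servers.calculator"],
      "env": {}
    },
    "datetime": {
      "command": "python",
      "args": ["-m", "mcp_servers.datetime_utils"],
      "env": {}
    }
  }
}

For SSE servers (see Core Features), many MCP clients accept a url field in place of command/args; MCPMan's exact SSE schema may differ, so treat this hypothetical entry as an assumption:

{
  "mcpServers": {
    "remote_tools": {
      "url": "http://localhost:8000/sse"
    }
  }
}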

Key Options

Option                          Description
-c, --config <PATH>             Path to MCP server config file
-i, --implementation <IMPL>     LLM implementation (openai, gemini, ollama, lmstudio)
-m, --model <MODEL>             Model name (gpt-4.1-mini, gemini-2.0-flash-001, llama3:8b, qwen2.5-7b-instruct-1m, etc.)
-p, --prompt <PROMPT>           User prompt (text or file path)
-s, --system <MESSAGE>          Optional system message
--base-url <URL>                Custom endpoint URL
--temperature <FLOAT>           Sampling temperature (default: 0.7)
--max-tokens <INT>              Maximum response tokens
--no-verify                     Disable task verification

API keys are set via environment variables: OPENAI_API_KEY, GEMINI_API_KEY, etc.
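
For example, in a POSIX shell (the key values below are placeholders):

export OPENAI_API_KEY="sk-..."   # used with -i openai
export GEMINI_API_KEY="..."      # used with -i gemini
mcpman -c ./server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 12 * 34?"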

Why MCPMan?

  • Standardized interaction: Unified interface for diverse tools
  • Simplified development: Abstract away LLM-specific tool call formats
  • Debugging support: Detailed JSONL logs for every step in the agent process
  • Local or cloud: Works with both local and cloud-based LLMs

Currently Supported LLMs

  • OpenAI (GPT-4.1, GPT-4.1-mini, GPT-4.1-nano)
  • Google Gemini (gemini-2.0-flash-001, etc.)
  • OpenRouter
  • Ollama (llama3, codellama, etc.)
  • LM Studio (Qwen, Mistral, and other local models)

Development Setup

# Clone and setup
git clone https://github.com/ericflo/mcpman.git
cd mcpman

# Create environment and install deps
uv venv
source .venv/bin/activate  # Linux/macOS
# or .venv\Scripts\activate  # Windows
uv pip install -e ".[dev]"

# Run tests
pytest tests/

Project Structure

  • src/mcpman/: Core source code
  • mcp_servers/: Example MCP servers for testing
  • server_configs/: Example configuration files
  • logs/: Auto-generated structured JSONL logs

License

Licensed under the Apache License 2.0.

Download files

Source Distribution

mcpman-0.1.2.tar.gz (29.3 kB)
  • Tags: Source
  • Uploaded via: twine/6.1.0 CPython/3.12.9
  • SHA256: a486d2d4944981fae86b2768031a1e6463da5d3088f7cff2a97cb7a070c95666
  • MD5: 9fafc61ec85e439def27df97a3ca154c
  • BLAKE2b-256: 899de5b63977a9f8767f876816905d8b6dde1146192495a55356453ae243d433

Built Distribution

mcpman-0.1.2-py3-none-any.whl (31.1 kB)
  • Tags: Python 3
  • Uploaded via: twine/6.1.0 CPython/3.12.9
  • SHA256: 72b02bbfc9431ebd5724d4585c2771f12644ca3f4e9be378d128a5b94a435ca9
  • MD5: 59e671dd6c78cb02c8f8c6a7b3d098ce
  • BLAKE2b-256: 15123bb067637a96f19f30287822193c6931e2e54ba9cc9a69f9a42988f94b95
