MCP server and library: read/write files, run commands, list files — with strict prompts for user-provided LLMs.

mcp-agent-tools

Workspace-scoped MCP tools for building Cursor-style agents: read_file, write_file, run_command, list_files. Includes strict, versioned system prompts (SYSTEM_PROMPT_V1) and OpenAI-style tool definitions so your app can wire any LLM with one import.

The LLM and API keys stay in your app. This package provides tool execution, sandboxing, and prompts—not a hosted model.

Install

pip install mcp-agent-tools

Editable / dev:

pip install -e ".[dev]"

Tier A — Cursor (or any MCP client)

1. Pick a workspace directory (only paths under this root are allowed).

2. Add a server entry (stdio). Example for a global MCP config (paths use forward slashes on Windows):

{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": [],
      "env": {
        "MCP_AGENT_TOOLS_ROOT": "D:/your/project"
      }
    }
  }
}

Or with an explicit CLI root (overrides env for that process):

{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": ["--root", "D:/your/project"]
    }
  }
}

3. Paste SYSTEM_PROMPT_V1 (from mcp_agent_tools.prompts or below) into your host’s system prompt if the client does not load server instructions automatically.

Environment variables

  • MCP_AGENT_TOOLS_ROOT — Required unless --root is passed. Absolute path of the workspace root.
  • MCP_AGENT_TOOLS_MAX_READ_BYTES — Max bytes per read (default 512000).
  • MCP_AGENT_TOOLS_COMMAND_TIMEOUT — Subprocess timeout in seconds (default 120).
  • MCP_AGENT_TOOLS_MAX_COMMAND_OUTPUT_BYTES — Truncation limit for combined stdout/stderr (default 256000).
  • MCP_AGENT_TOOLS_LIST_MAX_ENTRIES — Cap on entries returned by list_files (default 2000).
  • MCP_AGENT_TOOLS_ALLOWED_COMMANDS — Comma-separated basenames allowed as argv[0] (e.g. python,uv,node). If unset, all commands are allowed inside the sandbox.
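The allowlist rule above (comma-separated basenames matched against argv[0], everything allowed when unset) can be sketched in plain Python. This is an illustration of the documented behavior, not the library's actual implementation; the function name is hypothetical:

```python
import os

def command_allowed(argv0: str, env: dict) -> bool:
    """Sketch of the MCP_AGENT_TOOLS_ALLOWED_COMMANDS check: compare the
    basename of argv[0] against the comma-separated allowlist. If the
    variable is unset, every command is allowed."""
    raw = env.get("MCP_AGENT_TOOLS_ALLOWED_COMMANDS")
    if raw is None:
        return True
    allowed = {name.strip() for name in raw.split(",") if name.strip()}
    return os.path.basename(argv0) in allowed

env = {"MCP_AGENT_TOOLS_ALLOWED_COMMANDS": "python,uv,node"}
print(command_allowed("/usr/bin/python", env))  # True: basename is allowlisted
print(command_allowed("bash", env))             # False: not in the allowlist
print(command_allowed("bash", {}))              # True: no allowlist set
```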

Tier B — Python app (same tools + prompts in-process)

Easiest: one folder, all tools stay inside it

Pick one directory. Every path you pass is relative to that folder. Commands run with that folder as the default working directory (unless you pass cwd= relative to the same root).

from mcp_agent_tools import AgentWorkspace

# Only files under D:/my-project can be read, written, or listed.
# run_command() defaults to running in this folder.
ws = AgentWorkspace(r"D:\my-project")

ws.write_file("out/hello.txt", "hi\n")
print(ws.read_file("out/hello.txt"))
print(ws.list_files(".", recursive=False))
print(ws.run_command(["python", "-c", "print(1+1)"]))

Optional limits (same as config_from_root):

ws = AgentWorkspace(
    r"D:\my-project",
    allowed_commands=frozenset({"python", "uv"}),
    command_timeout_sec=60.0,
)

Lower-level (same sandbox)

from mcp_agent_tools import (
    WorkspaceConfig,
    WorkspaceTools,
    SYSTEM_PROMPT_V1,
    OPENAI_TOOL_DEFINITIONS,
    run_agent_loop,
)
from mcp_agent_tools.config import config_from_root

cfg = config_from_root("/path/to/project")
tools = WorkspaceTools(cfg)

# Direct calls (no MCP subprocess)
print(tools.read_file("README.md"))

# Optional one-function loop; you implement `complete(messages, tools)` using your LLM SDK
def complete(messages, tool_defs):
    ...  # call your provider; return assistant dict or OpenAI-style {"choices":[{"message":...}]}

answer = run_agent_loop(complete, "Summarize this repo", cfg, system_prompt=SYSTEM_PROMPT_V1)
# Or pass an AgentWorkspace instead of cfg:
# answer = run_agent_loop(complete, "...", ws, system_prompt=SYSTEM_PROMPT_V1)
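The comment above says complete may return either a bare assistant dict or an OpenAI-style {"choices": [{"message": ...}]} envelope. A sketch of normalizing the two shapes (the helper name is hypothetical; run_agent_loop presumably does something equivalent internally):

```python
def normalize_completion(response: dict) -> dict:
    """Accept either a bare assistant message dict or an OpenAI-style
    {"choices": [{"message": ...}]} envelope; return the message dict."""
    if "choices" in response:
        return response["choices"][0]["message"]
    return response

# Both shapes yield the same assistant message:
bare = {"role": "assistant", "content": "hi"}
wrapped = {"choices": [{"message": {"role": "assistant", "content": "hi"}}]}
print(normalize_completion(bare)["content"])     # hi
print(normalize_completion(wrapped)["content"])  # hi
```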

Compose the system message:

final_system = SYSTEM_PROMPT_V1 + "\n\n" + "Your org rules here."

Imports reference

  • AgentWorkspace — pass a directory path; use read_file / write_file / list_files / run_command on that tree only
  • SYSTEM_PROMPT_V1, SYSTEM_PROMPT_CHANGELOG, TOOL_DESCRIPTIONS
  • OPENAI_TOOL_DEFINITIONS — same shapes as MCP tools (for tools= in chat completions)
  • build_server(config) — build a FastMCP app (stdio via build_server(cfg).run())
  • run_agent_loop — minimal multi-turn executor with your complete callable (accepts WorkspaceConfig or AgentWorkspace)

Safety model

  • Python: all paths are resolved under the directory you passed to AgentWorkspace(...) or config_from_root(...).
  • MCP / CLI: same rule via MCP_AGENT_TOOLS_ROOT or --root (no .. escape).
  • run_command uses argv only (no shell). Optional allowlist via MCP_AGENT_TOOLS_ALLOWED_COMMANDS.
  • Subprocess inherits the current environment; avoid passing secrets you do not want child processes to see.
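The path rule above (resolve every path under the workspace root, no .. escape) can be sketched in a few lines of plain Python. This is an illustration of the rule, not the package's actual code; the function name is hypothetical:

```python
from pathlib import Path

def resolve_in_root(root: str, user_path: str) -> Path:
    """Sketch of the sandbox rule: resolve the user-supplied path against
    the workspace root and reject anything that escapes it."""
    base = Path(root).resolve()
    candidate = (base / user_path).resolve()
    if candidate != base and base not in candidate.parents:
        raise PermissionError(f"{user_path!r} escapes the workspace root")
    return candidate

print(resolve_in_root("/tmp/ws", "src/main.py"))   # resolves under the root
# resolve_in_root("/tmp/ws", "../etc/passwd")      # raises PermissionError
```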

CLI

mcp-agent-tools --root D:/your/project

Runs the MCP server on stdio (default for Cursor).

Agent loop (conceptual)

  1. System = SYSTEM_PROMPT_V1 (+ optional suffix).
  2. User message + OPENAI_TOOL_DEFINITIONS → your LLM.
  3. For each tool_call, run WorkspaceTools.dispatch (or MCP call_tool).
  4. Append tool results; repeat until the model returns text without tools.

run_agent_loop implements steps 2–4 given your complete() function.
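Steps 2–4 can be sketched as a self-contained loop with a stubbed model and tool dispatcher. This is not the real run_agent_loop, and the tool-call dict shape below is simplified for illustration (the real OpenAI schema nests arguments as a JSON string under "function"):

```python
def agent_loop(complete, user_msg, tool_defs, dispatch, system_prompt="You are a coding agent."):
    """Minimal sketch: call the model, execute any requested tools,
    feed results back, and stop once the model answers with plain text."""
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg}]
    while True:
        msg = complete(messages, tool_defs)
        messages.append(msg)
        calls = msg.get("tool_calls") or []
        if not calls:
            return msg["content"]
        for call in calls:
            messages.append({"role": "tool",
                             "tool_call_id": call.get("id"),
                             "content": dispatch(call["name"], call["arguments"])})

def fake_complete(messages, tool_defs):
    # First turn: request a tool call; after a tool result: answer in text.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"id": "1", "name": "read_file",
                                "arguments": {"path": "README.md"}}]}
    return {"role": "assistant", "content": "done"}

def fake_dispatch(name, args):
    return f"<contents of {args['path']}>"

print(agent_loop(fake_complete, "Summarize this repo", [], fake_dispatch))  # done
```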

Publishing to PyPI

The distribution name in pyproject.toml is mcp-agent-tools. Before the first release, check https://pypi.org/project/mcp-agent-tools/ — if the name is taken, change name in pyproject.toml (and update the pip install ... docs).

  1. Account — Create an account on PyPI (enable 2FA). Optionally practice on TestPyPI.

  2. API token — PyPI → Account settings → API tokens → scope “Entire account” or a token limited to this project after the first upload.

  3. Version — Bump the version field in pyproject.toml (e.g. version = "0.1.1") for every new release; versions must follow PEP 440.

  4. Build (from the repo root):

    pip install build twine
    python -m build
    

    This creates dist/mcp_agent_tools-<version>-py3-none-any.whl and a .tar.gz.

  5. Check

    twine check dist/*

  6. Upload (test first)

    twine upload --repository testpypi dist/*
    

    Install with: pip install -i https://test.pypi.org/simple/ mcp-agent-tools

  7. Upload (production)

    twine upload dist/*
    

    Twine will prompt for a username (enter __token__) and a password (paste your PyPI API token).

After publishing, users install with:

pip install mcp-agent-tools

Trusted publishing (GitHub Actions → PyPI without a long-lived token) is described in the PyPI publishing guide.

License

MIT
