MCP server and library: read/write files, run commands, list files — with strict prompts for user-provided LLMs.

mcp-agent-tools

Workspace-scoped MCP tools for building Cursor-style agents: read_file, write_file, edit_file, run_command, list_files. Includes strict, versioned system prompts (SYSTEM_PROMPT_V1) and OpenAI-style tool definitions so your app can wire any LLM with one import.

The LLM and API keys stay in your app. This package provides tool execution, sandboxing, and prompts—not a hosted model.

Install

pip install mcp-agent-tools

Editable / dev:

pip install -e ".[dev]"

Tools

All tools are scoped to a single workspace root. Paths are relative to that root (absolute paths are accepted only if they resolve under it). The same operations are available over MCP (the mcp-agent-tools server) and in-process via **AgentWorkspace** / **WorkspaceTools**.

| Tool | Purpose |
| --- | --- |
| **read_file** | Read a UTF-8 text file; optional line range and byte cap. |
| **write_file** | Create or overwrite/append UTF-8 text; creates parent directories. |
| **edit_file** | Search-and-replace in an existing UTF-8 file: non-empty old_string, optional replace_all. With replace_all=false, old_string must match exactly once (use surrounding context from read_file for uniqueness). Invalid UTF-8 returns an error instead of corrupting binary data. |
| **list_files** | List directory entries with optional recursion, glob, depth cap, dotfile control. |
| **run_command** | Run a subprocess from an **argv list only** (no shell); optional cwd under the root. |

For LLM integrations, tool shapes and descriptions are centralized in **OPENAI_TOOL_DEFINITIONS** and **TOOL_DESCRIPTIONS**; agent behavior is guided by **SYSTEM_PROMPT_V1**.
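The entries in **OPENAI_TOOL_DEFINITIONS** follow the standard OpenAI function-tool schema. As a hand-written illustration of that shape only (the field values below are assumptions, not the package's actual definitions):

```python
# Illustration of the OpenAI function-tool schema that OPENAI_TOOL_DEFINITIONS
# entries follow. This dict is hand-written for the example; the real package
# definitions may differ in wording and parameters.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a UTF-8 text file under the workspace root.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Path relative to the workspace root.",
                },
            },
            "required": ["path"],
        },
    },
}
```

Because the shapes match what Chat Completions expects, the whole list can be passed straight through as tools= without translation.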

Tier A — Cursor (or any MCP client)

1. Pick a workspace directory (only paths under this root are allowed).

2. Add a server entry (stdio). Example for a global MCP config (paths use forward slashes on Windows):

{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": [],
      "env": {
        "MCP_AGENT_TOOLS_ROOT": "D:/your/project"
      }
    }
  }
}

Or with an explicit CLI root (overrides env for that process):

{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": ["--root", "D:/your/project"]
    }
  }
}

3. Paste **SYSTEM_PROMPT_V1** (from mcp_agent_tools.prompts) into your host’s system prompt if the client does not load server instructions automatically.

Environment variables

| Variable | Meaning |
| --- | --- |
| MCP_AGENT_TOOLS_ROOT | Required unless --root is passed. Absolute workspace root. |
| MCP_AGENT_TOOLS_MAX_READ_BYTES | Max bytes per read (default 512000). |
| MCP_AGENT_TOOLS_COMMAND_TIMEOUT | Subprocess timeout in seconds (default 120). |
| MCP_AGENT_TOOLS_MAX_COMMAND_OUTPUT_BYTES | Cap on combined stdout/stderr before truncation (default 256000). |
| MCP_AGENT_TOOLS_LIST_MAX_ENTRIES | Entry cap for list_files (default 2000). |
| MCP_AGENT_TOOLS_ALLOWED_COMMANDS | Comma-separated basenames allowed as argv[0] (e.g. python,uv,node). If unset, all commands are allowed under the sandbox. |
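If you mirror these knobs in your own wrapper, one plausible way to read them is sketched below, using the defaults from the table. This is an illustration of the documented semantics, not the package's actual parsing code:

```python
import os

def read_limits(env=os.environ):
    """Sketch: read the MCP_AGENT_TOOLS_* knobs with the documented defaults."""
    allowed_raw = env.get("MCP_AGENT_TOOLS_ALLOWED_COMMANDS", "")
    # Empty/unset means "no allowlist": any argv[0] is permitted under the sandbox.
    allowed = frozenset(s.strip() for s in allowed_raw.split(",") if s.strip()) or None
    return {
        "max_read_bytes": int(env.get("MCP_AGENT_TOOLS_MAX_READ_BYTES", "512000")),
        "command_timeout_sec": float(env.get("MCP_AGENT_TOOLS_COMMAND_TIMEOUT", "120")),
        "max_command_output_bytes": int(
            env.get("MCP_AGENT_TOOLS_MAX_COMMAND_OUTPUT_BYTES", "256000")
        ),
        "list_max_entries": int(env.get("MCP_AGENT_TOOLS_LIST_MAX_ENTRIES", "2000")),
        "allowed_commands": allowed,
    }

limits = read_limits({"MCP_AGENT_TOOLS_ALLOWED_COMMANDS": "python, uv"})
```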

Tier B — Python app (in-process + OpenAI)

Design notes

  • Workspace root — Examples use D:\Avi-assign as a placeholder; point WORK_DIR at any directory you control.
  • API key policy — OPENAI_API_KEY is required only for Chat Completions. Imports set client = OpenAI() if HAS_OPENAI_KEY else None; workspace setup and direct edit_file run without a key.
  • Model-authored I/O — For write_file, persist only text returned by the model. For edit_file, the model must copy **old_string** exactly from **read_file** (see SYSTEM_PROMPT_V1).

1. Install dependencies

In a shell or a notebook cell:

pip install -q openai
pip install -q -e "D:/MCP"   # editable checkout; or: pip install mcp-agent-tools

In Jupyter, the equivalent is %pip install -q openai followed by %pip install -q -e "D:/MCP" (adjust the path to your clone). Do not place shell comments on the same line as %pip.

2. Imports and API key handling

import os
from pathlib import Path

# OPENAI_API_KEY is required only for steps that call Chat Completions (LLM + agent loops).
# Workspace + direct edit_file work without a key.
# Set via OS env or Jupyter: %env OPENAI_API_KEY sk-...
# Local-only optional override — never commit a real key:
# os.environ["OPENAI_API_KEY"] = "sk-..."

from openai import OpenAI

from mcp_agent_tools import (
    AgentWorkspace,
    OPENAI_TOOL_DEFINITIONS,
    SYSTEM_PROMPT_V1,
    run_agent_loop,
)

HAS_OPENAI_KEY = bool(os.environ.get("OPENAI_API_KEY"))
client = OpenAI() if HAS_OPENAI_KEY else None
MODEL = "gpt-4o-mini"

if not HAS_OPENAI_KEY:
    print(
        "Note: OPENAI_API_KEY not set — Chat Completions cells will raise until you set it. "
        "Workspace + direct edit_file still work."
    )

3. Workspace bootstrap and seed file

# Fixed workspace — all reads/writes/commands stay under this folder
WORK_DIR = Path(r"D:\Avi-assign")
WORK_DIR.mkdir(parents=True, exist_ok=True)
print("Workspace:", WORK_DIR.resolve())

hello = WORK_DIR / "hello.txt"
if not hello.exists():
    hello.write_text("Hello from Avi-assign workspace.\n", encoding="utf-8")

ws = AgentWorkspace(WORK_DIR)
print(ws.read_file("hello.txt"))
print("--- list_files ---")
print(ws.list_files(".", recursive=False))

4. LLM-authored file body (no tool calls)

Requires OPENAI_API_KEY. Skip if you are only exercising tools without the API.

if client is None:
    raise ValueError(
        "Set OPENAI_API_KEY to run this cell (Jupyter: %env OPENAI_API_KEY sk-...). "
        "Skip this cell if you only want workspace / edit_file demos."
    )

# 1) Context from disk (read-only)
context = ws.read_file("hello.txt")

# 2) Ask the model to author the entire new file; no static template for the body
user_prompt = (
    "Here is the current contents of hello.txt in my workspace:\n\n"
    f"---\n{context}\n---\n\n"
    "Write ONLY the body of a new Markdown file (no preamble, no code fences) "
    "with a title line and two bullet points explaining what this greeting is for."
)

resp = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "system",
            "content": "You output only the file body the user asked for. No extra commentary.",
        },
        {"role": "user", "content": user_prompt},
    ],
)

generated = (resp.choices[0].message.content or "").strip()
if not generated:
    raise RuntimeError("LLM returned empty content; nothing to write.")

# 3) Persist exactly what the LLM produced
out_rel = "llm_generated_notes.md"
ws.write_file(out_rel, generated, mode="overwrite")
print(f"Wrote {out_rel!r} ({len(generated)} chars from model)\n")
print(ws.read_file(out_rel))

5. Direct edit_file (no Chat Completions)

No API key required. The next lines create ws if you have not run the workspace section yet (same root).

# Direct edit_file (no Chat Completions call).
# If you run this before the main workspace cell, the next few lines create `ws` (same root as that cell).
from pathlib import Path

from mcp_agent_tools import AgentWorkspace

if "ws" not in globals():
    WORK_DIR = Path(r"D:\Avi-assign")
    WORK_DIR.mkdir(parents=True, exist_ok=True)
    ws = AgentWorkspace(WORK_DIR)

demo_edit = "notebook_edit_demo.txt"
ws.write_file(
    demo_edit,
    "version: 1\nstatus: draft\nfooter: end\n",
    mode="overwrite",
)
print("--- before ---")
print(ws.read_file(demo_edit), end="")
print(ws.edit_file(demo_edit, old_string="status: draft", new_string="status: ready"))
print("--- after ---")
print(ws.read_file(demo_edit), end="")

6. Agent loop: model calls write_file

Requires OPENAI_API_KEY.

if client is None:
    raise ValueError(
        "Set OPENAI_API_KEY to run this cell. "
        "Skip if you only need workspace or direct edit_file."
    )


def complete(messages, tools):
    """One Chat Completions turn; return OpenAI-shaped dict for run_agent_loop."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    return resp.model_dump()


answer = run_agent_loop(
    complete,
    "Use tools only. List the workspace root, read hello.txt, then call write_file on "
    "agent_notes.txt. The `content` argument must be your own freshly written summary "
    "(several sentences) based only on what you read—do not paste boilerplate.",
    ws,
    system_prompt=SYSTEM_PROMPT_V1,
    max_turns=12,
)
print("--- final answer ---")
print(answer)
print("--- agent_notes.txt (if created by tool write_file) ---")
p = WORK_DIR / "agent_notes.txt"
print(p.read_text(encoding="utf-8") if p.exists() else "(missing)")

7. Agent loop: model calls edit_file

Requires OPENAI_API_KEY and the complete function from the previous section.

from pathlib import Path

from mcp_agent_tools import AgentWorkspace

if "ws" not in globals():
    WORK_DIR = Path(r"D:\Avi-assign")
    WORK_DIR.mkdir(parents=True, exist_ok=True)
    ws = AgentWorkspace(WORK_DIR)
if "complete" not in globals():
    raise NameError("Run the cell above that defines `complete` (and imports) before this one.")
if client is None:
    raise ValueError(
        "Set OPENAI_API_KEY to run this cell. "
        "The direct edit_file cell above works without a key."
    )

target = "edit_agent_target.txt"
ws.write_file(
    target,
    "# Demo\nThere are three erorrs in this sentance.\n",
    mode="overwrite",
)
edit_answer = run_agent_loop(
    complete,
    (
        f"Use tools only. Read `{target}`. Then use edit_file (not write_file) to fix typos: "
        "change erorrs to errors and sentance to sentence. "
        "Copy old_string exactly from read_file; use two edit_file calls or replace_all where appropriate."
    ),
    ws,
    system_prompt=SYSTEM_PROMPT_V1,
    max_turns=14,
)
print("--- agent (edit_file) answer ---")
print(edit_answer)
print("--- file after agent ---")
print(ws.read_file(target), end="")

8. Optional: custom tool loop without run_agent_loop

Use **OPENAI_TOOL_DEFINITIONS**, call the Chat Completions API with tools=..., parse **tool_calls**, and route each call through **ws.dispatch(name, json.loads(arguments))** (requires import json). For **write_file**, the **content** field should be whatever the model authored; for **edit_file**, pass **old_string**, **new_string**, and **replace_all** exactly as the model returned.
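The routing described above can be sketched as follows. The dispatch function here is a stub standing in for ws.dispatch(name, args) so the snippet runs standalone; the message dict mirrors the shape resp.model_dump() gives for an assistant turn with tool_calls:

```python
import json

def dispatch(name, args):
    """Stub standing in for ws.dispatch(name, args) so this sketch is self-contained."""
    return f"(ran {name} with {sorted(args)})"

# One assistant turn as it appears in resp.model_dump() when the model calls tools.
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "read_file", "arguments": '{"path": "hello.txt"}'},
        }
    ],
}

messages = [assistant_msg]
for call in assistant_msg["tool_calls"]:
    args = json.loads(call["function"]["arguments"])   # arguments arrive as a JSON string
    result = dispatch(call["function"]["name"], args)  # route to ws.dispatch in real code
    # Append the tool result so the model sees it on the next Chat Completions turn.
    messages.append({"role": "tool", "tool_call_id": call["id"], "content": str(result)})
```

You then call the API again with the extended messages list and repeat until the model replies without tool_calls.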

Optional limits on AgentWorkspace

ws = AgentWorkspace(
    r"D:\Avi-assign",
    allowed_commands=frozenset({"python", "uv"}),
    command_timeout_sec=60.0,
)

Lower-level (WorkspaceTools + OPENAI_TOOL_DEFINITIONS)

Same sandbox without AgentWorkspace: use config_from_root(...) and WorkspaceTools. Pass **OPENAI_TOOL_DEFINITIONS** to your provider as tools= when you implement your own loop instead of run_agent_loop.

from mcp_agent_tools import WorkspaceTools, OPENAI_TOOL_DEFINITIONS, SYSTEM_PROMPT_V1
from mcp_agent_tools.config import config_from_root

tools = WorkspaceTools(config_from_root(r"D:\Avi-assign"))
print(tools.read_file("hello.txt"))

Compose the system message:

final_system = SYSTEM_PROMPT_V1 + "\n\n" + "Your org rules here."

Imports reference

  • AgentWorkspace — pass a directory path; use read_file / write_file / edit_file / list_files / run_command on that tree only
  • SYSTEM_PROMPT_V1, SYSTEM_PROMPT_CHANGELOG, TOOL_DESCRIPTIONS
  • OPENAI_TOOL_DEFINITIONS — same shapes as MCP tools (for tools= in chat completions)
  • build_server(config) — build a FastMCP app (stdio via build_server(cfg).run())
  • run_agent_loop — minimal multi-turn executor with your complete callable (accepts WorkspaceConfig or AgentWorkspace)

Safety model

  • Python: all paths are resolved under the directory you passed to AgentWorkspace(...) or config_from_root(...).
  • MCP / CLI: same rule via MCP_AGENT_TOOLS_ROOT or --root (no .. escape).
  • read_file, write_file, and edit_file only touch UTF-8 text paths under that root; edit_file requires valid UTF-8 (strict decode).
  • run_command uses **argv only** (no shell). Optional allowlist via MCP_AGENT_TOOLS_ALLOWED_COMMANDS.
  • Subprocess inherits the current environment; avoid passing secrets you do not want child processes to see.
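A common way such a root check is implemented (an illustrative sketch, not necessarily this package's exact code) is to resolve the candidate path and reject anything that does not stay under the root:

```python
import tempfile
from pathlib import Path

def resolve_under_root(root: Path, user_path: str) -> Path:
    """Illustrative sandbox check: resolve the path, then reject escapes.

    An absolute user_path replaces root in the join, so it is accepted
    only if it already resolves under root.
    """
    root = root.resolve()
    candidate = (root / user_path).resolve()  # collapses any ".." components
    try:
        candidate.relative_to(root)           # raises ValueError if outside root
    except ValueError:
        raise PermissionError(f"path escapes workspace root: {user_path}")
    return candidate

root = Path(tempfile.mkdtemp())               # stand-in workspace root
ok = resolve_under_root(root, "sub/file.txt")
```

Resolving before checking is what defeats `..` tricks: the comparison happens on the normalized absolute path, not the raw string.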

CLI

mcp-agent-tools --root D:/your/project

Runs the MCP server on stdio (default for Cursor).

Agent loop (conceptual)

  1. System = SYSTEM_PROMPT_V1 (+ optional suffix).
  2. User message + OPENAI_TOOL_DEFINITIONS → your LLM.
  3. For each tool_call, run WorkspaceTools.dispatch (or MCP call_tool) — including read_file, write_file, **edit_file**, list_files, and run_command as defined by the server.
  4. Append tool results; repeat until the model returns text without tools.

run_agent_loop implements steps 2–4 given your complete() function.

License

MIT
