MCP server and library: read/write files, run commands, list files — with strict prompts for user-provided LLMs.
Project description
mcp-agent-tools
Workspace-scoped MCP tools for building Cursor-style agents: read_file, write_file, run_command, list_files. Includes strict, versioned system prompts (SYSTEM_PROMPT_V1) and OpenAI-style tool definitions so your app can wire any LLM with one import.
The LLM and API keys stay in your app. This package provides tool execution, sandboxing, and prompts—not a hosted model.
Install
pip install mcp-agent-tools
OpenAI walkthrough (Jupyter): samples/test_openai_agent.ipynb — mirrors the Tier B section below.
Editable / dev:
pip install -e ".[dev]"
Tier A — Cursor (or any MCP client)
1. Pick a workspace directory (only paths under this root are allowed).
2. Add a server entry (stdio). Example for a global MCP config (paths use forward slashes on Windows):
{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": [],
      "env": {
        "MCP_AGENT_TOOLS_ROOT": "D:/your/project"
      }
    }
  }
}
Or with an explicit CLI root (overrides env for that process):
{
  "mcpServers": {
    "agent-tools": {
      "command": "mcp-agent-tools",
      "args": ["--root", "D:/your/project"]
    }
  }
}
3. Paste SYSTEM_PROMPT_V1 (from mcp_agent_tools.prompts or below) into your host’s system prompt if the client does not load server instructions automatically.
Environment variables
| Variable | Meaning |
|---|---|
| `MCP_AGENT_TOOLS_ROOT` | Required unless `--root` is passed. Absolute workspace root. |
| `MCP_AGENT_TOOLS_MAX_READ_BYTES` | Max bytes per read (default 512000). |
| `MCP_AGENT_TOOLS_COMMAND_TIMEOUT` | Subprocess timeout in seconds (default 120). |
| `MCP_AGENT_TOOLS_MAX_COMMAND_OUTPUT_BYTES` | Truncation limit for combined stdout/stderr (default 256000). |
| `MCP_AGENT_TOOLS_LIST_MAX_ENTRIES` | Cap on entries returned by `list_files` (default 2000). |
| `MCP_AGENT_TOOLS_ALLOWED_COMMANDS` | Comma-separated basenames allowed as `argv[0]` (e.g. `python,uv,node`). If unset, all commands are allowed under the sandbox. |
Tier B — Python app (same tools + prompts in-process)
A full runnable version of this flow (including pip cells) lives in samples/test_openai_agent.ipynb. Below is the same logic as plain Python you can paste into a script.
Ideas in the example
- One workspace root (here `D:\Avi-assign`; change it to your folder). `AgentWorkspace` only allows paths under that root.
- LLM → disk: read real files for context → call OpenAI → pass `message.content` only into `write_file` (no hand-written file body).
- Agent + tools: `run_agent_loop` + `tools=` so the model calls `read_file`/`write_file` itself; prompts insist the `content` argument of `write_file` is model-authored text.
Install
pip install mcp-agent-tools openai
Setup: workspace + seed file
from pathlib import Path
from openai import OpenAI
from mcp_agent_tools import AgentWorkspace, SYSTEM_PROMPT_V1, run_agent_loop
# Set OPENAI_API_KEY in your environment before running (recommended).
# For local experiments only: os.environ["OPENAI_API_KEY"] = "sk-..." # do not commit real keys
client = OpenAI()
MODEL = "gpt-4o-mini"
WORK_DIR = Path(r"D:\Avi-assign") # change to your workspace
WORK_DIR.mkdir(parents=True, exist_ok=True)
hello = WORK_DIR / "hello.txt"
if not hello.exists():
hello.write_text("Hello from Avi-assign workspace.\n", encoding="utf-8")
ws = AgentWorkspace(WORK_DIR)
print(ws.read_file("hello.txt"))
print("--- list_files (first 25 lines) ---")
_listing = ws.list_files(".", recursive=False).strip().splitlines()
print("\n".join(_listing[:25]) + ("\n..." if len(_listing) > 25 else ""))
Example A — file body is only what the LLM returns (no tool calls)
context = ws.read_file("hello.txt")
user_prompt = (
"Here is the current contents of hello.txt in my workspace:\n\n"
f"---\n{context}\n---\n\n"
"Write ONLY the body of a new Markdown file (no preamble, no code fences) "
"with a title line and two bullet points explaining what this greeting is for."
)
resp = client.chat.completions.create(
model=MODEL,
messages=[
{
"role": "system",
"content": "You output only the file body the user asked for. No extra commentary.",
},
{"role": "user", "content": user_prompt},
],
)
generated = (resp.choices[0].message.content or "").strip()
if not generated:
raise RuntimeError("LLM returned empty content; nothing to write.")
ws.write_file("llm_generated_notes.md", generated, mode="overwrite")
print(ws.read_file("llm_generated_notes.md"))
Example B — agent loop: model calls tools and writes agent_notes.txt
def complete(messages, tools):
resp = client.chat.completions.create(
model=MODEL,
messages=messages,
tools=tools,
tool_choice="auto",
)
return resp.model_dump()
answer = run_agent_loop(
complete,
"Use tools only. List the workspace root, read hello.txt, then call write_file on "
"agent_notes.txt. The `content` argument must be your own freshly written summary "
"(several sentences) based only on what you read—do not paste boilerplate.",
ws,
system_prompt=SYSTEM_PROMPT_V1,
max_turns=12,
)
print(answer)
if (WORK_DIR / "agent_notes.txt").exists():
print(ws.read_file("agent_notes.txt"))
Optional limits on AgentWorkspace
ws = AgentWorkspace(
r"D:\Avi-assign",
allowed_commands=frozenset({"python", "uv"}),
command_timeout_sec=60.0,
)
Lower-level (WorkspaceTools + OPENAI_TOOL_DEFINITIONS)
Same sandbox without AgentWorkspace: use config_from_root(...) and WorkspaceTools. Pass OPENAI_TOOL_DEFINITIONS to your provider as tools= when you implement your own loop instead of run_agent_loop.
from mcp_agent_tools import WorkspaceTools, OPENAI_TOOL_DEFINITIONS, SYSTEM_PROMPT_V1
from mcp_agent_tools.config import config_from_root
tools = WorkspaceTools(config_from_root(r"D:\Avi-assign"))
print(tools.read_file("hello.txt"))
Compose the system message:
final_system = SYSTEM_PROMPT_V1 + "\n\n" + "Your org rules here."
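If you are wiring a custom loop, it helps to know the shape of the entries in `OPENAI_TOOL_DEFINITIONS`. The dictionary below is an illustrative example following the OpenAI chat-completions function-tool schema; the real entries (names, descriptions, parameters) come from the package.

```python
# Illustrative shape only; the real entries come from OPENAI_TOOL_DEFINITIONS.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file under the workspace root.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Path relative to the workspace root.",
                },
            },
            "required": ["path"],
        },
    },
}
```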
Imports reference
- `AgentWorkspace`: pass a directory path; use `read_file`/`write_file`/`list_files`/`run_command` on that tree only
- `SYSTEM_PROMPT_V1`, `SYSTEM_PROMPT_CHANGELOG`, `TOOL_DESCRIPTIONS`
- `OPENAI_TOOL_DEFINITIONS`: same shapes as the MCP tools (for `tools=` in chat completions)
- `build_server(config)`: build a `FastMCP` app (stdio via `build_server(cfg).run()`)
- `run_agent_loop`: minimal multi-turn executor with your `complete` callable (accepts `WorkspaceConfig` or `AgentWorkspace`)
Safety model
- Python: all paths are resolved under the directory you passed to `AgentWorkspace(...)` or `config_from_root(...)`.
- MCP / CLI: same rule via `MCP_AGENT_TOOLS_ROOT` or `--root` (no `..` escape).
- `run_command` uses `argv` only (no shell). Optional allowlist via `MCP_AGENT_TOOLS_ALLOWED_COMMANDS`.
- Subprocesses inherit the current environment; avoid passing secrets you do not want child processes to see.
CLI
mcp-agent-tools --root D:/your/project
Runs the MCP server on stdio (default for Cursor).
Agent loop (conceptual)
1. System = `SYSTEM_PROMPT_V1` (+ optional suffix).
2. User message + `OPENAI_TOOL_DEFINITIONS` → your LLM.
3. For each `tool_call`, run `WorkspaceTools.dispatch` (or MCP `call_tool`).
4. Append tool results; repeat until the model returns text without tool calls.

`run_agent_loop` implements steps 2–4 given your `complete()` function.
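The steps above can be sketched as a minimal, self-contained loop. Everything here is a stand-in: `tiny_agent_loop` is not the package's `run_agent_loop`, `complete` is a stubbed model that requests one tool call and then answers, and `dispatch` fakes `WorkspaceTools.dispatch`.

```python
import json

# Stub "model": first turn asks for a tool, second turn answers in text.
def complete(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"choices": [{"message": {
            "role": "assistant", "content": None,
            "tool_calls": [{"id": "call_1", "type": "function",
                            "function": {"name": "read_file",
                                         "arguments": json.dumps({"path": "hello.txt"})}}],
        }}]}
    return {"choices": [{"message": {"role": "assistant",
                                     "content": "done", "tool_calls": None}}]}

# Stub dispatcher standing in for WorkspaceTools.dispatch.
def dispatch(name, args):
    return f"<{name} result for {args['path']}>"

def tiny_agent_loop(complete, user_msg, tools, system_prompt, max_turns=8):
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        msg = complete(messages, tools)["choices"][0]["message"]
        messages.append(msg)
        calls = msg.get("tool_calls") or []
        if not calls:                      # step 4: plain text ends the loop
            return msg["content"]
        for call in calls:                 # step 3: run each tool_call
            result = dispatch(call["function"]["name"],
                              json.loads(call["function"]["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": result})
    raise RuntimeError("max_turns exceeded")

answer = tiny_agent_loop(complete, "Summarize hello.txt", tools=[], system_prompt="...")
print(answer)  # "done"
```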
Publishing to PyPI
The distribution name in pyproject.toml is mcp-agent-tools. Before the first release, search https://pypi.org/project/mcp-agent-tools/ — if the name is taken, change name in pyproject.toml (and update pip install ... docs).
1. Account: create an account on PyPI (enable 2FA). Optionally practice on TestPyPI.
2. API token: PyPI → Account settings → API tokens → scope "Entire account", or a token limited to this project after the first upload.
3. Version: bump `version = "0.1.0"` in `pyproject.toml` for every new release (PEP 440, e.g. `0.1.1`).
4. Build (from the repo root):

   pip install build twine
   python -m build

   This creates `dist/mcp_agent_tools-<version>-py3-none-any.whl` and a `.tar.gz`.
5. Check: `twine check dist/*`
6. Upload (test first): `twine upload --repository testpypi dist/*`, then install with `pip install -i https://test.pypi.org/simple/ mcp-agent-tools`.
7. Upload (production): `twine upload dist/*`. Twine will prompt for a username (`__token__`) and a password (your PyPI API token).
After publishing, users install with:
pip install mcp-agent-tools
Trusted publishing (GitHub Actions → PyPI without a long-lived token) is described in the PyPI publishing guide.
License
MIT