MCP server that stops AI agents from breaking DB schemas and config files. Diff, risk-flag, and require explicit confirmation before any write.


Safe Migrations MCP

The safety layer the agent ecosystem needs.

Stops Claude Code, Cursor, OpenClaw, and other AI coding agents from quietly breaking your database schema or config files.

Every proposed change is diffed, risk-flagged, and requires a fresh simulation-issued confirmation token before a single byte is written.

An MCP server that gives AI coding agents a safe, auditable, human-in-the-loop way to propose and execute DB schema changes and everyday config edits — the exact class of change that silently corrupts production when an agent gets overconfident.


Why this exists

Agents are great at proposing changes and terrible at understanding the blast radius of those changes. A one-word YAML typo, a helpful DROP COLUMN, a missing WHERE in an UPDATE — any of these can take a project down while the agent cheerfully reports success.

Safe Migrations MCP puts a mandatory checkpoint between the agent and your disk:

  1. Propose — agent sends an intent (natural language or raw SQL, or a new config file); server returns a proposal_id plus a redacted preview and SHA-256 hash of the SQL/edit and its rollback. Full payload is stored server-side, never echoed back.
  2. Simulate — dry-run inside a rolled-back transaction; count affected rows; surface every DROP, TRUNCATE, NOT-NULL-without-default, secret-key removal, etc. On success, returns a one-time confirmation_token bound to the proposal's fingerprint.
  3. Apply — only runs with that fresh confirmation_token. Snapshots the file or DB first. Logs everything to an append-only audit trail.
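The core trick in the Simulate step, executing the statement inside a transaction that is always rolled back, can be sketched for SQLite with the standard library. Function and variable names here are illustrative, not the server's actual internals:

```python
import sqlite3

def dry_run(db_path: str, sql: str) -> int:
    """Execute sql inside a transaction, report affected rows, then roll back."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(sql)   # DML implicitly opens a transaction
        return cur.rowcount       # affected rows for DML; -1 for DDL/SELECT
    finally:
        conn.rollback()           # discard every effect of the dry run
        conn.close()
```

This is also why MySQL DDL is special-cased (see the FAQ): MySQL auto-commits DDL, so the rollback above would be a no-op there.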

Local-first. Zero cloud dependency. ~2k LOC of Python, hardened against the usual footguns (symlink writes, silent SQLite creation, MySQL DDL auto-commit, token replay, secret leakage in diffs).

Born from watching an OpenClaw agent break its own config file trying to make a "small" change. The fix is universal: any agent that can edit anything should have to slow down, show its work, and ask first.


Install & run

pip install git+https://github.com/possibly6/safe-migrations-mcp
safe-migrations-mcp                      # speaks MCP over stdio

Or from a local clone:

git clone https://github.com/possibly6/safe-migrations-mcp
cd safe-migrations-mcp
pip install -e '.[all]'
safe-migrations-mcp

Optional extras (Postgres / MySQL drivers):

  • pip install 'git+https://github.com/possibly6/safe-migrations-mcp#egg=safe-migrations-mcp[postgres]' — adds psycopg
  • pip install 'git+https://github.com/possibly6/safe-migrations-mcp#egg=safe-migrations-mcp[mysql]' — adds PyMySQL
  • pip install 'git+https://github.com/possibly6/safe-migrations-mcp#egg=safe-migrations-mcp[all]' — both

SQLite, YAML, JSON, .env, and Prisma/Drizzle schema-file parsing work with zero extra deps.


Wire it into your agent

Claude Code / Claude Desktop

~/.claude/claude_desktop_config.json (or ~/.config/Claude/claude_desktop_config.json on Linux):

{
  "mcpServers": {
    "safe-migrations": {
      "command": "safe-migrations-mcp"
    }
  }
}

Cursor

~/.cursor/mcp.json:

{
  "mcpServers": {
    "safe-migrations": { "command": "safe-migrations-mcp" }
  }
}

OpenClaw / any MCP-capable agent

Point the agent at the safe-migrations-mcp binary on stdio. The tool names match the list below.


The 8 tools

  • inspect_schema(connection): current DB/ORM schema. Postgres, MySQL, SQLite, plus schema.prisma and Drizzle TS files. Cached for 60 s.
  • inspect_config(path): parse YAML / JSON / .env / Prisma / TOML; return keys and shape.
  • propose_migration_or_edit(kind, …): create a proposal from natural language, raw SQL, or a new config file. Returns proposal_id plus redacted previews and hashes. Nothing is written.
  • simulate_impact(proposal_id): dry-run the proposal. DB: rolled-back transaction, affected-row count, and SQL/rollback previews. Config: unified diff plus key delta. Always returns risk flags.
  • generate_rollback(proposal_id | sql): exact undo SQL (or the config-snapshot restore path).
  • apply_change(proposal_id, confirmation): requires the fresh confirmation_token from simulate_impact. Snapshots first, then executes in a transaction.
  • get_change_history(limit): append-only audit log of every applied change.
  • cleanup_state(max_age_days): prune old proposal metadata, secret payloads, and snapshots from local state.
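For config files, the diff-plus-key-delta output that simulate_impact describes can be sketched with the standard library. JSON only, for brevity (the server also handles YAML, .env, Prisma, and TOML), and the function name and result shape are illustrative:

```python
import difflib
import json

def simulate_config_edit(old_text: str, new_text: str) -> dict:
    """Return a unified diff plus added/removed top-level keys."""
    diff = "".join(difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile="current", tofile="proposed",
    ))
    old_keys, new_keys = set(json.loads(old_text)), set(json.loads(new_text))
    return {
        "diff": diff,
        "added_keys": sorted(new_keys - old_keys),
        "removed_keys": sorted(old_keys - new_keys),
    }
```

The removed_keys list is what feeds the config risk rules further down: any removed key is at least medium risk.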

Example agent conversation (the confirmation flow)

You: "Add a last_login timestamp column to users in ./demo.db."

Agent (silently): propose_migration_or_edit(kind="db", connection="sqlite:///./demo.db", request="add column last_login of type TEXT to users")

proposal_id = p_9a1b2c3d4e, sql_preview = ALTER TABLE "users" ADD COLUMN "last_login" TEXT;, rollback_preview = ALTER TABLE "users" DROP COLUMN "last_login";

Agent (silently): simulate_impact("p_9a1b2c3d4e")

risk_level = "low", dry_run.ok = true, affected_rows = 0

Agent (to you):

I'd like to run: ALTER TABLE "users" ADD COLUMN "last_login" TEXT; Risk: low. Rollback available. Confirm?

You: "yes, confirm"

Agent (silently): copies the confirmation_token from simulate_impact(...) and calls apply_change("p_9a1b2c3d4e", "<that token>") → done, audit logged.

And the dangerous case:

You: "drop the orders table."

Agent (silently): propose_migration_or_edit(kind="db", ..., sql="DROP TABLE orders;"), then simulate_impact(...), which returns risk_level: "critical" with the flag "Drops an entire table (all data lost)".

Agent (to you):

🛑 This is a critical destructive change on 3 rows. Rollback can recreate the table but row data cannot be auto-restored without a backup. Are you sure? Reply with the exact confirmation_token from the simulation step to proceed.

The agent physically cannot execute until that token is passed through — the server enforces it, not the model.
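That server-side enforcement can be sketched as a fingerprint-bound, expiring HMAC token. This is a simplified illustration of the mechanism the docs describe (token bound to a SHA-256 fingerprint, 15-minute expiry), not the server's actual code:

```python
import hashlib
import hmac
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)   # never leaves the server process

def fingerprint(sql: str, rollback: str) -> str:
    """SHA-256 fingerprint of the proposal; editing the proposal changes it."""
    return hashlib.sha256((sql + "\x00" + rollback).encode()).hexdigest()

def issue_token(proposal_fp: str, ttl_s: int = 900) -> str:
    """Issued by simulate; signs the fingerprint plus an expiry timestamp."""
    expires = int(time.time()) + ttl_s
    sig = hmac.new(SERVER_KEY, f"{proposal_fp}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{expires}:{sig}"

def verify_token(token: str, proposal_fp: str) -> bool:
    """Checked by apply; rejects expired, forged, or mismatched tokens."""
    expires_s, sig = token.split(":")
    if int(expires_s) < time.time():
        return False
    expected = hmac.new(SERVER_KEY, f"{proposal_fp}:{expires_s}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the signature covers the fingerprint, a token issued for one proposal cannot be replayed against an edited or different one.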


Example prompts you can paste

Claude / Cursor / OpenClaw:

Use the safe-migrations MCP. Inspect sqlite:///./examples/demo.db.
Then propose adding a `phone` TEXT column to users (nullable).
Show me the proposal + simulated risk. Wait for my approval before applying
with the confirmation token from simulate_impact.

Use safe-migrations to edit ./examples/config.yaml: set app.log_level to "debug".
Show the diff and risk first. Only apply after I confirm.

Use safe-migrations to audit recent changes — call get_change_history and
summarize the last 10 entries.

Try the demo locally

git clone https://github.com/possibly6/safe-migrations-mcp
cd safe-migrations-mcp
pip install -e .
python examples/seed_demo.py            # creates examples/demo.db
safe-migrations-mcp                     # start the server

Then, from your MCP-connected agent:

  • inspect_schema("sqlite:///./examples/demo.db")
  • inspect_config("./examples/config.yaml")
  • propose_migration_or_edit(kind="db", connection="sqlite:///./examples/demo.db", request="create index on orders(status)")
  • simulate_impact(...), then apply_change(..., "<confirmation_token>")

What gets flagged

SQL risk rules (severity):

  • DROP TABLE / DROP DATABASE / DROP SCHEMA → critical
  • DROP COLUMN, TRUNCATE, DELETE/UPDATE without WHERE → high
  • ALTER COLUMN … TYPE, RENAME TO, ADD COLUMN NOT NULL without DEFAULT, GRANT/REVOKE → medium

Config risk rules:

  • Removal of keys matching database|db|auth|secret|token|production|prod|migration → high
  • Adding or changing values on keys matching password|secret|api_key|token|private_key|database_url|dsn|credentials → medium
  • Any removed key → at least medium

Anything ≥ medium sets confirmation_required: true. High-risk config changes are blocked from direct apply so the agent has to reduce scope or hand the edit back for manual review.
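The SQL rules above amount to an ordered, first-match-wins classifier. A minimal sketch, assuming severity-ordered regex rules over whitespace-normalized SQL (the server's actual rule set and matching may differ):

```python
import re

# Ordered most to least severe; the first matching rule wins.
RULES = [
    (r"\bDROP\s+(TABLE|DATABASE|SCHEMA)\b", "critical"),
    (r"\bDROP\s+COLUMN\b|\bTRUNCATE\b", "high"),
    (r"\b(DELETE\s+FROM|UPDATE)\b(?!.*\bWHERE\b)", "high"),
    (r"\bALTER\s+COLUMN\b.*\bTYPE\b|\bRENAME\s+TO\b|\b(GRANT|REVOKE)\b", "medium"),
    (r"\bADD\s+COLUMN\b.*\bNOT\s+NULL\b(?!.*\bDEFAULT\b)", "medium"),
]

def classify(sql: str) -> str:
    """Return the risk level of a single SQL statement."""
    normalized = " ".join(sql.split()).upper()
    for pattern, level in RULES:
        if re.search(pattern, normalized):
            return level
    return "low"
```

The negative lookaheads are doing the "without WHERE" and "without DEFAULT" checks: the rule fires only when the qualifier is absent from the rest of the statement.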


State & audit

All local, all visible:

~/.safe-migrations-mcp/
├── proposals/    # redacted proposal metadata as JSON
├── secrets/      # private proposal payloads (0600), pruned with cleanup_state
├── snapshots/    # pre-change backups
└── audit.jsonl   # append-only log of applied changes (redacted)

Override with SAFE_MIGRATIONS_HOME=/path.
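The append-only audit format is as simple as it sounds: one JSON object per line, only ever appended. A sketch, with illustrative field names (the real entries are redacted and richer), including the SAFE_MIGRATIONS_HOME override:

```python
import json
import os
import time
from pathlib import Path
from typing import Optional

def audit_home() -> Path:
    """State directory, overridable via SAFE_MIGRATIONS_HOME."""
    return Path(os.environ.get("SAFE_MIGRATIONS_HOME",
                               str(Path.home() / ".safe-migrations-mcp")))

def audit_log(event: dict, home: Optional[Path] = None) -> None:
    """Append one timestamped JSON line; earlier lines are never rewritten."""
    home = home or audit_home()
    home.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.time(), **event}
    with open(home / "audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Since the file is plain JSONL, `get_change_history` (or a human with `tail`) can read it without any tooling.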


When to use this

Use Safe Migrations MCP when:

  • You let an AI agent touch your database or config files
  • You want every schema change diffed and confirmed before it runs
  • You want an audit trail of every change an agent has ever made
  • You want rollback SQL generated automatically
  • You're tired of agents silently dropping columns or rewriting .env files

Use a real migration framework (Alembic / Prisma Migrate / Flyway) when:

  • You need versioned, repeatable migrations checked into source control
  • You're running zero-downtime migrations on a production Postgres
  • You need MySQL DDL apply support (this server intentionally blocks it — see FAQ)
  • You want online schema changes / backfills / dual-writes

This is a gatekeeper, not a migration framework. They're complementary — author your migrations with Alembic, let agents propose runtime tweaks through this.

In scope: SQLite, Postgres, MySQL inspection / DML; YAML, JSON, .env, Prisma, Drizzle (best-effort). Natural-language intents for common DDL (add/drop/rename column, create index, drop table). Raw SQL pass-through with automatic best-effort rollback.

Out of scope (on purpose): MySQL DDL apply, full SQL dialect coverage, online schema migrations, arbitrary DSL translation. Safety and clarity beat breadth here — if a proposal is outside what we can analyze, we say so instead of guessing.


FAQ

Q: How is this different from Alembic, Prisma Migrate, or Flyway? A: Those are migration frameworks — they track, version, and apply schema changes you author by hand. This is a gatekeeper — it sits between an AI agent and your DB/configs and forces every change through diff → simulate → confirm → apply. Use both.

Q: Can the agent bypass the confirmation token? A: Not from inside this server. The token is generated server-side, bound to a SHA-256 fingerprint of the proposal, and expires in 15 minutes. Editing the proposal invalidates the token. The agent must call simulate_impact to get a fresh one — and the client UI (Claude Code's prompt, Cursor's confirm dialog) is what makes a human actually read the proposal before that token gets passed back.

Q: Does this work in CI? A: It's the wrong tool for CI. The whole point is a human checkpoint. In CI, use your normal migration framework with version-controlled SQL. Reach for this server when an agent (Claude Code, Cursor, OpenClaw) is the one proposing the edit.

Q: What happens if the apply fails halfway through? A: SQLite changes run inside a transaction and roll back on error. Postgres applies via psycopg with explicit commit/rollback. Failed applies are marked apply_failed, not applied, and audited as a separate event so you can find them later.

Q: Why is MySQL DDL apply intentionally blocked? A: MySQL auto-commits DDL — there's no real "dry-run inside a transaction" or rollback. This server refuses to pretend it has a safety net it doesn't have. MySQL DML (INSERT/UPDATE/DELETE) and inspection still work fine.
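The refusal described above can be sketched as a guard that rejects anything DDL-shaped before it reaches a MySQL connection. The function name and exact statement list are illustrative:

```python
import re

# DDL statement forms that MySQL auto-commits (no rollback possible).
DDL = re.compile(r"^\s*(CREATE|ALTER|DROP|TRUNCATE|RENAME)\b", re.IGNORECASE)

def check_mysql(sql: str) -> None:
    """Raise before executing DDL on MySQL; DML passes through."""
    if DDL.match(sql):
        raise ValueError(
            "MySQL DDL apply is blocked: DDL auto-commits and cannot be rolled back"
        )
```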

Q: Does it know about my Postgres triggers, views, or functions? A: Not in v0.1 — inspection covers tables and columns. Triggers/views/functions are out of scope for now.

Q: My agent has a raw write_file tool too. Can't it just bypass the MCP? A: Yes — the safety is opt-in at the agent's tool level. If you wire safe-migrations in and leave a raw filesystem write tool enabled for the same paths, the agent can route around the gate. The intended pattern is: route DB and config edits through this server, and keep raw filesystem writes scoped to other paths.


Development

pip install -e '.[all]'
pip install pytest
pytest -q

License

MIT.
