
Safe multi-database access for AI agents.


Quickstart · How it works · Databases · MCP · Docs


AI agents are getting access to databases, APIs, and tools. Nobody's checking what they actually do with that access.

faz sits between your agent and your databases. Every query passes through a 5-stage safety pipeline — prompt guarding, RBAC, AST analysis, injection detection, and guardrails — before anything is executed. Your agent talks to faz. faz talks to your databases. Nothing gets through without being inspected.

                        ┌─────────────────────────┐
Claude, Cursor,         │          faz            │
or any MCP client  ───► │  auth · safety · audit  │ ───►  14 databases
                        └─────────────────────────┘

Quickstart

Install faz and generate a config file:

pip install faz-core
faz init                          # creates faz.yaml + .faz/ directory

Windows: if faz is not recognized, the Python Scripts directory isn't on your PATH. Either install inside a virtual environment (python -m venv venv && venv\Scripts\activate && pip install faz-core) or use the module form: python -m faz init, python -m faz serve, etc. — which works regardless of PATH.

Add a database. The interactive wizard handles connection details per database type:

faz add-database

Or edit faz.yaml directly:

databases:
  - name: your_database_name
    type: postgresql
    host: localhost
    port: 5432
    database: myapp
    username: readonly_user
    password: ${POSTGRES_PASSWORD}
 
permissions:
  # R    = select, explain
  # W    = insert, update, delete
  # RW   = select, explain, insert, update, delete
  # RA   = select, explain, insert
  # RWA  = select, explain, insert, update (no delete)
  # A    = everything including DDL (create, drop, alter, truncate)
  # none = blocked entirely
  postgres:
    baseline: R                   # default for all tables (see legend above)
    tables:
      orders: RW                  # per-table overrides
      audit_log: none

Connect your agent via MCP, or start the REST API:

faz mcp install                   # auto-configures Claude Desktop, Cursor, and OpenClaw
faz serve                         # REST API on localhost:8787

That's it. Your agent can now query your databases — every query inspected, every action logged, every dangerous operation blocked.

Try it manually

faz query "SELECT * FROM your_table"  # run a query through the safety pipeline

What your agent sees

faz exposes four MCP tools to your agent:

Tool              What it does
list_databases    Show connected databases and their schemas
describe_table    Inspect a specific table's columns and types
query             Run a single-database query through the safety pipeline
federated_query   Query across multiple databases and merge results

When the agent calls query, it gets back either the results:

{
  "status": "ok",
  "data": { "columns": ["customer_id", "total"], "rows": [...], "row_count": 42 },
  "safety": { "stages_passed": ["PROMPT_GUARD", "RBAC", "AST", "INJECTION", "GUARDRAILS"] }
}

Or a clear explanation of why the query was blocked:

{
  "status": "blocked",
  "error": { "stage": "RBAC", "reason": "table 'salaries' requires READ_WRITE, agent has READ_ONLY" }
}

The agent sees the same contract whether it's connected via MCP or REST — same tools, same safety pipeline, same audit trail.
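A client consuming this contract only needs to branch on the status field. A minimal sketch in Python (handle_response is a hypothetical helper, not part of faz; the sample payloads are the ones shown above):

```python
def handle_response(resp: dict):
    """Branch on the documented "status" field of a faz query response."""
    if resp["status"] == "ok":
        return resp["data"]["rows"]
    if resp["status"] == "blocked":
        err = resp["error"]
        print(f"blocked at {err['stage']}: {err['reason']}")
        return None
    raise ValueError(f"unexpected status: {resp['status']}")

ok = {
    "status": "ok",
    "data": {"columns": ["customer_id", "total"], "rows": [[7, 129.5]], "row_count": 1},
    "safety": {"stages_passed": ["PROMPT_GUARD", "RBAC", "AST", "INJECTION", "GUARDRAILS"]},
}
blocked = {
    "status": "blocked",
    "error": {"stage": "RBAC", "reason": "table 'salaries' requires READ_WRITE, agent has READ_ONLY"},
}
```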

How it works

Every query goes through 5 stages. Any stage can block the request.

┌─────────────────────────────────────────────────────────────────────┐
│                        faz safety pipeline                          │
│                                                                     │
│  (1) Prompt Guard    catch destructive intent before parsing        │
│  (2) RBAC Gate       per-table read/write/append permissions        │
│  (3) AST Checker     hard-block DDL (DROP, ALTER, TRUNCATE, ...)    │
│  (4) Injection Scan  tautologies, stacked queries, $where, APOC     │
│  (5) Guardrails      row caps, timeouts, query rewriting            │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Stage 1 — Prompt Guard scans the raw request for destructive intent (DROP TABLE, DELETE FROM, INSERT a backdoor) before any parsing happens. Context-aware: "show me deleted records" passes fine.

Stage 2 — RBAC Gate checks per-table permissions. You define a policy matrix in faz.yaml — which databases and tables the agent can read, write, or append to. Supports per-database baselines with per-table overrides. Unauthorized tables are blocked or stripped from federated queries.
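The permission legend from the Quickstart maps naturally to a lookup table. A sketch of the idea (allowed is a hypothetical helper, not faz's internal API; the statement sets follow the legend comments in faz.yaml):

```python
# Each access level is a set of permitted statement kinds, per the faz.yaml legend.
LEVELS = {
    "none": set(),
    "R":    {"select", "explain"},
    "W":    {"insert", "update", "delete"},
    "RA":   {"select", "explain", "insert"},
    "RW":   {"select", "explain", "insert", "update", "delete"},
    "RWA":  {"select", "explain", "insert", "update"},
    "A":    {"select", "explain", "insert", "update", "delete",
             "create", "drop", "alter", "truncate"},
}

def allowed(policy: dict, table: str, statement: str) -> bool:
    """Per-table override wins; otherwise fall back to the database baseline."""
    level = policy.get("tables", {}).get(table, policy["baseline"])
    return statement in LEVELS[level]

policy = {"baseline": "R", "tables": {"orders": "RW", "audit_log": "none"}}
```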

Stage 3 — AST Checker parses the query and blocks DDL (CREATE, DROP, ALTER, TRUNCATE, …) for every access level except Admin (A). Defense in depth on top of RBAC: only the explicit A baseline lets DDL through.

Stage 4 — Injection Analyser detects injection patterns per query language: SQL tautologies and stacked statements, MongoDB $where and $function, Cypher APOC abuse, Elasticsearch script injection, and more.
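To make the pattern classes concrete, here is a toy detector for two of the SQL cases (illustrative regexes only; the real analyser is per-language and far more thorough):

```python
import re

# Quoted tautology like OR '1'='1', and a second statement stacked after a semicolon.
TAUTOLOGY = re.compile(r"\b(?:or|and)\s+(['\"]?)(\w+)\1\s*=\s*\1\2\1", re.IGNORECASE)
STACKED = re.compile(r";\s*\S")

def looks_injected(sql: str) -> bool:
    return bool(TAUTOLOGY.search(sql) or STACKED.search(sql))
```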

Stage 5 — Guardrails rewrites queries for safety without blocking them. Injects LIMIT clauses, $limit pipeline stages, maxTimeMS timeouts, and size caps so your agent can't accidentally pull a 200M-row table.
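The LIMIT-injection idea can be sketched in a few lines (a textual toy; the real guardrails operate on the parsed query, and the cap would come from safety.max_rows_per_query in faz.yaml):

```python
import re

def cap_rows(sql: str, max_rows: int = 1000) -> str:
    """Append a LIMIT if missing, or tighten an existing one that exceeds the cap."""
    m = re.search(r"\bLIMIT\s+(\d+)\s*;?\s*$", sql, re.IGNORECASE)
    if m:
        if int(m.group(1)) > max_rows:
            return sql[:m.start()] + f"LIMIT {max_rows}"
        return sql  # existing LIMIT already within the cap
    return sql.rstrip("; \t") + f" LIMIT {max_rows}"
```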

Why MCP?

MCP is how agents connect to tools. By implementing faz as an MCP server, your agent doesn't need to know anything about database drivers, connection strings, or query languages. It connects to faz once and gets safe access to every database you've configured.

# Auto-configure Claude Desktop, Cursor, and OpenClaw
faz mcp install

# Just one client
faz mcp install --target claude
faz mcp install --target cursor
faz mcp install --target openclaw

# Preview without writing files
faz mcp install --dry-run

faz mcp install writes the MCP config so your client knows how to spawn faz. After that, your agent can start querying immediately.

OpenClaw — alternative install via the OpenClaw CLI

If you'd rather hand the faz block to OpenClaw's own CLI instead of writing ~/.openclaw/openclaw.json directly, generate a portable config first and pipe it through jq:

# 1. Render the faz mcpServers entry to a standalone file.
faz mcp install --path faz.json

# 2. Register it with OpenClaw using its built-in `mcp set` command.
openclaw mcp set faz "$(jq -c '.mcpServers.faz' faz.json)"

# 3. Confirm the server is registered.
openclaw mcp list

faz mcp install --path faz.json writes the standard {"mcpServers": {"faz": {...}}} envelope, and jq -c '.mcpServers.faz' extracts just the server block — command, args, env — which is the shape OpenClaw's mcp set expects.
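The same extraction in Python, for readers without jq (the command/args values here are placeholders, not faz's actual output):

```python
import json

# Pull the "faz" server block out of the {"mcpServers": {...}} envelope,
# compacted the same way jq -c would emit it.
envelope = {"mcpServers": {"faz": {"command": "faz", "args": ["mcp"]}}}
server_block = json.dumps(envelope["mcpServers"]["faz"], separators=(",", ":"))
```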

faz also exposes a REST API (faz serve on localhost:8787) for non-MCP clients, scripts, and testing. Same pipeline, same audit log; the only difference is the transport field in the logs: "rest/local" vs "mcp/stdio".

Federated queries

Query across multiple databases in a single request. faz resolves dependencies, executes steps in parallel where possible, and merges results with DuckDB:

{
  "steps": [
    {
      "step_id": "s0",
      "database": "postgres",
      "table": "orders",
      "query": "SELECT customer_id, total FROM orders WHERE total > 500"
    },
    {
      "step_id": "s1",
      "database": "mongodb",
      "table": "customers",
      "query": "{\"find\": \"customers\"}",
      "depends_on": ["s0"],
      "link_from": "customer_id",
      "link_to": "_id"
    }
  ],
  "merge": "SELECT s1.name, s0.total FROM s0 JOIN s1 ON s0.customer_id = s1._id"
}

Each step goes through the full safety pipeline independently. If one step is blocked by RBAC, the rest still execute.
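The merge step can be mimicked with any embedded SQL engine. A toy version using sqlite3 in place of DuckDB (the step results here are invented for illustration):

```python
import sqlite3

def merge(step_results: dict, merge_sql: str):
    """Load each step's rows as an in-memory table, then run the merge SQL over them."""
    con = sqlite3.connect(":memory:")
    for step_id, (columns, rows) in step_results.items():
        con.execute(f"CREATE TABLE {step_id} ({', '.join(columns)})")
        placeholders = ", ".join("?" for _ in columns)
        con.executemany(f"INSERT INTO {step_id} VALUES ({placeholders})", rows)
    return con.execute(merge_sql).fetchall()

results = {
    "s0": (["customer_id", "total"], [(1, 900), (2, 650)]),
    "s1": (["_id", "name"], [(1, "Ada"), (2, "Grace")]),
}
rows = merge(results, "SELECT s1.name, s0.total FROM s0 JOIN s1 ON s0.customer_id = s1._id")
```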

Supported databases

Category      Databases
Relational    PostgreSQL · MySQL · Oracle
Document      MongoDB · CouchDB
Search        Elasticsearch · OpenSearch
Vector        Weaviate · Qdrant · Milvus · Pinecone
Graph         Neo4j
Wide-column   Cassandra
Cloud         DynamoDB

faz speaks each database's native query language — SQL, MQL, Cypher, ES DSL, DynamoDB operations — and the safety pipeline understands each one. Injection detection for Cypher is different from SQL. faz handles both.

Configuration

faz.yaml is the single config file. Generate it with faz init, then edit:

databases:
  - name: postgres
    type: postgresql
    host: localhost
    port: 5432
    database: myapp
    username: readonly_user
    password: ${POSTGRES_PASSWORD}  # env var expansion

  - name: mongo
    type: mongodb
    host: localhost
    port: 27017
    database: analytics

permissions:
  postgres:
    baseline: R                   # default for all tables
    tables:
      orders: RW                  # override for specific tables
      audit_log: none             # block entirely

  mongo:
    baseline: R

safety:
  max_rows_per_query: 1000
  query_timeout_seconds: 30

CLI

faz init                  # generate faz.yaml + .faz/ directory
faz serve                 # start REST API on :8787
faz add-database          # interactive database setup wizard
faz query "SELECT ..."    # run a query through the safety pipeline
faz test                  # exercise safety against configured DBs
faz logs                  # pretty-print / tail the audit log
faz policy                # print the loaded permission tree
faz mcp                   # run the MCP stdio server
faz mcp install           # write Claude Desktop / Cursor / OpenClaw configs

API endpoints

GET  /v1/health                          liveness probe
GET  /v1/databases                       list connected DBs + schemas
GET  /v1/databases/{db}/tables/{table}   single-table schema detail
POST /v1/query/simple                    single-database query
POST /v1/query                           federated multi-step query
GET  /v1/results/{request_id}            paginated result retrieval

Audit logging

Every query — allowed or blocked — is logged as structured JSONL in .faz/audit.jsonl:

{
  "request_id": "a1b2c3",
  "timestamp": "2026-04-30T12:00:00Z",
  "database": "postgres",
  "table": "orders",
  "query": "SELECT ...",
  "stages_passed": ["PROMPT_GUARD", "RBAC", "AST", "INJECTION", "GUARDRAILS"],
  "status": "ok",
  "transport": "rest/local",
  "row_count": 42,
  "execution_time_ms": 23.4
}

Tail live: faz logs --follow. Filter by status: faz logs --status blocked.
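The JSONL format also makes the log trivially scriptable. A sketch equivalent in spirit to faz logs --status blocked (the sample lines are invented):

```python
import json

def blocked_entries(lines):
    """Yield parsed audit entries whose status is "blocked"."""
    for line in lines:
        entry = json.loads(line)
        if entry.get("status") == "blocked":
            yield entry

sample = [
    '{"request_id": "a1", "status": "ok", "row_count": 42}',
    '{"request_id": "b2", "status": "blocked", "database": "postgres"}',
]
hits = list(blocked_entries(sample))
```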

Why faz?

The hard part of giving AI agents database access isn't the connector — it's everything around it. Authentication, authorization, injection prevention, row limits, audit trails, and the ability to say "no" to a query that would DROP TABLE users.

Most teams solve this by writing bespoke middleware per database. faz makes it one config file across 14 databases, with safety defaults that are hard to get wrong.

Development

git clone https://github.com/fazhq/faz.git
cd faz
pip install -e ".[dev]"
pytest

License

This project is licensed under the Apache License 2.0.
