
Consent Graph as Code: deterministic action governance for AI agents


consentgraph

AI agents break trust when they act without permission -- not because they're malicious, but because the authorization boundary was never made explicit. consentgraph gives you a simple, auditable way to define exactly what your agent can do autonomously, what requires human approval, and what is permanently off-limits -- all in a single JSON file that travels with your deployment.

from consentgraph import check_consent, ConsentGraphConfig

config = ConsentGraphConfig(graph_path="./consent-graph.json")
tier = check_consent("email", "send", confidence=0.9, config=config)
# → "VISIBLE"  (execute, then notify operator)

Install

pip install consentgraph
# With MCP server support:
pip install "consentgraph[mcp]"

The 4-Tier Model

Every action resolves to exactly one tier. The engine checks lists in priority order: blocked → autonomous → requires_approval → unlisted.

  • SILENT -- Pre-approved; the operator trusts this unconditionally. Agent behavior: execute, log it, no notification.
  • VISIBLE -- Allowed at high confidence (≥ 0.85). Agent behavior: execute, then notify the operator: "I did X because Y."
  • FORCED -- Allowed, but needs explicit approval. Agent behavior: stop, surface to the operator, wait for a response.
  • BLOCKED -- Absolute prohibition; never execute. Agent behavior: refuse and alert the operator that the attempt was made.

The confidence parameter is the agent's self-reported confidence that the action matches operator intent. High confidence on a requires_approval action yields VISIBLE; low confidence yields FORCED. Blocked actions are always blocked, regardless of confidence.
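The resolution order above can be sketched as a small pure function. This is illustrative only, not the library's implementation; in particular, it assumes unlisted actions fall to FORCED, which the engine may handle differently:

```python
# Illustrative sketch of the 4-tier resolution order described above.
# Not the consentgraph implementation -- names and defaults are assumptions.

def resolve_tier(domain_cfg: dict, action: str, confidence: float,
                 threshold: float = 0.85) -> str:
    """Resolve an action to a tier, checking lists in priority order:
    blocked -> autonomous -> requires_approval -> unlisted."""
    if action in domain_cfg.get("blocked", []):
        return "BLOCKED"        # absolute prohibition, confidence ignored
    if action in domain_cfg.get("autonomous", []):
        return "SILENT"         # pre-approved, just log
    if action in domain_cfg.get("requires_approval", []):
        # High confidence relaxes to VISIBLE; low confidence stays FORCED.
        return "VISIBLE" if confidence >= threshold else "FORCED"
    return "FORCED"             # unlisted: fail safe and ask (an assumption)

email = {
    "autonomous": ["read", "archive_promo"],
    "requires_approval": ["send", "reply"],
    "blocked": ["delete_all", "bulk_send"],
}
print(resolve_tier(email, "send", 0.9))         # → VISIBLE
print(resolve_tier(email, "send", 0.5))         # → FORCED
print(resolve_tier(email, "delete_all", 0.99))  # → BLOCKED
```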


consent-graph.json

Define your domains and actions in a single JSON file:

{
  "domains": {
    "email": {
      "autonomous": ["read", "archive_promo"],
      "requires_approval": ["send", "reply"],
      "blocked": ["delete_all", "bulk_send"],
      "trust_level": "high"
    },
    "filesystem": {
      "autonomous": ["read", "list"],
      "requires_approval": ["write", "create"],
      "blocked": ["delete", "format"],
      "trust_level": "low"
    }
  },
  "consent_decay": {
    "enabled": true,
    "review_interval_days": 30
  }
}

See examples/consent-graph.example.json for a full 5-domain example with design rationale notes.

Note on JSON comments: JSON doesn't support comments natively. The example file uses "_design_note" keys for inline documentation -- ConsentGraph ignores unknown keys.
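Because the graph is plain JSON, a quick per-domain summary can be produced with the standard library alone. A sketch (the `consentgraph summary` CLI covers this properly):

```python
# Sketch: load a consent graph with stdlib json and summarize each domain.
import json

raw = json.loads("""
{
  "domains": {
    "email": {
      "autonomous": ["read", "archive_promo"],
      "requires_approval": ["send", "reply"],
      "blocked": ["delete_all", "bulk_send"],
      "trust_level": "high",
      "_design_note": "unknown keys like this one are simply ignored"
    }
  }
}
""")

for name, cfg in raw["domains"].items():
    counts = {k: len(cfg.get(k, []))
              for k in ("autonomous", "requires_approval", "blocked")}
    print(f"{name} (trust={cfg.get('trust_level', '?')}):",
          ", ".join(f"{k}={n}" for k, n in counts.items()))
# → email (trust=high): autonomous=2, requires_approval=2, blocked=2
```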


Python API

from consentgraph import check_consent, log_override, ConsentGraphConfig

# Configure once
config = ConsentGraphConfig(
    graph_path="./consent-graph.json",
    log_dir="./logs/",
    confidence_threshold=0.85,  # default
)

# Check before any external action
tier = check_consent("calendar", "create_event", confidence=0.9, config=config)

if tier == "BLOCKED":
    raise PermissionError("Action blocked by consent graph")
elif tier == "FORCED":
    # surface approval UI to operator, await response
    ...
elif tier == "VISIBLE":
    do_action()
    notify_operator("Created calendar event because user requested it.")
else:  # SILENT
    do_action()

# Log when a human overrides a consent decision
log_override(
    domain="calendar",
    action="create_event",
    reason="Operator approved via Slack button",
    operator_decision="approved",
    config=config,
)
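The tier-dispatch boilerplate above can be factored into a small helper. A sketch only: the helper and its callback names are hypothetical, not part of the consentgraph API; only the tier strings come from the library.

```python
# Hypothetical helper around the tier dispatch shown above.
# Only the tier strings ("BLOCKED", "FORCED", "VISIBLE", "SILENT") come from
# consentgraph; the function and callback names are this sketch's inventions.

def run_with_consent(tier: str, do_action, notify=None, request_approval=None):
    """Execute `do_action` according to the resolved tier."""
    if tier == "BLOCKED":
        raise PermissionError("Action blocked by consent graph")
    if tier == "FORCED":
        # Operator declined, or no approval UI is wired up: do nothing.
        if request_approval is None or not request_approval():
            return None
    result = do_action()
    if tier == "VISIBLE" and notify is not None:
        notify("Action executed; notifying operator per VISIBLE tier.")
    return result

# Usage with the check from above (requires consentgraph to be installed):
# tier = check_consent("calendar", "create_event", confidence=0.9, config=config)
# run_with_consent(tier, do_action=create_event, notify=notify_operator)
```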

ConsentGraphConfig defaults

  • graph_path -- default ~/.consentgraph/consent-graph.json. Path to the consent graph.
  • log_dir -- default ~/.consentgraph/logs/. Audit log directory.
  • confidence_threshold -- default 0.85. Minimum confidence for VISIBLE rather than FORCED.

CLI

# Create a starter consent-graph.json
consentgraph init

# Check consent for an action
consentgraph check email send --confidence 0.9

# Print graph summary
consentgraph summary

# Validate graph schema
consentgraph validate

# Check if graph needs review (decay)
consentgraph decay

# Analyze override patterns
consentgraph overrides

# Override graph location
consentgraph --graph /path/to/consent-graph.json summary

MCP Server

ConsentGraph ships an MCP server that exposes check_consent as a native tool. Any MCP-compatible framework (LangChain, CrewAI, Claude Desktop, custom) can call it.

Start the server:

consentgraph mcp
# or
python -m consentgraph.mcp_server

MCP client config (e.g. Claude Desktop):

{
  "mcpServers": {
    "consentgraph": {
      "command": "consentgraph",
      "args": ["mcp"],
      "env": {
        "CONSENTGRAPH_GRAPH_PATH": "/path/to/consent-graph.json"
      }
    }
  }
}

Tool: check_consent

Input:

{
  "domain": "email",
  "action": "send",
  "confidence": 0.9
}

Output:

{
  "tier": "VISIBLE",
  "domain": "email",
  "action": "send",
  "confidence": 0.9,
  "guidance": "Proceed, then notify the operator."
}

Schema Validation

from consentgraph import validate_graph
import json

with open("consent-graph.json") as f:
    raw = json.load(f)

graph = validate_graph(raw)  # raises pydantic.ValidationError if invalid
print(f"{len(graph.domains)} domains configured")

Or via CLI:

consentgraph validate

Audit Trail

Every consent check is logged to {log_dir}/consent-attempts.jsonl. Every human override is logged to {log_dir}/consent-overrides.jsonl.

{"timestamp": "2025-01-15T14:23:01", "domain": "email", "action": "send", "confidence": 0.9, "tier": "VISIBLE", "reason": "high_confidence_approval"}

Use the override log to identify patterns and refine your graph:

consentgraph overrides
# → "email/send: approved 5x → consider upgrading to autonomous"
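The upgrade suggestion above can be approximated over the JSONL override log with the standard library. A sketch under assumed log-record fields; the real `consentgraph overrides` command may use different heuristics:

```python
# Sketch: scan consent-overrides.jsonl and flag actions the operator keeps
# approving. The record fields mirror the log_override() example above.
import json
from collections import Counter
from io import StringIO

# Stand-in for opening {log_dir}/consent-overrides.jsonl
log = StringIO("\n".join(
    json.dumps({"domain": "email", "action": "send",
                "operator_decision": "approved"})
    for _ in range(5)
))

approved = Counter()
for line in log:
    entry = json.loads(line)
    if entry["operator_decision"] == "approved":
        approved[(entry["domain"], entry["action"])] += 1

for (domain, action), n in approved.items():
    if n >= 5:   # threshold chosen arbitrarily for this sketch
        print(f"{domain}/{action}: approved {n}x -> consider upgrading to autonomous")
```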

Why This Matters

Enterprise and government deployments of AI agents face a common failure mode: the authorization boundary is implicit, embedded in prompts or agent code, invisible to auditors, and impossible to update without a code deploy. When something goes wrong -- and it will -- there's no audit trail and no clear policy to point to.

ConsentGraph makes the boundary explicit, version-controlled, human-readable, and independently auditable. The consent graph is the policy. The audit log is the evidence. The override log is the feedback loop that improves the policy over time.

For regulated industries, this pattern maps directly onto existing control frameworks (SOC 2 access controls, FedRAMP least-privilege, NIST AI RMF). The consent graph is a machine-readable policy artifact that compliance teams can review without reading agent code.


Project Status

v0.1.0 -- production logic extracted and packaged. API is stable. MCP server is functional. Breaking changes will be versioned.

Roadmap:

  • Async support for check_consent
  • Time-window constraints (e.g., "only during business hours")
  • Graph inheritance (base + environment overrides)
  • Web UI for graph editing

Contributions welcome.

