In-A-Lign Agent Provenance & Security MCP Server

InALign

Tamper-proof audit trails for AI agents

Know what your AI agents did. Prove it. Cryptographically.


Zero Trust. Zero Cloud. Zero Telemetry.

InALign is fully decentralized by design. There is no InALign server. No account. No telemetry. Nothing leaves your machine — ever.

|                  | Other audit tools      | InALign                                              |
|------------------|------------------------|------------------------------------------------------|
| Where data lives | Their cloud            | Your machine only                                    |
| Account required | Yes                    | No                                                   |
| Telemetry        | "Anonymous" usage data | Zero. Not a single byte.                             |
| Paid features    | Require their servers  | Run 100% locally with your own API key               |
| What they see    | Your agent's actions   | Nothing. We can't see anything even if we wanted to. |

Even Pro features like the AI Security Analyzer use your own LLM API key and run entirely on your machine. Your data never touches our infrastructure because we don't have infrastructure.

The Problem

AI coding agents (Claude Code, Cursor, Copilot) can read, write, and execute anything on your machine. When something goes wrong:

  • What did the agent actually do?
  • Who told it to do that?
  • Can you prove it?

Logs can be edited. Memory fades. You need evidence that cannot be tampered with.

Why Not Just Use Logs?

|                   | Traditional logs                   | InALign                                                                                                                |
|-------------------|------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| Tamper resistance | None. Anyone with access can edit. | SHA-256 hash chain + Ed25519 signatures. Modify one record -> chain breaks. Replace the DB -> signature check fails.   |
| Provenance        | "Something happened at 3pm"        | Who commanded it, what the agent did, full causal chain                                                                |
| Risk detection    | Manual review                      | Automatic: data exfiltration, privilege escalation, suspicious patterns                                                |
| Guardrails        | After the fact                     | Runtime policy engine blocks dangerous actions                                                                         |
| Audit proof       | "Trust me"                         | Third-party verifiable cryptographic proof                                                                             |

Quick Start

pip install inalign-mcp && inalign-install --local

Restart Claude Code. Done. Every agent action is now recorded in a local SQLite database with SHA-256 hash chains.

That's it. No API key. No account. No cloud. No telemetry. Everything runs on your machine and stays on your machine.

Data is stored at ~/.inalign/provenance.db. Persists across sessions. Nothing is ever sent anywhere.

Manual setup (without install script)
pip install inalign-mcp

Add to ~/.claude/settings.json:

{
  "mcpServers": {
    "inalign": {
      "command": "python",
      "args": ["-m", "inalign_mcp.server"]
    }
  }
}

CLI Commands

InALign provides four CLI commands:

inalign-install — Setup & Configuration

inalign-install --local              # Install with SQLite (recommended)
inalign-install --license KEY        # Install with Pro/Enterprise license
inalign-install --activate KEY       # Activate or update a license key
inalign-install --status             # Show current license status
inalign-install --uninstall          # Remove InALign configuration

inalign-report — Interactive Dashboard

inalign-report                       # Open dashboard in browser (port 8275)
inalign-report --port 9000           # Custom port
inalign-report --no-open             # Start server without opening browser

Opens a 4-tab interactive dashboard. See Report Dashboard below.

inalign-ingest — Session Log Parser

inalign-ingest --latest --save       # Parse most recent session, save compressed
inalign-ingest path/to/session.jsonl # Parse specific session file
inalign-ingest --dir ~/.claude/projects  # Find all sessions in directory
inalign-ingest --latest -o report.html   # Generate HTML report
inalign-ingest --latest --json           # Output JSON summary to stdout

Parses Claude Code session logs (.jsonl) and saves compressed session data to ~/.inalign/sessions/ for use in the dashboard and AI analysis.

inalign-analyze — AI Security Analysis (Pro)

inalign-analyze --api-key sk-ant-xxx --latest --save     # Analyze with Claude API
inalign-analyze --api-key sk-xxx --provider openai --latest  # Analyze with OpenAI
inalign-analyze --latest --api-key KEY --max-records 50  # Limit records (for API rate limits)
inalign-analyze --latest --api-key KEY --json            # Raw JSON output

Deep security analysis powered by your own LLM API key. See AI Security Analyzer below.

What You Get

17 MCP Tools, Zero Configuration

Once installed, your AI agent automatically gains:

| Category   | Tools                                                                        | What it does                                                                      |
|------------|------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
| Provenance | record_action, record_user_command, get_provenance, verify_provenance        | Cryptographic audit trail for every action                                        |
| Audit      | generate_audit_report, verify_third_party, export_report                     | Compliance reports, HTML export, third-party verifiable proof                     |
| Risk       | analyze_risk, get_behavior_profile, get_agent_risk, get_user_risk, list_agents_risk | Pattern detection: data exfiltration, privilege escalation, suspicious tool chains |
| Policy     | get_policy, set_policy, list_policies, simulate_policy                       | Runtime guardrails with 3 presets                                                 |
| Sessions   | list_sessions                                                                | Browse past audit sessions                                                        |

How the Hash Chain Works

Every agent action is recorded as a provenance record with a SHA-256 hash that includes the previous record's hash:

Record #1 ──hash──> Record #2 ──hash──> Record #3
   |                    |                    |
   +-- user_command     +-- file_write       +-- tool_call
       sha256: a1b2c3       sha256: d4e5f6       sha256: 7a8b9c
       prev:   000000       prev:   a1b2c3       prev:   d4e5f6

Each record's hash is computed from: action_type + action_name + timestamp + activity_attributes + previous_hash. Modify any record? Its hash changes. The next record's prev no longer matches. Chain broken. Tampering detected.

This is the same principle behind Git commits and blockchains — except applied to AI agent actions.
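To make the mechanism concrete, here is a minimal Python sketch of the chaining rule described above. It is an illustration only: the field names and JSON serialization are assumptions, not InALign's actual record format.

```python
import hashlib
import json

def record_hash(action_type, action_name, timestamp, attributes, prev_hash):
    """Hash a record from its action fields plus the previous record's
    hash, so that modifying any earlier record breaks every later link."""
    payload = json.dumps(
        [action_type, action_name, timestamp, attributes, prev_hash],
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(actions):
    """Link records into a hash chain, starting from an all-zero hash."""
    prev = "0" * 64
    chain = []
    for a in actions:
        h = record_hash(a["type"], a["name"], a["ts"], a["attrs"], prev)
        chain.append({**a, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        if record_hash(rec["type"], rec["name"], rec["ts"], rec["attrs"], prev) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

actions = [
    {"type": "user_command", "name": "prompt", "ts": 1, "attrs": {}},
    {"type": "file_write", "name": "config.py", "ts": 2, "attrs": {}},
    {"type": "tool_call", "name": "bash", "ts": 3, "attrs": {}},
]
chain = build_chain(actions)
assert verify_chain(chain)

chain[1]["name"] = "evil.py"    # tamper with the middle record
assert not verify_chain(chain)  # chain breaks, tampering detected
```

The same recompute-and-compare loop is what any chain verifier does, which is why verification stays fast even for thousands of records.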

Verification methods:

  • verify_provenance — Checks the entire hash chain for integrity
  • verify_third_party — Generates a self-contained proof package that anyone can independently verify without trusting InALign
  • Merkle Root — Session-level summary hash for efficient batch verification
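The Merkle-root idea behind session-level summaries can be sketched as a simple pairwise SHA-256 tree. Note this is an assumed layout (odd leaves carried up unchanged); InALign's exact tree construction is not specified here.

```python
import hashlib

def merkle_root(leaf_hashes):
    """Hash leaves pairwise, level by level, until one root remains.
    Any change to any leaf changes the root, so a single hash
    summarizes (and protects) the whole session."""
    if not leaf_hashes:
        return hashlib.sha256(b"").hexdigest()
    level = list(leaf_hashes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            pair = (level[i] + level[i + 1]).encode()
            nxt.append(hashlib.sha256(pair).hexdigest())
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # odd leaf carried up unchanged
        level = nxt
    return level[0]

record_hashes = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(5)]
root = merkle_root(record_hashes)
assert merkle_root(record_hashes) == root       # deterministic
assert merkle_root(record_hashes[:-1]) != root  # dropping a record moves the root
```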

Current status & roadmap:

  • SHA-256 hash chains with local SQLite storage (shipping now)
  • Ed25519 digital signatures (shipping now)
  • Blockchain anchoring for additional tamper evidence (planned)

Ed25519 Digital Signatures

Every record is automatically signed with a machine-local Ed25519 private key. This adds non-repudiation on top of the hash chain:

| Attack                         | Hash chain only | Hash chain + signatures                           |
|--------------------------------|-----------------|---------------------------------------------------|
| Modify a single record         | Detected        | Detected                                          |
| Replace entire database        | Not detected    | Detected — attacker doesn't have the private key  |
| Prove which machine created it | Cannot          | Can — signature ties record to a specific keypair |

How it works:

  1. On first run, a keypair is generated at ~/.inalign/signing_key (private) and ~/.inalign/signing_key.pub (public)
  2. Every provenance record is signed: Ed25519(private_key, record_hash) -> 64-byte signature
  3. verify_provenance checks both hash chain integrity AND signature validity
  4. verify_third_party exports the public key so anyone can independently verify signatures

Zero configuration required. If the cryptography library is installed (it usually is), signing happens automatically. If not, records are still hash-chained — just unsigned.

# Enable signing (if not already installed)
pip install cryptography
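A short sketch of what signing adds, using the same cryptography library's Ed25519 primitives. The keypair here is generated in memory purely for illustration (InALign stores its keypair under ~/.inalign/), and the record hash is a stand-in value:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a keypair in memory (illustration only).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record_digest = b"d4e5f6..."  # stand-in for a record's SHA-256 hash
signature = private_key.sign(record_digest)
assert len(signature) == 64   # Ed25519 signatures are always 64 bytes

# Anyone holding the public key can verify the genuine record...
public_key.verify(signature, record_digest)  # raises InvalidSignature on mismatch

# ...and a swapped-in record fails, even if its own hash chain looks valid:
try:
    public_key.verify(signature, b"tampered-record-hash")
    raise AssertionError("forged record accepted")
except InvalidSignature:
    pass  # tampering detected
```

This is why wholesale database replacement fails: an attacker can rebuild a consistent hash chain, but cannot produce valid signatures without the private key.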

Risk Analysis

Pattern detection catches:

  • Data exfiltration — reading secrets then making network calls
  • Privilege escalation — unusual permission patterns
  • Suspicious tool chains — uncommon sequences of actions
  • Anomalous behavior — deviations from baseline patterns
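A toy version of one such rule, the exfiltration pattern of a secret read followed by a network call, might look like the following. The action dictionaries and keyword lists are hypothetical, chosen only to show the sequential-pattern idea:

```python
# Hypothetical hints; a real detector would use richer signals.
SECRET_HINTS = (".env", "id_rsa", "credentials", "secrets")
NETWORK_TOOLS = ("curl", "wget", "requests", "fetch")

def exfiltration_risk(actions):
    """Flag sessions where a network call happens after a secret read."""
    secret_seen = False
    for action in actions:
        if action["type"] == "file_read" and any(
            hint in action["target"] for hint in SECRET_HINTS
        ):
            secret_seen = True
        if secret_seen and action["type"] == "tool_call" and any(
            tool in action["target"] for tool in NETWORK_TOOLS
        ):
            return True
    return False

session = [
    {"type": "file_read", "target": "~/.aws/credentials"},
    {"type": "tool_call", "target": "curl https://evil.example.com"},
]
assert exfiltration_risk(session)
assert not exfiltration_risk(session[::-1])  # network call before the read: no flag
```

Because the provenance chain preserves exact ordering, sequence-sensitive rules like this one are cheap to evaluate after the fact.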

Policy Engine

Three presets, switchable at runtime:

| Preset            | Use case                           |
|-------------------|------------------------------------|
| STRICT_ENTERPRISE | Production, regulated environments |
| BALANCED          | Default, everyday development      |
| DEV_SANDBOX       | Experimentation, permissive        |

Simulate before deploying:

simulate_policy("STRICT_ENTERPRISE")
-> 12 actions would be blocked, 3 masked, 47 allowed

Report Dashboard

Run inalign-report to open an interactive dashboard with 4 tabs:

| Tab              | What it shows                                                    |
|------------------|------------------------------------------------------------------|
| Overview         | Session summary, record counts, verification status, risk score  |
| Provenance Chain | Full hash chain with timestamps, action types, and hash values   |
| Session Log      | Complete conversation history from Claude Code sessions          |
| AI Analysis      | Deep security analysis results (requires Pro + API key)          |

The dashboard includes JSON/CSV export for all data. Session logs are loaded from ~/.inalign/sessions/ (use inalign-ingest --latest --save to populate).

AI Security Analyzer (Pro)

Deep LLM-powered security analysis of agent sessions. Uses your own API key — data goes directly from your machine to your LLM provider. InALign never sees it, because there is no InALign server to see it.

How it works:

  1. Reads your session data locally
  2. Masks PII (API keys, passwords, emails, SSH keys, JWTs — 14 patterns)
  3. Sends masked data to your chosen LLM provider for analysis
  4. Returns risk score, findings, and recommendations
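Step 2's masking pass can be sketched with a few regex substitutions. This is a hypothetical subset (the real analyzer covers 14 patterns), shown only to illustrate that masking happens before anything leaves the machine:

```python
import re

# Illustrative subset of PII patterns (labels and regexes are assumptions).
PATTERNS = {
    "ANTHROPIC_KEY": re.compile(r"sk-ant-[A-Za-z0-9_-]+"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "JWT": re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),
}

def mask_pii(text):
    """Replace each sensitive match with a labeled placeholder before the
    text is sent to the LLM provider."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[MASKED_{label}]", text)
    return text

print(mask_pii("key=sk-ant-abc123 sent by dev@example.com"))
# -> key=[MASKED_ANTHROPIC_KEY] sent by [MASKED_EMAIL]
```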

Supported providers:

  • Claude API (Anthropic) — auto-detected from sk-ant-* keys
  • OpenAI API (GPT-4o) — auto-detected from sk-* keys

Analysis includes:

  • Causal chain analysis (user_prompt -> thinking -> tool_call -> tool_result)
  • Risk scoring (0-100 with LOW/MEDIUM/HIGH/CRITICAL levels)
  • Specific security findings with evidence
  • Actionable recommendations

inalign-analyze --api-key YOUR_KEY --latest --save

Reports are saved to ~/.inalign/analysis/.

Supported Agents

Works with any agent that supports MCP (Model Context Protocol):

| Agent         | Status           |
|---------------|------------------|
| Claude Code   | Fully tested     |
| Cursor        | MCP compatible   |
| Windsurf      | MCP compatible   |
| Continue.dev  | MCP compatible   |
| Cline         | MCP compatible   |
| Custom agents | Via MCP protocol |

Example: Incident Investigation

Production config was modified unexpectedly. Who did it?

You:    "generate an audit report for this session"

InALign: Audit Report
         ---
         Session:  abc123def456
         Records:  23 actions recorded
         Chain:    VERIFIED (all hashes valid)

         Timeline:
         11:12:06  user_command  "Delete all logs from /var/log"
         11:12:08  file_write    config.py (modified)
         11:12:09  tool_call     bash: rm -rf /var/log/*

         Risk:     HIGH - destructive file operations detected
         Policy:   2 actions would be blocked under STRICT_ENTERPRISE

From vague concern to cryptographic proof in seconds.

Architecture

+----------------------------------------------------+
|  Your AI Agent (Claude Code / Cursor / etc.)       |
|                                                    |
|  +----------------------------------------------+  |
|  |  InALign MCP Server (runs locally)           |  |
|  |                                              |  |
|  |  Action -> SHA-256 Hash Chain + Ed25519 Sign |  |
|  |              |                               |  |
|  |     +--------+--------+--------+             |  |
|  |     v        v        v        v             |  |
|  |  SQLite   Neo4j    Cloud    Memory           |  |
|  |  (default) (opt.)  (opt.)  (fallback)        |  |
|  |                                              |  |
|  |  + Risk Analysis                             |  |
|  |  + Policy Engine (3 presets)                 |  |
|  |  + Report Dashboard (4-tab UI)               |  |
|  |  + AI Security Analyzer (Pro)                |  |
|  +----------------------------------------------+  |
+----------------------------------------------------+

Privacy by architecture: InALign has no server, no cloud, no database you connect to. The MCP server runs entirely on your machine. Your code, credentials, and session data never leave your local environment. Even Pro features (AI analysis) use your own API key directly — we literally cannot access your data because there is nowhere for it to go.

Performance: Recording 1,000 actions adds ~50ms total overhead. Hash chain verification of 10,000 records completes in <200ms. No measurable impact on agent response time.

Storage Modes

| Mode   | Setup               | Persistence                          | Best for                           |
|--------|---------------------|--------------------------------------|------------------------------------|
| SQLite | --local (default)   | Permanent, ~/.inalign/provenance.db  | Most users, local dev, compliance  |
| Memory | Automatic fallback  | Per session only                     | Quick testing                      |
| Neo4j  | Optional, self-host | Permanent                            | Graph queries, large teams         |

SQLite is the recommended default. It requires no external services and persists across sessions.

Self-Hosting

Everything runs on your own machine by default:

pip install inalign-mcp && inalign-install --local

That's it. SQLite storage, local dashboard, full functionality. No external dependencies.

Optional: Neo4j for graph storage

For teams that need graph-based querying:

pip install inalign-mcp[neo4j]

export NEO4J_URI=neo4j://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=your-password

Development

git clone https://github.com/Intellirim/inalign.git
cd inalign/mcp-server
pip install -e ".[dev]"
pytest

License

MIT — use it however you want.

