
Open-source MCP security, aggregation, and monitoring. Single-user, self-hosted MCP proxy.


OpenEdison 🔒⚡️

Deterministic Agentic Data Firewall


Agentic AI breaks traditional data security. OpenEdison secures and unifies agent data access, stopping data leaks by mediating your agent's interactions with your data and software.

Gain visibility, monitor potential threats, and get alerts on the data your agent is reading/writing.

How is it different from other MCP Gateways? Read our MCP Gateway Comparison Blog and our OpenEdison release post.

OpenEdison helps address the lethal trifecta problem, a combination of agent capabilities that raises the risk of agent hijacking and data exfiltration by malicious actors.

Join our Discord for feedback, feature requests, and to discuss MCP security for your use case: discord.gg/tXjATaKgTV

📧 To get visibility, control, and exfiltration blocking for AI's interactions with your company software, systems of record, and databases, contact us to discuss.



Features ✨

  • 🛑 Data leak monitoring - Detects and blocks potential data leaks through configurable security controls
  • 🕰️ Controlled execution - Provides structured execution controls to reduce data exfiltration risks
  • 🗂️ Easily configurable - Configure and manage your MCP servers from a single config.json
  • 📊 Visibility into agent interactions - Track and monitor your agents and their interactions with connected software/data via MCP calls
  • 🔗 Simple API - REST API for managing MCP servers and proxying requests
  • 🐳 Docker support - Run in a container for easy deployment

🤝 Quick integration with LangGraph and other agent frameworks

Open-Edison integrates with LangGraph, LangChain, and plain Python agents by decorating your tools/functions with @edison.track(). This provides immediate observability and policy enforcement without invasive changes.

🔎 Dataflow observability (LangGraph demo)

Open Edison dataflow observability while running the LangGraph long_running_toolstorm_agent.py demo

โšก๏ธ One-line tool integration

Just add @edison.track() to your tools/functions to enable Open-Edison controls and observability.

Adding @edison.track() to an existing agent or tool
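The shape of this integration can be sketched with a minimal stand-in decorator (illustrative only, not Open-Edison's actual implementation; the real @edison.track() also applies Open-Edison's permission checks and observability):

```python
import functools

def track():
    """Minimal stand-in for @edison.track(): logs each call, then delegates.

    The real decorator additionally records the call for observability and
    enforces Open-Edison's configured security policies.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"[edison] calling tool: {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@track()
def add(a: int, b: int) -> int:
    """An example tool function; its calls are now intercepted."""
    return a + b

result = add(2, 3)  # logged by the wrapper, then executed normally
```

The point is that decoration is non-invasive: the tool's signature and behavior are unchanged, so existing agents keep working.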

Read more in docs/langgraph_quickstart.md

About Edison.watch ๐Ÿข

Edison helps you gain observability, control, and policy enforcement for AI interactions with systems of record, existing company software, and data. Reduce the risk of AI-caused data leakage with streamlined setup for cross-system governance.

OpenEdison vs EdisonWatch - EdisonWatch adds Multi-Tenancy, SIEM, SSO, and Auto-Enforcement

| Feature | OpenEdison (Open Source) | EdisonWatch (Commercial) |
| --- | --- | --- |
| Single User | ✅ | ✅ |
| MCP Security Controls | ✅ | ✅ |
| Lethal Trifecta Detection | ✅ | ✅ |
| Tool/Resource Permissions | ✅ | ✅ |
| Multi-Tenancy | ❌ | ✅ |
| SIEM Integration | ❌ | ✅ |
| SSO (Single Sign-On) | ❌ | ✅ |
| Client Software for Auto-Enforcement | ❌ | ✅ |

👉 Interested in EdisonWatch? Visit edison.watch or contact us.

Quick Start 🚀

The fastest way to get started:

# Installs uv (via Astral installer) and launches open-edison with uvx.
# Note: This does NOT install Node/npx. Install Node if you plan to use npx-based tools like mcp-remote.
curl -fsSL https://raw.githubusercontent.com/Edison-Watch/open-edison/main/curl_pipe_bash.sh | bash

Or run locally with uvx:

uvx open-edison

This will run the setup wizard if necessary.

โฌ‡๏ธ Install Node.js/npm (optional for MCP tools)

If you need npx (for Node-based MCP tools like mcp-remote), install Node.js as well:

macOS

  • uv: curl -fsSL https://astral.sh/uv/install.sh | sh
  • Node/npx: brew install node

Linux

  • uv: curl -fsSL https://astral.sh/uv/install.sh | sh
  • Node/npx: sudo apt-get update && sudo apt-get install -y nodejs npm

Windows

  • uv: powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  • Node/npx: winget install -e --id OpenJS.NodeJS

After installation, ensure that npx is available on PATH.

Install from PyPI

Prerequisites:

  • pipx or uvx
# Using uvx
uvx open-edison

# Using pipx
pipx install open-edison
open-edison

Run with a custom config directory:

open-edison run --config-dir ~/edison-config
# or via environment variable
OPEN_EDISON_CONFIG_DIR=~/edison-config open-edison run
Run with Docker

There is a Dockerfile for simple local setup.

# Single-line:
git clone https://github.com/Edison-Watch/open-edison.git && cd open-edison && make docker_run

# Or
# Clone repo
git clone https://github.com/Edison-Watch/open-edison.git
# Enter repo
cd open-edison
# Build and run
make docker_run

The MCP server will be available at http://localhost:3000 and the API + frontend at http://localhost:3001. 🌐

โš™๏ธ Run from source
  1. Clone the repository:
git clone https://github.com/Edison-Watch/open-edison.git
cd open-edison
  1. Set up the project:
make setup
  1. Edit config.json to configure your MCP servers. See the full file: config.json, it looks like:
{
  "server": { "host": "0.0.0.0", "port": 3000, "api_key": "..." },
  "logging": { "level": "INFO"},
  "mcp_servers": [
    { "name": "filesystem", "command": "uvx", "args": ["mcp-server-filesystem", "/tmp"], "enabled": true },
    { "name": "github", "enabled": false, "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "..." } }
  ]
}
  1. Run the server:
make run
# or, from the installed package
open-edison run

The server will be available at http://localhost:3000. 🌐

🔌 MCP Connection

Connect any MCP client to Open Edison (requires Node.js/npm for npx):

npx -y mcp-remote http://localhost:3000/mcp/ --http-only --header "Authorization: Bearer your-api-key"

Or add to your MCP client config:

{
  "mcpServers": {
    "open-edison": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "http://localhost:3000/mcp/", "--http-only", "--header", "Authorization: Bearer your-api-key"]
    }
  }
}
🤖 Connect to ChatGPT (Plus/Pro)

Open-Edison comes preconfigured with ngrok for easy ChatGPT integration. Follow these steps to connect:

1. Set up ngrok Account

  1. Visit https://dashboard.ngrok.com to sign up for a free account
  2. Get your authtoken from the "Your Authtoken" page
  3. Create a domain name in the "Domains" page
  4. Set these values in your ngrok.yml file:
version: 3

agent:
  authtoken: YOUR_NGROK_AUTH_TOKEN

endpoints:
  - name: open-edison-mcp
    url: https://YOUR_DOMAIN.ngrok-free.app
    upstream:
      url: http://localhost:3000
      protocol: http1

2. Start ngrok Tunnel

make ngrok-start

This will start the ngrok tunnel and make Open-Edison accessible via your custom domain.

3. Enable Developer Mode in ChatGPT

  1. Click on your profile icon in ChatGPT
  2. Select Settings
  3. Go to "Connectors" in the settings menu
  4. Select "Advanced Settings"
  5. Enable "Developer Mode (beta)"

4. Add Open-Edison to ChatGPT

  1. Click on your profile icon in ChatGPT
  2. Select Settings
  3. Go to "Connectors" in the settings menu
  4. Select "Create" next to "Browse connections"
  5. Set a name (e.g., "Open-Edison")
  6. Use your ngrok URL as the MCP Server URL (e.g., https://your-domain.ngrok-free.app/mcp/)
  7. Select "No authentication" in the Authentication menu
  8. Tick the "I trust this application" checkbox
  9. Press Create

5. Use Open-Edison in ChatGPT

Every time you start a new chat:

  1. Click on the plus sign in the prompt text box ("Ask anything")
  2. Hover over "... More"
  3. Click on "Developer Mode"
  4. "Developer Mode" and your connector name (e.g., "Open-Edison") will appear at the bottom of the prompt textbox

You can now use Open-Edison's MCP tools directly in your ChatGPT conversations! Don't forget to repeat step 5 every time you start a new chat.

🧭 Usage

API Endpoints

See API Reference for full API documentation.

๐Ÿ› ๏ธ Development

Setup 🧰

Set up from source as described above.

Run ▶️

The server doesn't auto-reload at the moment, so during development you'll need to stop it (Ctrl-C) and re-run it after changes.

make run

Tests/code quality ✅

We expect make ci to return cleanly.

make ci
โš™๏ธ Configuration (config.json)

Configuration โš™๏ธ

The config.json file contains all configuration:

  • server.host - Server host (default: localhost)
  • server.port - Server port (default: 3000)
  • server.api_key - API key for authentication
  • logging.level - Log level (DEBUG, INFO, WARNING, ERROR)
  • mcp_servers - Array of MCP server configurations

Each MCP server configuration includes:

  • name - Unique name for the server
  • command - Command to run the MCP server
  • args - Arguments for the command
  • env - Environment variables (optional)
  • enabled - Whether to auto-start this server

๐Ÿ” How Edison reduces data leakages

🔱 The lethal trifecta and agent lifecycle management

Open Edison includes a comprehensive security monitoring system that tracks the "lethal trifecta" of AI agent risks, as described in Simon Willison's blog post:

(Figure: the lethal trifecta diagram showing the three key AI agent security risks.)

  1. Private data access - Access to sensitive local files/data
  2. Untrusted content exposure - Exposure to external/web content
  3. External communication - Ability to write/send data externally

(Figure: a Privileged Access Management (PAM) example showing the lethal trifecta in action.)

The configuration allows you to classify these risks across tools, resources, and prompts using separate configuration files.

In addition to the trifecta, we track an Access Control Level (ACL) for each tool call: each tool has an ACL level (PUBLIC, PRIVATE, or SECRET), and we track the highest ACL level reached in each session. If a write operation is attempted at a lower ACL level than the session's highest, it can be blocked based on your configuration.
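As a rough illustration, the session-level tracking described above can be modeled like this (a simplified sketch, not Open Edison's actual implementation; the class and field names are invented):

```python
from dataclasses import dataclass

# Ordering of ACL levels, lowest to highest.
ACL_RANK = {"PUBLIC": 0, "PRIVATE": 1, "SECRET": 2}

@dataclass
class Session:
    """Tracks the three trifecta flags plus the session's highest-ACL watermark."""
    read_private_data: bool = False
    read_untrusted_public_data: bool = False
    write_operation: bool = False
    highest_acl: str = "PUBLIC"

    def record(self, perm: dict) -> bool:
        """Apply one tool call's permissions; return False if it must be blocked."""
        # Block writes to a destination below the session's ACL watermark.
        if perm.get("write_operation") and \
                ACL_RANK[perm.get("acl", "PUBLIC")] < ACL_RANK[self.highest_acl]:
            return False
        # Block the call that would complete the lethal trifecta.
        flags = (
            self.read_private_data or perm.get("read_private_data", False),
            self.read_untrusted_public_data or perm.get("read_untrusted_public_data", False),
            self.write_operation or perm.get("write_operation", False),
        )
        if all(flags):
            return False
        # Otherwise, update session state and allow the call.
        self.read_private_data, self.read_untrusted_public_data, self.write_operation = flags
        self.highest_acl = max(self.highest_acl, perm.get("acl", "PUBLIC"),
                               key=ACL_RANK.__getitem__)
        return True
```

For example, after a PRIVATE-level read, a write with only PUBLIC ACL would be blocked, because the session watermark has already been raised to PRIVATE.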

🧰 Tool Permissions (tool_permissions.json)

Defines security classifications for MCP tools. See full file: tool_permissions.json, it looks like:

{
  "_metadata": { "last_updated": "2025-08-07" },
  "builtin": {
    "get_security_status": { "enabled": true, "write_operation": false, "read_private_data": false, "read_untrusted_public_data": false, "acl": "PUBLIC" }
  },
  "filesystem": {
    "read_file": { "enabled": true, "write_operation": false, "read_private_data": true, "read_untrusted_public_data": false, "acl": "PRIVATE" },
    "write_file": { "enabled": true, "write_operation": true, "read_private_data": true, "read_untrusted_public_data": false, "acl": "PRIVATE" }
  }
}
๐Ÿ“ Resource Permissions (`resource_permissions.json`)

Resource Permissions (resource_permissions.json)

Defines security classifications for resource access patterns. See full file: resource_permissions.json, it looks like:

{
  "_metadata": { "last_updated": "2025-08-07" },
  "builtin": { "config://app": { "enabled": true, "write_operation": false, "read_private_data": false, "read_untrusted_public_data": false } }
}
💬 Prompt Permissions (prompt_permissions.json)

Defines security classifications for prompt types. See full file: prompt_permissions.json, it looks like:

{
  "_metadata": { "last_updated": "2025-08-07" },
  "builtin": { "summarize_text": { "enabled": true, "write_operation": false, "read_private_data": false, "read_untrusted_public_data": false } }
}

Wildcard Patterns ✨

All permission types support wildcard patterns:

  • Tools: server_name/* (e.g., filesystem/* matches all filesystem tools)
  • Resources: scheme:* (e.g., file:* matches all file resources)
  • Prompts: type:* (e.g., template:* matches all template prompts)
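The intended matching behavior can be illustrated with a small lookup sketch (illustrative only; the actual resolver may differ), using Python's fnmatch for the wildcards:

```python
from fnmatch import fnmatch

# Example permission entries: exact names plus wildcard patterns.
permissions = {
    "filesystem/read_file": {"acl": "PRIVATE"},
    "filesystem/*": {"acl": "PRIVATE"},  # matches any filesystem tool
    "file:*": {"enabled": True},         # matches any file resource
}

def lookup(name: str):
    """Resolve a permission entry: exact match first, then wildcard patterns."""
    if name in permissions:
        return permissions[name]
    for pattern, perm in permissions.items():
        if "*" in pattern and fnmatch(name, pattern):
            return perm
    return None  # unknown names get no entry and are rejected
```

With these entries, filesystem/write_file falls through to the filesystem/* pattern, while an unconfigured tool resolves to nothing.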

Security Monitoring 🕵️

All items must be explicitly configured - unknown tools/resources/prompts will be rejected for security.

Use the get_security_status tool to monitor your session's current risk level and see which capabilities have been accessed. When the lethal trifecta is achieved (all three risk flags set), further potentially dangerous operations are blocked.

Documentation 📚

📚 Complete documentation is available in docs/

📄 License

GPL-3.0 License - see LICENSE for details.

Download files

Source distribution: open_edison-0.1.133rc1.tar.gz (415.8 kB)
Built distribution: open_edison-0.1.133rc1-py3-none-any.whl (347.0 kB)
