Hardware-isolated Linux sandbox for AI agents — Firecracker MicroVM + MCP
BunkerVM
Docker Desktop for AI sandboxing.
Double-click to install. Hardware-isolated VMs for your AI agents.
Your AI agent can run rm -rf /. Let it — inside a bunker.
BunkerDesktop — One-Click Sandbox Manager
BunkerDesktop is the easiest way to run BunkerVM. Download the installer, double-click, done.
- Native Windows app — no browser, no terminal, no Docker
- Automatic WSL2 + backend setup — the installer handles everything
- Dashboard — create, monitor, and destroy sandboxes with a click
- Live logs — filter by sandbox, log level, auto-scroll
- Start on login — engine runs in the background, always ready
Install
- Download BunkerDesktopSetup.exe from Releases
- Run the installer — it sets up WSL2, installs the backend, creates shortcuts
- Launch BunkerDesktop from your desktop
That's it. No pip install, no WSL commands, no config files.
VS Code + Copilot users: BunkerDesktop runs the engine in the background. Once it's running, Copilot Chat automatically connects to it — every tool call runs in a hardware-isolated VM.
For Developers — pip install bunkervm
If you prefer the command line or want to integrate BunkerVM into your own code:
pip install bunkervm
from bunkervm import run_code
result = run_code("print('Hello from a microVM!')")
print(result) # Hello from a microVM!
One function. VM boots (~3s), code runs, VM dies. Your host was never touched.
See bunkervm demo output
╔══════════════════════════════════════╗
║ BunkerVM Demo ║
║ Hardware-isolated AI sandbox ║
╚══════════════════════════════════════╝
Starting BunkerVM...
Launching Firecracker microVM...
Running code inside sandbox...
OS: Linux-6.1.102-x86_64-with
Hostname: bunkervm
Python: 3.12.12
Prime numbers under 100:
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
✓ Code ran safely inside a Firecracker microVM
✓ Full Linux environment (not a container)
✓ Hardware-level isolation via KVM
✓ VM will be destroyed after this demo
Done. ✓ Demo completed in 3.6s
The Problem
AI agents generate and execute code on your machine. One bad LLM output and your files, credentials, or entire system could be gone. Docker shares the kernel — container escapes are real. You need hardware isolation.
The fix: BunkerVM boots a Firecracker microVM in ~3 seconds, runs the code inside a throwaway Linux sandbox with its own kernel, and destroys everything after.
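The boot-run-destroy lifecycle described above can be sketched as a context manager. This is an illustrative stand-in only — `throwaway_sandbox` and its dict "VM" are hypothetical, not BunkerVM's API; the real thing boots a Firecracker microVM where the dict is created:

```python
from contextlib import contextmanager

@contextmanager
def throwaway_sandbox():
    # Stand-in: a real implementation would boot a Firecracker
    # microVM here (~3s) instead of creating a dict.
    vm = {"state": "running", "output": []}
    try:
        yield vm
    finally:
        # Teardown always runs, even if the guest code raised --
        # the VM, and anything malicious done inside it, is gone.
        vm["state"] = "destroyed"

with throwaway_sandbox() as vm:
    vm["output"].append("print('hi')")  # pretend we ran code here
print(vm["state"])  # destroyed
```

The key property is in the `finally` block: no code path, including a crash inside the sandbox, skips teardown.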
Why Not Docker?
| | BunkerDesktop | BunkerVM (CLI) | Docker |
|---|---|---|---|
| Setup | Double-click installer | pip install bunkervm | Dockerfile + build + run |
| Isolation | Hardware (KVM) | Hardware (KVM) | Shared kernel |
| Escape risk | Near zero | Near zero | Container escapes exist |
| Boot time | ~3s | ~3s | ~0.5s |
| Dashboard | Built-in GUI | Web dashboard | Docker Desktop |
| Target user | Everyone | Developers | DevOps |
Framework Integrations
Every integration auto-boots a Firecracker VM and exposes 6 sandboxed tools — run_command, write_file, read_file, list_directory, upload_file, download_file.
All toolkits inherit from BunkerVMToolsBase — identical behaviour regardless of framework.
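The shared-base pattern can be sketched in a few lines. The class and method names below are assumptions for illustration, not BunkerVM's actual internals — the point is that each sandbox operation is implemented once, and framework adapters only wrap the bound methods:

```python
# Hypothetical sketch of a shared tools base: one implementation
# per sandbox operation, reused by every framework adapter.
class ToolsBase:
    def run_command(self, cmd: str) -> str:
        # A real implementation would execute `cmd` inside the VM
        # over vsock; here we just echo for illustration.
        return f"ran: {cmd}"

    def get_tools(self):
        # Framework adapters wrap these bound methods in their own
        # tool types (LangChain Tool, CrewAI tool, etc.), so the
        # behaviour is identical regardless of framework.
        return [self.run_command]

tools = ToolsBase().get_tools()
result = tools[0]("echo hi")  # ran: echo hi
```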
LangChain / LangGraph
pip install bunkervm[langgraph] langchain-openai
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from bunkervm.langchain import BunkerVMToolkit
with BunkerVMToolkit() as toolkit:  # boots VM (~3s)
    agent = create_agent(
        ChatOpenAI(model="gpt-4o"),
        tools=toolkit.get_tools(),  # 6 sandbox tools
    )
    agent.invoke({"messages": [("user", "Find primes under 100")]})
# VM auto-destroyed
Agent execution output
⏳ Booting sandbox VM... ✅ Sandbox ready
→ write_file: /tmp/primes.py (312 bytes)
→ run_command: python3 /tmp/primes.py ← OK (42ms)
🤖 [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
🧹 Sandbox destroyed.
OpenAI Agents SDK
pip install bunkervm[openai-agents]
from agents import Agent, Runner
from bunkervm.openai_agents import BunkerVMTools
tools = BunkerVMTools() # boots VM (~3s)
agent = Agent(
    name="coder",
    instructions="You write and run code inside a secure VM.",
    tools=tools.get_tools(),  # 6 sandbox tools
)
result = Runner.run_sync(agent, "First 20 Fibonacci numbers")
print(result.final_output)
tools.stop()
Agent execution output
⏳ Booting sandbox VM... ✅ Sandbox ready
→ write_file: /tmp/fib.py (198 bytes)
→ run_command: python3 /tmp/fib.py ← OK (38ms)
🤖 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377,
610, 987, 1597, 2584, 4181
🧹 Sandbox destroyed.
CrewAI
pip install bunkervm[crewai]
from crewai import Agent, Task, Crew
from bunkervm.crewai import BunkerVMCrewTools
tools = BunkerVMCrewTools() # boots VM (~3s)
coder = Agent(
    role="Software Engineer",
    goal="Write and test code inside a secure sandbox",
    tools=tools.get_tools(),  # 6 sandbox tools
)
task = Task(description="Bubble sort a random list", agent=coder,
            expected_output="The sorted list")
Crew(agents=[coder], tasks=[task]).kickoff()
tools.stop()
Agent execution output
⏳ Booting sandbox VM... ✅ Sandbox ready
🔧 write_file → /tmp/sort.py ✅ 403 bytes
🔧 run_command → python3 /tmp/sort.py
Original: [83, 11, 25, 19, 86, 52, 97, 5, 70, 69]
Sorted: [5, 11, 19, 25, 52, 69, 70, 83, 86, 97]
🧹 Sandbox destroyed.
Install all integrations
pip install bunkervm[all] # LangChain + OpenAI Agents SDK + CrewAI
Full working examples:
examples/
VS Code + Copilot
Every line of code Copilot runs — hardware-isolated.
Option A: BunkerDesktop (recommended)
Just install BunkerDesktop and it works. The engine runs in the background and VS Code auto-connects.
Option B: Manual setup (2 commands)
pip install bunkervm
bunkervm vscode-setup
That's it. Reload VS Code (Ctrl+Shift+P → "Reload Window"). Copilot Chat now has 8 sandboxed tools.
Windows users: These commands run in your normal PowerShell terminal.
vscode-setup auto-detects Windows, creates an isolated Python environment inside WSL, installs BunkerVM there, and generates the correct config. You don't need to touch WSL directly.
Enable internet inside the VM (optional)
bunkervm enable-network
On Windows this auto-proxies into WSL and prompts for your Linux password.
On Linux, prefix with sudo.
How it works
- bunkervm vscode-setup generates .vscode/mcp.json — auto-detects your OS
- On Windows: creates ~/.bunkervm/venv inside WSL, installs BunkerVM there automatically
- VS Code starts BunkerVM as an MCP server (via WSL on Windows, directly on Linux)
- A Firecracker microVM boots (~3s) with its own Linux kernel
- Copilot Chat gets 8 tools: sandbox_exec, sandbox_write_file, sandbox_read_file, sandbox_list_dir, sandbox_upload_file, sandbox_download_file, sandbox_status, sandbox_reset
- When Copilot writes code → it runs inside the VM → your host is never touched
Try it
Open Copilot Chat and ask:
- "Write a Python script that finds primes under 1000, save it, and run it in the sandbox"
- "Fetch the top 3 Hacker News posts in the sandbox"
- "Run uname -a in the sandbox to show me the VM's kernel"
Demo
What bunkervm vscode-setup generates
Linux:
{
"servers": {
"bunkervm": {
"command": "/usr/local/bin/bunkervm",
"args": ["server"]
}
}
}
Windows (auto-detected — installs in WSL venv automatically):
{
"servers": {
"bunkervm": {
"command": "wsl",
"args": ["-d", "Ubuntu", "--", "/home/you/.bunkervm/venv/bin/bunkervm", "server"]
}
}
}
More Features
Reusable Sandbox — Keep the VM alive for multiple runs
from bunkervm import Sandbox
with Sandbox() as sb:
    sb.run("x = 42")
    sb.run("y = x * 2")
    result = sb.run("print(f'{x} * 2 = {y}')")
print(result)  # 42 * 2 = 84
State persists between run() calls — variables, imports, everything stays.
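This behaves like a persistent interpreter session. A stand-in using a shared `exec()` namespace on the host shows the mechanism — BunkerVM keeps this kind of state inside the VM, not on your machine:

```python
# Stand-in: a shared namespace dict is what makes variables persist
# across run() calls in a REPL-style sandbox session.
ns = {}
exec("x = 42", ns)
exec("y = x * 2", ns)               # x is still visible here
exec("msg = f'{x} * 2 = {y}'", ns)  # and so is y
print(ns["msg"])  # 42 * 2 = 84
```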
Secure AI Agent — One-line agent sandboxing
from bunkervm import secure_agent
runtime = secure_agent()
result = runtime.run("print('Sandboxed!')")
print(result)
runtime.stop()
Claude Desktop (MCP)
Add to claude_desktop_config.json:
{
"mcpServers": {
"bunkervm": {
"command": "bunkervm",
"args": ["server"]
}
}
}
Windows (WSL2):
{
"mcpServers": {
"bunkervm": {
"command": "wsl",
"args": ["-d", "Ubuntu", "--", "bunkervm", "server"]
}
}
}
Multi-VM Support — Run multiple sandboxes simultaneously
from bunkervm import VMPool
pool = VMPool(max_vms=5)
pool.start("agent-1", cpus=2, memory=1024)
pool.start("agent-2", cpus=1, memory=512)
pool.client("agent-1").exec("echo 'I am agent 1'")
pool.client("agent-2").exec("echo 'I am agent 2'")
pool.stop_all()
Web Dashboard
bunkervm server --transport sse --dashboard
# Dashboard at http://localhost:3001/dashboard
Real-time monitoring: VM status, CPU, memory, live audit log, and reset controls.
MCP Tools — 8 tools exposed via MCP server
| Tool | Description |
|---|---|
| sandbox_exec | Run any shell command |
| sandbox_write_file | Create or edit files |
| sandbox_read_file | Read files |
| sandbox_list_dir | Browse directories |
| sandbox_upload_file | Upload files host → VM |
| sandbox_download_file | Download files VM → host |
| sandbox_status | Check VM health, CPU, RAM |
| sandbox_reset | Wipe sandbox, start fresh |
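Under the hood, an MCP client invokes these tools with a standard JSON-RPC `tools/call` request, as defined by the MCP specification. A representative payload (the tool arguments shown are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "sandbox_exec",
    "arguments": { "command": "uname -a" }
  }
}
```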
CLI Reference
bunkervm demo # See it in action
bunkervm run script.py # Run a script in a sandbox
bunkervm run -c "print(42)" # Run inline code
bunkervm server --transport sse # Start MCP server
bunkervm info # Check system readiness
bunkervm vscode-setup # Set up VS Code MCP integration
bunkervm enable-network # One-time: enable VM networking (needs sudo)
Options:
--cpus N vCPUs (default: 1 for run, 2 for server)
--memory MB RAM in MB (default: 512 for run, 2048 for server)
--no-network Disable internet inside VM
--timeout SECS Execution timeout (default: 30)
--dashboard Enable web dashboard (server mode)
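The effect of --timeout can be approximated host-side with subprocess's own timeout handling. This is a sketch of the general pattern only, not BunkerVM's implementation (which can simply tear down the whole guest VM); `run_with_timeout` is a hypothetical helper:

```python
import subprocess

def run_with_timeout(code: str, timeout: float) -> str:
    # Sketch only: this runs code on the HOST interpreter, which is
    # exactly what BunkerVM avoids -- it runs the code in a guest VM
    # and enforces the timeout by destroying that VM.
    try:
        proc = subprocess.run(
            ["python3", "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout.strip()
    except subprocess.TimeoutExpired:
        return "timed out"

print(run_with_timeout("print(6 * 7)", timeout=30))  # 42
```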
How It Works
Your AI Agent
│
▼
bunkervm ──vsock──▶ Firecracker MicroVM
(host) ┌──────────────────┐
│ Alpine Linux │
│ Python 3.12 │
│ Full toolchain │
│ exec_agent │
└──────────────────┘
Hardware isolation (KVM)
Destroyed after use
- Firecracker — Amazon's micro-VM engine (powers AWS Lambda)
- vsock — Zero-config host↔VM communication
- ~100MB bundle — Firecracker + kernel + rootfs, auto-downloaded on first run
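Host↔VM channels like vsock commonly carry simple length-prefixed messages. The sketch below illustrates that shape over a local socketpair (a real AF_VSOCK socket needs a running VM); the 4-byte-length-plus-JSON framing is an assumption for illustration, not BunkerVM's actual wire protocol:

```python
import json
import socket
import struct

def send_msg(sock, obj):
    # Length-prefixed JSON: 4-byte big-endian length, then payload.
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_msg(sock):
    # Read the length header, then exactly that many payload bytes.
    length = struct.unpack(">I", sock.recv(4))[0]
    return json.loads(sock.recv(length))

# socketpair stands in for the host <-> guest vsock connection.
host, guest = socket.socketpair()
send_msg(host, {"cmd": "uname -a"})                 # host -> guest request
request = recv_msg(guest)
send_msg(guest, {"stdout": "Linux", "exit_code": 0})  # guest -> host reply
reply = recv_msg(host)
print(reply["exit_code"])  # 0
```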
Install
Desktop Users (Windows)
Download BunkerDesktopSetup.exe from Releases — everything is automatic.
Developers
pip install bunkervm # Core
pip install bunkervm[langgraph] # + LangGraph/LangChain
pip install bunkervm[openai-agents] # + OpenAI Agents SDK
pip install bunkervm[crewai] # + CrewAI
pip install bunkervm[all] # Everything
Requirements: Linux with KVM, or Windows WSL2 with nested virtualization. Python 3.10+.
Need /dev/kvm access? Run bunkervm info to diagnose, or sudo usermod -aG kvm $USER then re-login.
WSL2 Setup (Windows)
Add to %USERPROFILE%\.wslconfig:
[wsl2]
nestedVirtualization=true
Then restart WSL: wsl --shutdown
Troubleshooting
| Problem | Solution |
|---|---|
| bunkervm: command not found with sudo | sudo $(which bunkervm) demo or add user to kvm group |
| /dev/kvm not found | sudo modprobe kvm or enable nested virtualization in WSL2 |
| Permission denied: /dev/kvm | sudo usermod -aG kvm $USER then re-login |
| Bundle download fails | Download from Releases → ~/.bunkervm/bundle/ |
| VM fails to start | bunkervm info — diagnoses all prerequisites |
Building from source
git clone https://github.com/ashishgituser/bunkervm.git
cd bunkervm
sudo bash build/setup-firecracker.sh
sudo bash build/build-sandbox-rootfs.sh
pip install -e ".[dev]"
bunkervm demo
License
AGPL-3.0 — Free for personal and open-source use.
If BunkerVM helps you ship safer agents, give it a star ⭐