Open-source Python port of Claude Code - an AI-powered CLI coding assistant
oh – OpenHarness: Open Agent Harness
OpenHarness delivers core lightweight agent infrastructure: tool-use, skills, memory, and multi-agent coordination.
Join the community and contribute to open agent development.
One Command (oh) to Launch OpenHarness and Unlock All Agent Harnesses.
Supports CLI agent integration including OpenClaw, nanobot, Cursor, and more.
✨ OpenHarness's Key Harness Features
| Feature | Highlights |
|---|---|
| 🔁 Agent Loop | Streaming Tool-Call Cycle • API Retry with Exponential Backoff • Parallel Tool Execution • Token Counting & Cost Tracking |
| 🧰 Harness Toolkit | 43 Tools (File, Shell, Search, Web, MCP) • On-Demand Skill Loading (.md) • Plugin Ecosystem (Skills + Hooks + Agents) • Compatible with anthropics/skills & plugins |
| 🧠 Context & Memory | CLAUDE.md Discovery & Injection • Context Compression (Auto-Compact) • MEMORY.md Persistent Memory • Session Resume & History |
| 🛡️ Governance | Multi-Level Permission Modes • Path-Level & Command Rules • PreToolUse / PostToolUse Hooks • Interactive Approval Dialogs |
| 🤝 Swarm Coordination | Subagent Spawning & Delegation • Team Registry & Task Management • Background Task Lifecycle • ClawTeam Integration (Roadmap) |
🤖 What is an Agent Harness?
An Agent Harness is the complete infrastructure that wraps around an LLM to make it a functional agent. The model provides intelligence; the harness provides hands, eyes, memory, and safety boundaries.
OpenHarness is an open-source Python implementation designed for researchers, builders, and the community:
- Understand how production AI agents work under the hood
- Experiment with cutting-edge tools, skills, and agent coordination patterns
- Extend the harness with custom plugins, providers, and domain knowledge
- Build specialized agents on top of proven architecture
📰 What's New
- 2026-04-01 🚨 v0.1.0: initial OpenHarness open-source release featuring the complete Harness architecture.
Start here: Quick Start · Provider Compatibility · Showcase · Contributing · Changelog
🚀 Quick Start
One-Click Install
The fastest way to get started: a single command handles OS detection, dependency checks, and installation.
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash
Options:
| Flag | Description |
|---|---|
| --from-source | Clone from GitHub and install in editable mode (pip install -e .) |
| --with-channels | Also install IM channel dependencies (slack-sdk, python-telegram-bot, discord.py) |
# Install from source (for contributors / latest code)
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash -s -- --from-source
# Install with IM channel support
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash -s -- --with-channels
# Or run locally after cloning
bash scripts/install.sh --from-source --with-channels
The script will:
- Detect your OS (Linux / macOS / WSL)
- Verify Python ≥ 3.10 and Node.js ≥ 18
- Install OpenHarness via pip
- Set up the React TUI (npm install) if Node.js is available
- Create the ~/.openharness/ config directory
- Confirm with oh --version
Prerequisites
- Python 3.10+ and uv
- Node.js 18+ (optional, for the React terminal UI)
- An LLM API key
One-Command Demo
ANTHROPIC_API_KEY=your_key uv run oh -p "Inspect this repository and list the top 3 refactors"
Install & Run
# Clone and install
git clone https://github.com/HKUDS/OpenHarness.git
cd OpenHarness
uv sync --extra dev
# Example: use Kimi as the backend
export ANTHROPIC_BASE_URL=https://api.moonshot.cn/anthropic
export ANTHROPIC_API_KEY=your_kimi_api_key
export ANTHROPIC_MODEL=kimi-k2.5
# Launch
oh # if venv is activated
uv run oh # without activating venv
Non-Interactive Mode (Pipes & Scripts)
# Single prompt → stdout
oh -p "Explain this codebase"
# JSON output for programmatic use
oh -p "List all functions in main.py" --output-format json
# Stream JSON events in real-time
oh -p "Fix the bug" --output-format stream-json
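With stream-json, each output line can be treated as one JSON event, which makes the CLI easy to drive from scripts. A minimal consumer sketch (the event shape used here is an illustrative assumption, not the documented schema):

```python
import json

def iter_events(lines):
    # Parse newline-delimited JSON, skipping blank lines.
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# In practice, `lines` would come from the oh process's stdout,
# e.g. subprocess.Popen([...], stdout=subprocess.PIPE).stdout.
sample = ['{"type": "text", "content": "Fixing bug"}', ""]
for event in iter_events(sample):
    print(event["type"])  # text
```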
🌐 Provider Compatibility
OpenHarness supports three API formats: Anthropic (default), OpenAI-compatible (--api-format openai), and GitHub Copilot (--api-format copilot). The OpenAI format covers a wide range of providers.
Anthropic Format (default)
| Provider profile | Detection signal | Notes |
|---|---|---|
| Anthropic | Default when no custom ANTHROPIC_BASE_URL is set | Default Claude-oriented setup |
| Moonshot / Kimi | ANTHROPIC_BASE_URL contains moonshot or model starts with kimi | Anthropic-compatible endpoint |
| Vertex-compatible | Base URL contains vertex or aiplatform | Anthropic-style gateways on Vertex |
| Bedrock-compatible | Base URL contains bedrock | Bedrock-style deployments |
| Generic Anthropic-compatible | Any other explicit ANTHROPIC_BASE_URL | Proxies and internal gateways |
OpenAI Format (--api-format openai)
Any provider implementing the OpenAI /v1/chat/completions API works out of the box:
| Provider | Base URL | Example models |
|---|---|---|
| Alibaba DashScope | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen3.5-flash, qwen3-max, deepseek-r1 |
| DeepSeek | https://api.deepseek.com | deepseek-chat, deepseek-reasoner |
| OpenAI | https://api.openai.com/v1 | gpt-4o, gpt-4o-mini |
| GitHub Models | https://models.inference.ai.azure.com | gpt-4o, Meta-Llama-3.1-405B-Instruct |
| SiliconFlow | https://api.siliconflow.cn/v1 | deepseek-ai/DeepSeek-V3 |
| Groq | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| Ollama (local) | http://localhost:11434/v1 | Any local model |
# Example: use DashScope
uv run oh --api-format openai \
--base-url "https://dashscope.aliyuncs.com/compatible-mode/v1" \
--api-key "sk-xxx" \
--model "qwen3.5-flash"
# Or via environment variables
export OPENHARNESS_API_FORMAT=openai
export OPENAI_API_KEY=sk-xxx
export OPENHARNESS_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
export OPENHARNESS_MODEL=qwen3.5-flash
uv run oh
GitHub Copilot Format (--api-format copilot)
Use your existing GitHub Copilot subscription as the LLM backend. Authentication uses GitHub's OAuth device flow; no API keys needed.
# One-time login (opens browser for GitHub authorization)
oh auth copilot-login
# Then launch with Copilot as the provider
uv run oh --api-format copilot
# Or via environment variable
export OPENHARNESS_API_FORMAT=copilot
uv run oh
# Check auth status
oh auth status
# Remove stored credentials
oh auth copilot-logout
| Feature | Details |
|---|---|
| Auth method | GitHub OAuth device flow (no API key needed) |
| Token management | Automatic refresh of short-lived session tokens |
| Enterprise | Supports GitHub Enterprise via --github-domain flag |
| Models | Uses Copilot's default model selection |
| API | OpenAI-compatible chat completions under the hood |
🏗️ Harness Architecture
OpenHarness implements the core Agent Harness pattern with 10 subsystems:
openharness/
engine/       # 🧠 Agent Loop – query → stream → tool-call → loop
tools/        # 🔧 43 Tools – file I/O, shell, search, web, MCP
skills/       # 📚 Knowledge – on-demand skill loading (.md files)
plugins/      # 🔌 Extensions – commands, hooks, agents, MCP servers
permissions/  # 🛡️ Safety – multi-level modes, path rules, command deny
hooks/        # ⚡ Lifecycle – PreToolUse/PostToolUse event hooks
commands/     # 💬 54 Commands – /help, /commit, /plan, /resume, ...
mcp/          # 🔗 MCP – Model Context Protocol client
memory/       # 🧠 Memory – persistent cross-session knowledge
tasks/        # 📋 Tasks – background task management
coordinator/  # 🤝 Multi-Agent – subagent spawning, team coordination
prompts/      # 📝 Context – system prompt assembly, CLAUDE.md, skills
config/       # ⚙️ Settings – multi-layer config, migrations
ui/           # 🖥️ React TUI – backend protocol + frontend
The Agent Loop
The heart of the harness. One loop, endlessly composable:
while True:
    response = await api.stream(messages, tools)
    if response.stop_reason != "tool_use":
        break  # Model is done
    tool_results = []
    for tool_call in response.tool_uses:
        # Permission check -> Hook -> Execute -> Hook -> Result
        tool_results.append(await harness.execute_tool(tool_call))
    messages.append(tool_results)
    # Loop continues: the model sees results and decides the next action
The model decides what to do. The harness handles how: safely, efficiently, with full observability.
Harness Flow
flowchart LR
U[User Prompt] --> C[CLI or React TUI]
C --> R[RuntimeBundle]
R --> Q[QueryEngine]
Q --> A[Anthropic-compatible API Client]
A -->|tool_use| T[Tool Registry]
T --> P[Permissions + Hooks]
P --> X[Files Shell Web MCP Tasks]
X --> Q
✨ Features
🔧 Tools (43+)
| Category | Tools | Description |
|---|---|---|
| File I/O | Bash, Read, Write, Edit, Glob, Grep | Core file operations with permission checks |
| Search | WebFetch, WebSearch, ToolSearch, LSP | Web and code search capabilities |
| Notebook | NotebookEdit | Jupyter notebook cell editing |
| Agent | Agent, SendMessage, TeamCreate/Delete | Subagent spawning and coordination |
| Task | TaskCreate/Get/List/Update/Stop/Output | Background task management |
| MCP | MCPTool, ListMcpResources, ReadMcpResource | Model Context Protocol integration |
| Mode | EnterPlanMode, ExitPlanMode, Worktree | Workflow mode switching |
| Schedule | CronCreate/List/Delete, RemoteTrigger | Scheduled and remote execution |
| Meta | Skill, Config, Brief, Sleep, AskUser | Knowledge loading, configuration, interaction |
Every tool has:
- Pydantic input validation: structured, type-safe inputs
- Self-describing JSON Schema: models understand tools automatically
- Permission integration: checked before every execution
- Hook support: PreToolUse/PostToolUse lifecycle events
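Because inputs are Pydantic models, the JSON Schema that describes a tool to the model can be derived directly from the input class. A sketch with a hypothetical input model (Pydantic v2 API):

```python
from pydantic import BaseModel, Field

class GrepInput(BaseModel):
    # Hypothetical input model in the style of OpenHarness tools.
    pattern: str = Field(description="Regex to search for")
    path: str = Field(default=".", description="Directory to search")

# Pydantic emits a JSON Schema the model can read as a tool description.
schema = GrepInput.model_json_schema()
print(schema["properties"]["pattern"]["description"])  # Regex to search for
print(schema["required"])  # ['pattern']
```

Fields with defaults drop out of the `required` list automatically, so the model knows which arguments it may omit.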
📚 Skills System
Skills are on-demand knowledge, loaded only when the model needs them:
Available Skills:
- commit: Create clean, well-structured git commits
- review: Review code for bugs, security issues, and quality
- debug: Diagnose and fix bugs systematically
- plan: Design an implementation plan before coding
- test: Write and run tests for code
- simplify: Refactor code to be simpler and more maintainable
- pdf: PDF processing with pypdf (from anthropics/skills)
- xlsx: Excel operations (from anthropics/skills)
- ... 40+ more
Compatible with anthropics/skills: just copy .md files to ~/.openharness/skills/.
🔌 Plugin System
Compatible with claude-code plugins. Tested with 12 official plugins:
| Plugin | Type | What it does |
|---|---|---|
| commit-commands | Commands | Git commit, push, PR workflows |
| security-guidance | Hooks | Security warnings on file edits |
| hookify | Commands + Agents | Create custom behavior hooks |
| feature-dev | Commands | Feature development workflow |
| code-review | Agents | Multi-agent PR review |
| pr-review-toolkit | Agents | Specialized PR review agents |
# Manage plugins
oh plugin list
oh plugin install <source>
oh plugin enable <name>
🤝 Ecosystem Workflows
OpenHarness is useful as a lightweight harness layer around Claude-style tooling conventions:
- OpenClaw-oriented workflows can reuse Markdown-first knowledge and command-driven collaboration patterns.
- Claude-style plugins and skills stay portable because OpenHarness keeps those formats familiar.
- ClawTeam-style multi-agent work maps well onto the built-in team, task, and background execution primitives.
For concrete usage ideas, see docs/SHOWCASE.md.
🛡️ Permissions
Multi-level safety with fine-grained control:
| Mode | Behavior | Use Case |
|---|---|---|
| Default | Ask before write/execute | Daily development |
| Auto | Allow everything | Sandboxed environments |
| Plan Mode | Block all writes | Large refactors, review first |
Path-level rules in settings.json:
{
"permission": {
"mode": "default",
"path_rules": [{"pattern": "/etc/*", "allow": false}],
"denied_commands": ["rm -rf /", "DROP TABLE *"]
}
}
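A sketch of how such path rules might be evaluated, assuming first-match-wins glob semantics (the actual permission engine may differ):

```python
from fnmatch import fnmatch

def is_path_allowed(path: str, path_rules: list, default: bool = True) -> bool:
    # Return the verdict of the first rule whose glob matches the path;
    # fall back to the default policy when nothing matches.
    for rule in path_rules:
        if fnmatch(path, rule["pattern"]):
            return rule["allow"]
    return default

rules = [{"pattern": "/etc/*", "allow": False}]
print(is_path_allowed("/etc/passwd", rules))      # False
print(is_path_allowed("/home/me/app.py", rules))  # True
```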
🖥️ Terminal UI
React/Ink TUI with full interactive experience:
- Command picker: type / → arrow keys to select → Enter
- Permission dialog: interactive y/n with tool details
- Mode switcher: /permissions → select from list
- Session resume: /resume → pick from history
- Animated spinner: real-time feedback during tool execution
- Keyboard shortcuts: shown at the bottom, context-aware
💡 CLI
oh [OPTIONS] COMMAND [ARGS]
Session: -c/--continue, -r/--resume, -n/--name
Model: -m/--model, --effort, --max-turns
Output: -p/--print, --output-format text|json|stream-json
Permissions: --permission-mode, --dangerously-skip-permissions
Context: -s/--system-prompt, --append-system-prompt, --settings
Advanced: -d/--debug, --mcp-config, --bare
Subcommands: oh mcp | oh plugin | oh auth
📊 Test Results
| Suite | Tests | Status |
|---|---|---|
| Unit + Integration | 114 | ✅ All passing |
| CLI Flags E2E | 6 | ✅ Real model calls |
| Harness Features E2E | 9 | ✅ Retry, skills, parallel, permissions |
| React TUI E2E | 3 | ✅ Welcome, conversation, status |
| TUI Interactions E2E | 4 | ✅ Commands, permissions, shortcuts |
| Real Skills + Plugins | 12 | ✅ anthropics/skills + claude-code/plugins |
# Run all tests
uv run pytest -q # 114 unit/integration
python scripts/test_harness_features.py # Harness E2E
python scripts/test_real_skills_plugins.py # Real plugins E2E
🔧 Extending OpenHarness
Add a Custom Tool
from pydantic import BaseModel, Field
from openharness.tools.base import BaseTool, ToolExecutionContext, ToolResult

class MyToolInput(BaseModel):
    query: str = Field(description="Search query")

class MyTool(BaseTool):
    name = "my_tool"
    description = "Does something useful"
    input_model = MyToolInput

    async def execute(self, arguments: MyToolInput, context: ToolExecutionContext) -> ToolResult:
        return ToolResult(output=f"Result for: {arguments.query}")
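The same validate-then-execute flow, shown standalone so it runs without OpenHarness installed (the context argument and ToolResult wrapper are omitted here; the real BaseTool interface requires them):

```python
import asyncio
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    query: str = Field(description="Search query")

async def execute(arguments: MyToolInput) -> str:
    return f"Result for: {arguments.query}"

# The harness first validates the raw, model-supplied arguments...
args = MyToolInput.model_validate({"query": "open agent harness"})
# ...then awaits the tool's execute() from inside the agent loop.
print(asyncio.run(execute(args)))  # Result for: open agent harness
```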
Add a Custom Skill
Create ~/.openharness/skills/my-skill.md:
---
name: my-skill
description: Expert guidance for my specific domain
---
# My Skill
## When to use
Use when the user asks about [your domain].
## Workflow
1. Step one
2. Step two
...
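At load time the harness reads the frontmatter to know when a skill applies. A rough sketch of frontmatter extraction (the real loader likely uses a proper YAML parser; this handles only simple key: value lines):

```python
def parse_skill(text: str):
    # Split "---\n<frontmatter>\n---\n<body>" into metadata and body.
    _, front, body = text.split("---", 2)
    meta = {}
    for line in front.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill = "---\nname: my-skill\ndescription: Expert guidance\n---\n# My Skill"
meta, body = parse_skill(skill)
print(meta["name"])  # my-skill
print(body)          # # My Skill
```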
Add a Plugin
Create .openharness/plugins/my-plugin/.claude-plugin/plugin.json:
{
"name": "my-plugin",
"version": "1.0.0",
"description": "My custom plugin"
}
Add commands in commands/*.md, hooks in hooks/hooks.json, agents in agents/*.md.
🌟 Showcase
OpenHarness is most useful when treated as a small, inspectable harness you can adapt to a real workflow:
- Repo coding assistant for reading code, patching files, and running checks locally.
- Headless scripting tool for json and stream-json output in automation flows.
- Plugin and skill testbed for experimenting with Claude-style extensions.
- Multi-agent prototype harness for task delegation and background execution.
- Provider comparison sandbox across Anthropic-compatible backends.
See docs/SHOWCASE.md for short, reproducible examples.
🤝 Contributing
OpenHarness is a community-driven research project. We welcome contributions in:
| Area | Examples |
|---|---|
| Tools | New tool implementations for specific domains |
| Skills | Domain knowledge .md files (finance, science, DevOps...) |
| Plugins | Workflow plugins with commands, hooks, agents |
| Providers | Support for more LLM backends (OpenAI, Ollama, etc.) |
| Multi-Agent | Coordination protocols, team patterns |
| Testing | E2E scenarios, edge cases, benchmarks |
| Documentation | Architecture guides, tutorials, translations |
# Development setup
git clone https://github.com/HKUDS/OpenHarness.git
cd OpenHarness
uv sync --extra dev
uv run pytest -q # Verify everything works
Useful contributor entry points:
- CONTRIBUTING.md for setup, checks, and PR expectations
- CHANGELOG.md for user-visible changes
- docs/SHOWCASE.md for real-world usage patterns worth documenting
📄 License
MIT; see LICENSE.
Oh my Harness!
The model is the agent. The code is the harness.
Thanks for visiting ✨ OpenHarness!