A simple Python framework for creating AI agents with behavior tracking
# ConnectOnion

A simple, elegant open-source framework for production-ready AI agents.

**Philosophy:** "Keep simple things simple, make complicated things possible"

This is the core principle that drives every design decision in ConnectOnion.
## Living Our Philosophy

### Step 1: Simple - Create and Use

```python
from connectonion import Agent

agent = Agent(name="assistant")
agent.input("Hello!")  # That's it!
```

### Step 2: Add Your Tools

```python
def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"

agent = Agent(name="assistant", tools=[search])
agent.input("Search for Python tutorials")
```

### Step 3: Debug Your Agent

```python
agent = Agent(name="assistant", tools=[search])
agent.auto_debug()  # Interactive debugging session
```

### Step 4: Production Ready

```python
agent = Agent(
    name="production",
    model="gpt-5",                     # Latest models
    tools=[search, analyze, execute],  # Your functions as tools
    system_prompt=company_prompt,      # Custom behavior
    max_iterations=10,                 # Safety controls
    trust="prompt"                     # Multi-agent ready
)
agent.input("Complex production task")
```

### Step 5: Multi-Agent - Make It Remotely Callable

```python
from connectonion import host

host(agent)  # HTTP server + P2P relay: other agents can now discover and call this agent
```
## Why ConnectOnion?

Most frameworks give you a way to call LLMs. ConnectOnion gives you everything around it, so you only write prompt and tools.
### Built-in AI Programmer

```bash
co ai  # Opens a chat interface with an AI that deeply understands ConnectOnion
```

`co ai` is an AI coding assistant built with ConnectOnion. It writes working agent code because it knows the framework inside out. Fully open-source: inspect it, modify it, build your own.
### Built-in Frontend & Backend: Just Write Prompt and Tools

Traditional path: write agent logic → build FastAPI backend → build React frontend → wire APIs → deploy.

ConnectOnion path: write prompt and tools → deploy.

- Backend: the framework handles the API layer
- Frontend: chat.openonion.ai, a ready-to-use chat interface
- All open-source and customizable, but you don't start from zero
### Ready-to-Use Tool Ecosystem

Import and use: no schema writing, no interface wiring.

```python
from connectonion import bash, Shell                                   # Command execution
from connectonion.useful_tools import FileTools                        # File system (with safety tracking)
from connectonion.useful_tools.browser_tools import BrowserAutomation  # Natural-language browser automation
from connectonion import Gmail, Outlook                                # Email
from connectonion import GoogleCalendar                                # Calendar
from connectonion import Memory                                        # Persistent memory
from connectonion import TodoList                                      # Task tracking
```

Need to customize? Copy the source into your project:

```bash
co copy Gmail  # Copies the Gmail tool source code to your project for modification
```
### Built-in Approval System

Dangerous operations (bash commands, file deletion) automatically trigger approval; no permission logic needed from you.

```python
from connectonion import Agent, bash
from connectonion.useful_plugins import tool_approval, shell_approval

agent = Agent("assistant", tools=[bash], plugins=[shell_approval])
# Shell commands now require approval before execution
```

Plugin-based: turn it off, customize it, or replace it entirely.
### Skills System: Auto-Discovery, Claude Code Compatible

Reusable workflows with automatic permission scoping:

```python
from connectonion.useful_plugins import skills

agent = Agent("assistant", tools=[file_tools], plugins=[skills])
# User types /commit → skill loads → git commands auto-approved → permission cleared after execution
```

Three-level auto-discovery (project → user → built-in):

```
.co/skills/skill-name/SKILL.md    # Project-level (highest priority)
~/.co/skills/skill-name/SKILL.md  # User-level
builtin/skill-name/SKILL.md       # Built-in
```

Automatically loads Claude Code skills from `.claude/skills/`; no conversion needed.
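The three-level lookup described above can be sketched as a first-match search over skill directories. `resolve_skill` and the directory list are illustrative assumptions, not ConnectOnion's actual API:

```python
# Sketch of three-level skill discovery: project-level wins over user-level,
# which wins over built-in. Hypothetical helper, not the framework's code.
from pathlib import Path

def resolve_skill(name, skill_dirs):
    """Return the first SKILL.md found, searching highest priority first."""
    for d in skill_dirs:
        candidate = Path(d) / name / "SKILL.md"
        if candidate.is_file():
            return candidate
    return None

# Priority order: project, then user, then built-in
search_order = [".co/skills", Path.home() / ".co/skills", "builtin"]
```

A skill found in `.co/skills/` shadows one with the same name at the user or built-in level.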
### 12 Lifecycle Hooks + Plugin System

Inject logic at any point in the agent execution cycle:

```python
from connectonion import Agent, after_tools, llm_do
from connectonion.useful_plugins import re_act, eval, auto_compact, subagents, ulw

# Built-in plugins: the same capabilities as Claude Code, open to any agent
agent = Agent("researcher", tools=[search], plugins=[
    re_act,        # Reflect + plan after each tool call
    auto_compact,  # Auto-compress context at 90% capacity
    subagents,     # Spawn sub-agents with independent tools and prompts
    ulw,           # Ultra Light Work: fully autonomous mode
])
```

These plugins mirror Claude Code's internal capabilities: auto_compact, subagents, and ulw correspond directly to Claude Code's context compression, sub-agent spawning, and autonomous work mode. ConnectOnion makes these capabilities available to any agent you build.

Hooks: `after_user_input`, `before_iteration`, `before_llm`, `after_llm`, `before_tools`, `before_each_tool`, `after_each_tool`, `after_tools`, `on_error`, `after_iteration`, `on_stop_signal`, `on_complete`

Plugins are just lists of event handlers: visible, modifiable, `co copy`-able.
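The "plugin is just a list of event handlers" idea can be sketched in a few lines. The `hook` decorator and `dispatch` function below are illustrative, not ConnectOnion's internals:

```python
# Minimal sketch of the plugin-as-handler-list pattern (hypothetical names).

def hook(event_name):
    """Tag a handler with the lifecycle event it responds to."""
    def wrap(fn):
        fn.event = event_name
        return fn
    return wrap

@hook("after_tools")
def log_tools(state):
    state["log"].append("tools finished")

@hook("on_complete")
def log_done(state):
    state["log"].append("run complete")

# A plugin is just a list of handlers
logging_plugin = [log_tools, log_done]

def dispatch(event_name, plugins, state):
    """Fire every handler registered for this lifecycle event."""
    for plugin in plugins:
        for handler in plugin:
            if handler.event == event_name:
                handler(state)

state = {"log": []}
dispatch("after_tools", [logging_plugin], state)
dispatch("on_complete", [logging_plugin], state)
# state["log"] is now ["tools finished", "run complete"]
```

Because a plugin is only a list, composing plugins is list concatenation and inspecting one is just reading its handlers.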
### Multi-Agent Trust System (Fast Rules)

When agents call each other, trust decisions happen before any LLM involvement: zero token cost for 90% of cases.

```python
agent = Agent(
    name="production",
    trust="careful"  # whitelisted → allow, unknown → ask LLM, blocked → deny
)
```

Three presets: `open` (dev), `careful` (staging), `strict` (production).
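The fast-rule check can be sketched as a plain lookup that only escalates unknown callers to the LLM. The rule sets and `trust_decision` function are illustrative assumptions, not the framework's API:

```python
# Hypothetical sketch of pre-LLM trust rules: known callers are decided
# instantly; only unknown callers cost tokens.

WHITELIST = {"agent://billing", "agent://search"}
BLOCKLIST = {"agent://spam-bot"}

def trust_decision(caller: str) -> str:
    """whitelisted -> allow, blocked -> deny, unknown -> escalate to LLM."""
    if caller in WHITELIST:
        return "allow"    # zero-token fast path
    if caller in BLOCKLIST:
        return "deny"     # zero-token fast path
    return "ask_llm"      # only this branch involves the model
```

The presets would then differ only in how the unknown branch behaves (allow in `open`, ask in `careful`, deny in `strict`).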
## Join the Community
Get help, share agents, and discuss with 1000+ builders in our active community.
## Quick Start

### Installation

```bash
pip install connectonion
```

### Quickest Start - Use the CLI

```bash
# Create a new agent project with one command
co create my-agent

# Navigate and run
cd my-agent
python agent.py
```

The CLI guides you through API key setup automatically. No manual .env editing needed!
### Manual Usage

```python
import os
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: never eval untrusted input in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default; the agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials")
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())
```
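The `calculate` tool above uses `eval` for brevity. A safer sketch (an assumption, not part of ConnectOnion) walks the expression's AST and permits only numeric literals and basic arithmetic:

```python
# Safe drop-in for the eval-based calculate tool: no names, calls, or
# attribute access can sneak through, only numbers and arithmetic operators.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_calculate("25 * 4"))  # 100
```

It has the same signature as `calculate`, so it can be passed to `tools=[...]` unchanged.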
## Interactive Debugging with @xray

Debug your agents like you debug code: pause at breakpoints, inspect variables, and test edge cases.

```python
from connectonion import Agent
from connectonion.decorators import xray

# Mark tools you want to debug with @xray
@xray
def search_database(query: str) -> str:
    """Search for information."""
    return f"Found 3 results for '{query}'"

@xray
def send_email(to: str, subject: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

# Create agent with @xray tools
agent = Agent(
    name="debug_demo",
    tools=[search_database, send_email]
)

# Launch interactive debugging session
agent.auto_debug()

# Or debug a specific task
agent.auto_debug("Search for Python tutorials and email the results")
```
What happens at each @xray breakpoint:

```
──────────────────────────────────────────────
@xray BREAKPOINT: search_database

Local Variables:
  query = "Python tutorials"
  result = "Found 3 results for 'Python tutorials'"

What do you want to do?
  Continue execution  [c or Enter]
  Edit values         [e]
  Quit debugging      [q]

Use arrow keys to navigate or type shortcuts
>
```
Key features:

- Pause at breakpoints: tools decorated with `@xray` pause execution
- Inspect state: see all local variables and execution context
- Edit variables: modify results to test "what if" scenarios
- Full Python REPL: run any code to explore agent behavior
- See next action: preview what the LLM plans to do next
Perfect for:
- Understanding why agents make certain decisions
- Testing edge cases without modifying code
- Exploring agent behavior interactively
- Debugging complex multi-tool workflows
Learn more in the auto_debug guide
## Plugin System

Package reusable capabilities as plugins and use them across multiple agents:

```python
from connectonion import Agent, after_tools, llm_do

# Define a reflection plugin
def add_reflection(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution' and trace['status'] == 'success':
        result = trace['result']
        reflection = llm_do(
            f"Result: {result[:200]}\n\nWhat did we learn?",
            system_prompt="Be concise.",
            temperature=0.3
        )
        agent.current_session['messages'].append({
            'role': 'assistant',
            'content': f"Reflection: {reflection}"
        })

# A plugin is just a list of event handlers
reflection = [after_tools(add_reflection)]  # after_tools fires once after all tools

# Use across multiple agents
researcher = Agent("researcher", tools=[search], plugins=[reflection])
analyst = Agent("analyst", tools=[analyze], plugins=[reflection])
```
What plugins provide:

- Reusable capabilities: package event handlers into bundles
- Simple pattern: a plugin is just a list of event handlers
- Easy composition: combine multiple plugins together
- Built-in plugins: `re_act`, `eval`, `system_reminder`, `image_result_formatter`, and more

Built-in plugins are ready to use:

```python
from connectonion.useful_plugins import re_act, system_reminder

agent = Agent("assistant", tools=[search], plugins=[re_act, system_reminder])
```

Learn more about plugins | Built-in plugins
## Core Concepts

### Agent

The main class that orchestrates LLM calls and tool usage. Each agent:

- Has a unique name for tracking purposes
- Can be given a custom personality via `system_prompt`
- Automatically converts functions to tools
- Records all behavior to JSON files

### Function-Based Tools

NEW: Just write regular Python functions! ConnectOnion automatically converts them to tools:

```python
def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])
```
Key features:
- Automatic Schema Generation: Type hints become OpenAI function schemas
- Docstring Integration: First line becomes tool description
- Parameter Handling: Supports required and optional parameters
- Type Conversion: Handles different return types automatically
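As an illustration of the schema-generation step, a small sketch can derive an OpenAI-style schema from the same function. `tool_schema` is a hypothetical helper, not ConnectOnion's actual code:

```python
# Sketch: type hints and the first docstring line become a tool schema.
import inspect

_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style function schema from a typed Python function."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required parameter
    return {
        "name": fn.__name__,
        "description": fn.__doc__.strip().splitlines()[0],
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

schema = tool_schema(my_tool)
```

Here `schema["parameters"]["required"]` contains only `param`, because `optional_param` has a default.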
### System Prompts

Define your agent's personality and behavior with flexible input options:

```python
# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using a Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)

# 4. None for default prompt
agent = Agent("basic_agent")  # Uses default: "You are a helpful assistant..."
```

Example prompt file (`prompts/customer_support.md`):

```markdown
# Customer Support Agent

You are a senior customer support specialist with expertise in:

- Empathetic communication
- Problem-solving
- Technical troubleshooting

## Guidelines

- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions
```
### Logging

Automatic logging of all agent activities, including:

- User inputs and agent responses
- LLM calls with timing
- Tool executions with parameters and results
- Default storage in `.co/logs/{name}.log` (human-readable format)
## Example Tools

You can still use the traditional Tool class approach, but the function-based approach is much simpler:

### Traditional Tool Classes (Still Supported)

```python
from connectonion.tools import Calculator, CurrentTime, ReadFile

agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])
```

### Function-Based Approach (Recommended)

```python
def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: never eval untrusted input in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])
```

The function-based approach is simpler, more Pythonic, and easier to test!
## CLI Templates

ConnectOnion CLI provides templates to get you started quickly:

```bash
# Create a minimal agent (default)
co create my-agent

# Create with a specific template
co create my-playwright-bot --template playwright

# Initialize in an existing directory
co init                        # Adds .co folder only
co init --template playwright  # Adds full template
```

Available templates:

- `minimal` (default): simple agent starter
- `playwright`: web automation with browser tools
- `meta-agent`: development assistant with docs search
- `web-research`: web research and data extraction

Each template includes:

- Pre-configured agent ready to run
- Automatic API key setup
- Embedded ConnectOnion documentation
- Git-ready `.gitignore`

Learn more in the CLI Documentation and Templates Guide.
## Creating Custom Tools

The simplest way is to use functions (recommended):

```python
def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22°C"

# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])
```

Or use the Tool class for more control:

```python
from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )

    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22°C"

    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])
```
## Project Structure

```
connectonion/
├── connectonion/
│   ├── __init__.py          # Main exports
│   ├── agent.py             # Agent class
│   ├── tools.py             # Tool interface and built-ins
│   ├── llm.py               # LLM interface and OpenAI implementation
│   ├── console.py           # Terminal output and logging
│   └── cli/                 # CLI module
│       ├── main.py          # CLI commands
│       ├── docs.md          # Embedded documentation
│       └── templates/       # Agent templates
│           ├── basic_agent.py
│           ├── chat_agent.py
│           ├── data_agent.py
│           └── *.md         # Prompt templates
├── docs/                    # Documentation
│   ├── quickstart.md
│   ├── concepts/            # Core concepts
│   ├── cli/                 # CLI commands
│   ├── templates/           # Project templates
│   └── ...
├── examples/
│   └── basic_example.py
├── tests/
│   └── test_agent.py
└── pyproject.toml
```
## Running Tests

```bash
python -m pytest tests/
```

Or run individual test files:

```bash
python -m unittest tests.test_agent
```
## Automatic Logging

All agent activities are automatically logged to:

```
.co/logs/{agent_name}.log  # Default location
```

Each log entry includes:

- Timestamp
- User input
- LLM calls with timing
- Tool executions with parameters and results
- Final responses

Control logging behavior:

```python
# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")

# Log to current directory
agent = Agent("assistant", log=True)  # → assistant.log

# Disable logging
agent = Agent("assistant", log=False)

# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")
```
## Configuration

### OpenAI API Key

Set your API key via environment variable:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Or pass it directly to the agent:

```python
agent = Agent(name="test", api_key="your-api-key-here")
```

### Model Selection

```python
agent = Agent(name="test", model="gpt-5")  # Default: gpt-5-mini
```

### Iteration Control

Control how many tool-calling iterations an agent can perform:

```python
# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher",
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc",
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)
```

When an agent reaches its iteration limit, it returns:

```
Task incomplete: Maximum iterations (10) reached.
```

Choosing the right limit:

- Simple tasks (1-3 tools): 5-10 iterations
- Standard workflows: 10-15 iterations (default: 10)
- Complex analysis: 20-30 iterations
- Research/multi-step: 30+ iterations
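The limit behavior described above can be sketched as a plain loop. `run_agent` is a hypothetical stand-in for the agent's internal iteration loop, not ConnectOnion's implementation:

```python
# Sketch of the iteration guard: at most max_iterations tool-calling rounds,
# then the limit message shown above is returned instead of a result.

def run_agent(tool_calls_needed: int, max_iterations: int = 10) -> str:
    for iteration in range(1, max_iterations + 1):
        # ... one LLM call + tool execution round would happen here ...
        if iteration >= tool_calls_needed:
            return "Task complete."
    return f"Task incomplete: Maximum iterations ({max_iterations}) reached."
```

A task needing 3 rounds finishes under the default limit; one needing 50 hits the guard unless `max_iterations` is raised for that request.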
## Advanced Usage

### Multiple Tool Calls

Agents can chain multiple tool calls automatically:

```python
result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# Agent will use the calculate tool first, then the get_time tool
```

### Custom LLM Providers

```python
from connectonion.llm import LLM

class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Your custom LLM implementation
        pass

agent = Agent(name="test", llm=CustomLLM())
```
## Roadmap

Current focus:

- Multi-agent networking (serve/connect)
- Trust system for agent collaboration
- `co deploy` for one-command deployment

Recently completed:

- Multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, Grok, OpenRouter)
- Managed API keys (`co/` prefix)
- Plugin system
- Google OAuth integration
- Interactive debugging (`@xray`, `auto_debug`)

See the full roadmap for details.
## Connect With Us

- Discord: Join our community to get help, share ideas, and meet other developers
- Documentation: docs.connectonion.com, with comprehensive guides and examples
- GitHub: Star the repo to show your support
- Issues: Report bugs; we respond quickly

## Show Your Support

If ConnectOnion helps you build better agents, give it a star!

It helps others discover the framework and motivates us to keep improving it.

## Contributing

We welcome contributions! ConnectOnion is open source and community-driven.

- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request

See our Contributing Guide for more details.

## License

MIT License: use it anywhere, even commercially. See the LICENSE file for details.

Built with ❤️ by the open-source community.

Star this repo • Join Discord • Read Docs