
A simple Python framework for creating AI agents with behavior tracking


🧅 ConnectOnion

Production Ready • License: MIT • Python 3.8+ • Discord • Documentation

A simple, elegant open-source framework for production-ready AI agents

📚 Documentation • 💬 Discord • ⭐ Star Us


🌟 Philosophy: "Keep simple things simple, make complicated things possible"

This is the core principle that drives every design decision in ConnectOnion.

🎯 Living Our Philosophy

# Simple thing (2 lines) - Just works!
from connectonion import Agent
agent = Agent("assistant").input("Hello!")

# Complicated thing (still possible) - Production ready!
agent = Agent("production",
              model="gpt-5",                    # Latest models
              tools=[search, analyze, execute], # Your functions as tools
              system_prompt=company_prompt,     # Custom behavior
              max_iterations=10,                # Safety controls
              trust="prompt")                    # Multi-agent ready

✨ What Makes ConnectOnion Special

  • 🎯 Simple API: Just one Agent class and your functions as tools
  • 🚀 Production Ready: Battle-tested with GPT-5, Gemini 2.5, Claude Opus 4.1
  • 🌍 Open Source: MIT licensed, community-driven development
  • ⚡ No Boilerplate: Start building in 2 lines, not 200
  • 🔧 Extensible: Scale from prototypes to production systems

🚀 Quick Start

Installation

pip install connectonion

Quickest Start - Use the CLI

# Create a new agent project with one command
co init

# Follow the prompts to set up your API key and run
cp .env.example .env  # Add your OpenAI API key
python agent.py

Manual Usage

import os  
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Note: eval is unsafe on untrusted input; use a safe expression parser in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials") 
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())

🔧 Core Concepts

Agent

The main class that orchestrates LLM calls and tool usage. Each agent:

  • Has a unique name for tracking purposes
  • Can be given a custom personality via system_prompt
  • Automatically converts functions to tools
  • Records all behavior to JSON files

Function-Based Tools

NEW: Just write regular Python functions! ConnectOnion automatically converts them to tools:

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])

Key features:

  • Automatic Schema Generation: Type hints become OpenAI function schemas
  • Docstring Integration: First line becomes tool description
  • Parameter Handling: Supports required and optional parameters
  • Type Conversion: Handles different return types automatically
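
To make the mapping concrete, a function like my_tool above corresponds to roughly the following OpenAI function schema (an illustrative sketch in the same shape as the get_parameters_schema example later in this README; the exact JSON ConnectOnion generates may differ):

# Illustrative only: roughly the schema my_tool maps to.
my_tool_schema = {
    "name": "my_tool",
    "description": "This docstring becomes the tool description.",
    "parameters": {
        "type": "object",
        "properties": {
            "param": {"type": "string"},
            "optional_param": {"type": "integer"},
        },
        "required": ["param"],  # optional_param has a default, so it is not required
    },
}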

System Prompts

Define your agent's personality and behavior with flexible input options:

# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)

# 4. None for default prompt
agent = Agent("basic_agent")  # Uses default: "You are a helpful assistant..."

Example prompt file (prompts/customer_support.md):

# Customer Support Agent

You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting

## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions

Logging

Agent activity is logged automatically, including:

  • User inputs and agent responses
  • LLM calls with timing
  • Tool executions with parameters and results
  • Default storage in .co/logs/{name}.log (human-readable format)
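
Because the log is a plain, human-readable file, you can inspect it directly. A small sketch, assuming the default path above and an agent named my_assistant:

from pathlib import Path

# Print the activity log written for an agent named "my_assistant"
log_file = Path(".co/logs/my_assistant.log")
if log_file.exists():
    print(log_file.read_text())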

🎯 Example Tools

You can still use the traditional Tool class approach, but the new functional approach is much simpler:

Traditional Tool Classes (Still Supported)

from connectonion.tools import Calculator, CurrentTime, ReadFile

agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])

New Function-Based Approach (Recommended)

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Note: eval is unsafe on untrusted input; use a safe expression parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])

The function-based approach is simpler, more Pythonic, and easier to test!
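
For example, because tools are plain functions, they can be unit-tested without creating an agent or setting an API key. A minimal pytest sketch (my_tools is a hypothetical module name for the functions above):

# test_my_tools.py -- tools are ordinary functions, so no agent or API key is needed
from my_tools import calculate, get_time  # "my_tools" is an illustrative module name

def test_calculate():
    assert calculate("15 * 8") == 120

def test_get_time_year():
    # With a year-only format string, the result is a four-digit year
    assert len(get_time("%Y")) == 4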

🎨 CLI Templates

ConnectOnion CLI provides templates to get you started quickly:

# Basic agent with ConnectOnion knowledge
co init

# Conversational chat agent
co init --template chat

# Data analysis agent
co init --template data

# Web automation with Playwright
co init --template playwright

Each template includes:

  • Pre-configured agent with relevant tools
  • Customizable system prompt in prompt.md
  • Environment configuration template
  • Embedded ConnectOnion documentation
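
The environment configuration template typically only needs your model API key. A minimal sketch of the resulting .env, assuming the same OPENAI_API_KEY variable used elsewhere in this README:

# .env -- copied from .env.example; keep it out of version control
OPENAI_API_KEY=your-api-key-here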

Learn more in the CLI Documentation and Templates Guide.

🔨 Creating Custom Tools

The simplest way to create a tool is to write a plain function (recommended):

def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22ยฐC"

# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])

Or use the Tool class for more control:

from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )
    
    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22ยฐC"
    
    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])

๐Ÿ“ Project Structure

connectonion/
├── connectonion/
│   ├── __init__.py         # Main exports
│   ├── agent.py            # Agent class
│   ├── tools.py            # Tool interface and built-ins
│   ├── llm.py              # LLM interface and OpenAI implementation
│   ├── console.py          # Terminal output and logging
│   └── cli/                # CLI module
│       ├── main.py         # CLI commands
│       ├── docs.md         # Embedded documentation
│       └── templates/      # Agent templates
│           ├── basic_agent.py
│           ├── chat_agent.py
│           ├── data_agent.py
│           └── *.md        # Prompt templates
├── docs/                   # Documentation
│   ├── getting-started.md
│   ├── cli.md
│   ├── templates.md
│   └── ...
├── examples/
│   └── basic_example.py
├── tests/
│   └── test_agent.py
└── requirements.txt

🧪 Running Tests

python -m pytest tests/

Or run individual test files:

python -m unittest tests.test_agent

📊 Automatic Logging

All agent activities are automatically logged to:

.co/logs/{agent_name}.log  # Default location

Each log entry includes:

  • Timestamp
  • User input
  • LLM calls with timing
  • Tool executions with parameters and results
  • Final responses

Control logging behavior:

# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")

# Log to current directory
agent = Agent("assistant", log=True)  # → assistant.log

# Disable logging
agent = Agent("assistant", log=False)

# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")

🔑 Configuration

OpenAI API Key

Set your API key via environment variable:

export OPENAI_API_KEY="your-api-key-here"

Or pass directly to agent:

agent = Agent(name="test", api_key="your-api-key-here")

Model Selection

agent = Agent(name="test", model="gpt-5")  # Default: gpt-5-mini

Iteration Control

Control how many tool-calling iterations an agent can perform:

# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher", 
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc", 
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)

When an agent reaches its iteration limit, it returns:

"Task incomplete: Maximum iterations (10) reached."

Choosing the Right Limit:

  • Simple tasks (1-3 tools): 5-10 iterations
  • Standard workflows: 10-15 iterations (default: 10)
  • Complex analysis: 20-30 iterations
  • Research/multi-step: 30+ iterations

๐Ÿ› ๏ธ Advanced Usage

Multiple Tool Calls

Agents can chain multiple tool calls automatically:

result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# The agent calls the calculation tool first, then the time tool (see the sketch below)
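
For a self-contained version, reuse the calculate and get_time functions from the Example Tools section and check the recorded history afterwards (a sketch; the exact summary format may differ):

# Requires OPENAI_API_KEY to be set, as in the Quick Start
from connectonion import Agent

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # unsafe on untrusted input; use a safe parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

agent = Agent("chained_example", tools=[calculate, get_time])
result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
print(result)
print(agent.history.summary())  # should list both tool executions in order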

Custom LLM Providers

from connectonion.llm import LLM

class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Your custom LLM implementation
        pass

agent = Agent(name="test", llm=CustomLLM())

🚧 Current Limitations (MVP)

This is an MVP version with intentional limitations:

  • Single LLM provider (OpenAI)
  • Synchronous execution only
  • JSON file storage only
  • Basic error handling
  • No multi-agent collaboration

๐Ÿ—บ๏ธ Future Roadmap

  • Multiple LLM provider support (Anthropic, Local models)
  • Async/await support
  • Database storage options
  • Advanced memory systems
  • Multi-agent collaboration
  • Web interface for behavior monitoring
  • Plugin system for tools

🔗 Connect With Us

Discord • GitHub • Documentation

๐Ÿค Contributing

We welcome contributions! ConnectOnion is open source and community-driven.

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

See our Contributing Guide for more details.

📄 License

MIT License - Use it anywhere, even commercially. See LICENSE file for details.


💫 Remember

"Keep simple things simple, make complicated things possible"

Built with โค๏ธ by the open-source community
