
Copex - Copilot Extended


A resilient Python wrapper for the GitHub Copilot SDK with automatic retry, Ralph Wiggum loops, session persistence, metrics, parallel tools, and MCP integration.

Features

  • 🔄 Automatic Retry - Handles 500 errors, rate limits, and transient failures with exponential backoff
  • 🚀 Auto-Continue - Automatically sends "Keep going" on any error
  • 🔁 Ralph Wiggum Loops - Iterative AI development with completion promises
  • 💾 Session Persistence - Save/restore conversation history to disk
  • 📍 Checkpointing - Resume interrupted Ralph loops after crashes
  • 📊 Metrics & Logging - Track token usage, timing, and costs
  • ⚡ Parallel Tools - Execute multiple tool calls concurrently
  • 🔌 MCP Integration - Connect to external MCP servers for extended capabilities
  • 🎯 Model Selection - Easy switching between GPT-5.2-codex, Claude, Gemini, and more
  • 🧠 Reasoning Effort - Configure reasoning depth from none to xhigh
  • 💻 Beautiful CLI - Rich terminal output with markdown rendering
  • 🖥️ TUI - Full-screen terminal UI with command palette (copex tui)

Installation

pip install copex

Or install from source:

git clone https://github.com/Arthur742Ramos/copex
cd copex
pip install -e .

Prerequisites

Copex requires Python 3.10+ and the GitHub Copilot CLI. Copex automatically detects the Copilot CLI path on Windows, macOS, and Linux. If auto-detection fails, you can specify the path manually:

config = CopexConfig(cli_path="/path/to/copilot")

Or check detection:

from copex import find_copilot_cli
print(f"Found CLI at: {find_copilot_cli()}")

Quick Start

Python API

import asyncio
from copex import Copex, CopexConfig, Model, ReasoningEffort

async def main():
    # Simple usage with defaults (gpt-5.2-codex, xhigh reasoning)
    async with Copex() as copex:
        response = await copex.chat("Explain async/await in Python")
        print(response)

    # Custom configuration
    config = CopexConfig(
        model=Model.GPT_5_2_CODEX,
        reasoning_effort=ReasoningEffort.XHIGH,
        retry={"max_retries": 10, "base_delay": 2.0},
        auto_continue=True,
    )
    
    async with Copex(config) as copex:
        # Get full response object with metadata
        response = await copex.send("Write a binary search function")
        print(f"Content: {response.content}")
        print(f"Reasoning: {response.reasoning}")
        print(f"Retries needed: {response.retries}")

asyncio.run(main())

Ralph Wiggum Loops

The Ralph Wiggum technique enables iterative AI development:

import asyncio
from copex import Copex, RalphWiggum

async def main():
    async with Copex() as copex:
        ralph = RalphWiggum(copex)
        
        result = await ralph.loop(
            prompt="Build a REST API with CRUD operations and tests",
            completion_promise="ALL TESTS PASSING",
            max_iterations=30,
        )
        
        print(f"Completed in {result.iteration} iterations")
        print(f"Reason: {result.completion_reason}")

How it works:

  1. The same prompt is fed to the AI repeatedly
  2. The AI sees its previous work in conversation history
  3. It iteratively improves until outputting <promise>COMPLETION TEXT</promise>
  4. The loop ends when the promise matches or max iterations are reached (a sketch of the promise check follows)
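
A minimal sketch of the promise check described in step 4 (Copex's actual tag parsing may differ; this only illustrates the convention):

import re

def promise_fulfilled(output: str, completion_promise: str) -> bool:
    """Return True if the model emitted <promise>...</promise> matching the target."""
    match = re.search(r"<promise>(.*?)</promise>", output, re.DOTALL)
    return match is not None and match.group(1).strip() == completion_promise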

Terminal UI (TUI)

Run the full-screen terminal UI:

copex tui --model gpt-5.2-codex --reasoning xhigh

Key bindings (highlights):

  • Ctrl+P: command palette
  • Ctrl+J: insert newline (multiline input)
  • Enter: send
  • Esc: close palette / go back
  • Ctrl+C: cancel streaming
  • Ctrl+Q: quit

Skills, Instructions & MCP

Copex is fully compatible with Copilot SDK features:

from copex import Copex, CopexConfig, Model, ReasoningEffort

config = CopexConfig(
    model=Model.GPT_5_2_CODEX,
    reasoning_effort=ReasoningEffort.XHIGH,
    
    # Enable skills
    skills=["code-review", "api-design", "security"],
    
    # Custom instructions
    instructions="Follow PEP 8. Use type hints. Prefer dataclasses.",
    # Or load from file:
    # instructions_file=".copilot/instructions.md",
    
    # MCP servers (inline or from file)
    mcp_servers=[
        {"name": "github", "url": "https://api.github.com/mcp/"},
    ],
    # mcp_config_file=".copex/mcp.json",
    
    # Tool filtering
    available_tools=["repos", "issues", "code_security"],
    excluded_tools=["delete_repo"],
)

async with Copex(config) as copex:
    response = await copex.chat("Review this code for security issues")

Streaming

async def stream_example():
    async with Copex() as copex:
        async for chunk in copex.stream("Write a REST API"):
            if chunk.type == "message":
                print(chunk.delta, end="", flush=True)
            elif chunk.type == "reasoning":
                print(f"[thinking: {chunk.delta}]", end="")

CLI Usage

Single prompt

# Basic usage
copex chat "Explain Docker containers"

# With options
copex chat "Write a Python web scraper" \
    --model gpt-5.2-codex \
    --reasoning xhigh \
    --max-retries 10

# From stdin (for long prompts)
cat prompt.txt | copex chat

# Show reasoning output
copex chat "Solve this algorithm" --show-reasoning

# Raw output (for piping)
copex chat "Write a bash script" --raw > script.sh

Ralph Wiggum loop

# Run iterative development loop
copex ralph "Build a calculator with tests" --promise "ALL TESTS PASSING" -n 20

# Without completion promise (runs until max iterations)
copex ralph "Improve code coverage" --max-iterations 10

Interactive mode

copex interactive

# With specific model
copex interactive --model claude-sonnet-4.5 --reasoning high

Interactive slash commands:

  • /model <name> - Change model
  • /reasoning <level> - Change reasoning effort
  • /models - List available models
  • /new - Start a new session
  • /status - Show current settings
  • /tools - Toggle full tool call list
  • /help - Show commands

Other commands

# List available models
copex models

# Create default config file
copex init

# List available skills (auto-discovered)
copex skills list

# Show skill content
copex skills show code-review

Skills Management

Copex auto-discovers skills from:

  • .github/skills/ (in repo)
  • .claude/skills/ (in repo, Claude Code compatibility)
  • .copex/skills/ (in repo)
  • ~/.config/copex/skills/ (personal skills)

# List all discovered skills
copex skills list

# Show a specific skill
copex skills show my-skill

# Add explicit skill directory
copex chat "Do something" --skill-dir ./my-skills

# Disable a specific skill
copex chat "Do something" --disable-skill broken-skill

# Disable auto-discovery
copex chat "Do something" --no-auto-skills

The same flags work on interactive, ralph, and plan commands.

Configuration

Create a config file at ~/.config/copex/config.toml:

model = "gpt-5.2-codex"
reasoning_effort = "xhigh"
streaming = true
timeout = 300.0
auto_continue = true
continue_prompt = "Keep going"

# Skills to enable (named skills)
skills = ["code-review", "api-design", "test-writer"]

# Skills auto-discovery
auto_discover_skills = true  # Auto-discover from repo and user dirs
skill_directories = []       # Explicit skill directories to add
disabled_skills = []         # Skills to disable by name

# Custom instructions (inline or file path)
instructions = "Follow our team coding standards. Prefer functional programming."
# instructions_file = ".copilot/instructions.md"

# MCP server config file
# mcp_config_file = ".copex/mcp.json"

# Tool filtering
# available_tools = ["repos", "issues", "code_security"]
excluded_tools = []

[retry]
max_retries = 5
retry_on_any_error = true
base_delay = 1.0
max_delay = 30.0
exponential_base = 2.0
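
Assuming standard exponential backoff, these four settings combine as delay = min(base_delay * exponential_base ** attempt, max_delay). Copex's exact formula (e.g. any added jitter) may differ; an illustrative sketch:

def backoff_delay(attempt: int, base_delay: float = 1.0,
                  exponential_base: float = 2.0, max_delay: float = 30.0) -> float:
    """Delay in seconds before retry number `attempt` (0-indexed), capped at max_delay."""
    return min(base_delay * exponential_base ** attempt, max_delay)

print([backoff_delay(n) for n in range(6)])  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]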

Available Models

Model                 Description
gpt-5.2-codex         Latest Codex model (default)
gpt-5.1-codex         Previous Codex version
gpt-5.1-codex-max     High-capacity Codex
gpt-5.1-codex-mini    Fast, lightweight Codex
claude-sonnet-4.5     Claude Sonnet 4.5
claude-sonnet-4       Claude Sonnet 4
claude-opus-4.5       Claude Opus (premium)
gemini-3-pro-preview  Gemini 3 Pro

Reasoning Effort Levels

Level   Description
none    No extended reasoning
low     Minimal reasoning
medium  Balanced reasoning
high    Deep reasoning
xhigh   Maximum reasoning (best for complex tasks)

Note: xhigh is only supported on GPT/Codex models from gpt-5.2 onward (e.g. gpt-5.2 and gpt-5.2-codex). If you request xhigh on another model (e.g. Claude), Copex downgrades it to high and emits a warning.
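
The downgrade rule, expressed as an illustrative sketch (the function name and model check here are hypothetical, not Copex internals):

import warnings

def effective_reasoning(model: str, requested: str) -> str:
    # Simplified check: treat gpt-5.2-era GPT/Codex models as xhigh-capable
    supports_xhigh = model.startswith("gpt-5.2")
    if requested == "xhigh" and not supports_xhigh:
        warnings.warn(f"{model} does not support xhigh; downgrading to high")
        return "high"
    return requested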

Error Handling

By default, Copex retries on any error (retry_on_any_error=True).

You can also be specific:

config = CopexConfig(
    retry={
        "retry_on_any_error": False,
        "max_retries": 10,
        "retry_on_errors": ["500", "timeout", "rate limit"],
    }
)

Contributing

Contributions welcome! Please open an issue or PR at github.com/Arthur742Ramos/copex.

License

MIT


Advanced Features

Session Persistence

Save and restore conversation history:

from copex import Copex, SessionStore, PersistentSession

store = SessionStore()  # Saves to ~/.copex/sessions/

# Create a persistent session
session = PersistentSession("my-project", store)

async with Copex() as copex:
    response = await copex.chat("Hello!")
    session.add_user_message("Hello!")
    session.add_assistant_message(response)
    # Auto-saved to disk

# Later, restore it
session = PersistentSession("my-project", store)
print(session.messages)  # Previous messages loaded

Checkpointing (Crash Recovery)

Resume Ralph loops after interruption:

from copex import Copex, CheckpointStore, CheckpointedRalph

store = CheckpointStore()  # Saves to ~/.copex/checkpoints/

async with Copex() as copex:
    ralph = CheckpointedRalph(copex, store, loop_id="my-api-project")
    
    # Automatically resumes from last checkpoint if interrupted
    result = await ralph.loop(
        prompt="Build a REST API with tests",
        completion_promise="ALL TESTS PASSING",
        max_iterations=30,
        resume=True,  # Resume from checkpoint
    )

Metrics & Cost Tracking

Track token usage and estimate costs:

from copex import Copex, MetricsCollector

collector = MetricsCollector()

async with Copex() as copex:
    # Track a request
    req = collector.start_request(
        model="gpt-5.2-codex",
        prompt="Write a function..."
    )
    
    response = await copex.chat("Write a function...")
    
    collector.complete_request(
        req.request_id,
        success=True,
        response=response,
    )

# Get summary
print(collector.print_summary())
# Session: 20260117_170000
# Requests: 5 (5 ok, 0 failed)
# Success Rate: 100.0%
# Total Tokens: 12,450
# Estimated Cost: $0.0234

# Export metrics
collector.export_json("metrics.json")
collector.export_csv("metrics.csv")

Parallel Tools

Execute multiple tools concurrently:

from copex import Copex, ParallelToolExecutor

executor = ParallelToolExecutor()

@executor.tool("get_weather", "Get weather for a city")
async def get_weather(city: str) -> str:
    return f"Weather in {city}: Sunny, 72°F"

@executor.tool("get_time", "Get time in timezone")
async def get_time(timezone: str) -> str:
    return f"Time in {timezone}: 2:30 PM"

# Tools execute in parallel when AI calls multiple at once
async with Copex() as copex:
    response = await copex.send(
        "What's the weather in Seattle and the time in PST?",
        tools=executor.get_tool_definitions(),
    )

MCP Server Integration

Connect to external MCP servers:

from copex import Copex, MCPManager, MCPServerConfig

manager = MCPManager()

# Add MCP servers
manager.add_server(MCPServerConfig(
    name="github",
    command="npx",
    args=["-y", "@github/mcp-server"],
    env={"GITHUB_TOKEN": "..."},
))

manager.add_server(MCPServerConfig(
    name="filesystem",
    command="npx", 
    args=["-y", "@anthropic/mcp-server-filesystem", "/path/to/dir"],
))

await manager.connect_all()

# Get all tools from all servers
all_tools = manager.get_all_tools()

# Call a tool
result = await manager.call_tool("github:search_repos", {"query": "copex"})

await manager.disconnect_all()

MCP Config File (~/.copex/mcp.json):

{
  "servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@github/mcp-server"],
      "env": {"GITHUB_TOKEN": "your-token"}
    },
    "browser": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-server-puppeteer"]
    }
  }
}
Load the config programmatically:

from copex import load_mcp_config, MCPManager

configs = load_mcp_config()  # Loads from ~/.copex/mcp.json
manager = MCPManager()
for config in configs:
    manager.add_server(config)
await manager.connect_all()

