Agency CLI
A modular AI coding agent framework that brings autonomous coding capabilities to your terminal. With support for multiple LLM providers, MCP plugins, fine-grained permission control, and team collaboration, Agency transforms how developers interact with AI assistants.
Overview
Agency is an intelligent command-line agent that leverages Large Language Models to autonomously execute development tasks. Unlike traditional chat-based AI assistants, Agency integrates deeply with your filesystem, git repository, and development tools, enabling it to understand context, manage tasks, and perform complex multi-step operations.
Key Differentiators
| Feature | Description |
|---|---|
| Autonomous Execution | Execute multiple tool calls in sequence without repeated user input |
| Multi-Provider LLM | Seamlessly switch between Anthropic, OpenAI, Ollama, and custom gateways |
| MCP Integration | Extend capabilities via the Model Context Protocol plugin ecosystem |
| Team Collaboration | Spawn multiple AI teammates that communicate and coordinate |
| Git Worktree | Parallel task execution in isolated Git branches |
| Risk-Aware | Intelligent permission system with Bash command validation |
Installation
pip install agency-cli
Prerequisites
- Python 3.9 or higher
- An LLM API key (Anthropic, OpenAI, or compatible gateway)
Quick Start
1. Configure API Credentials
# Interactive configuration wizard
agency config init --workspace /path/to/your/project
# Or set values directly
agency config set \
--provider anthropic \
--model claude-3-5-sonnet-20241022 \
--api-key your-api-key
2. Launch the Agent
# Start in current directory
agency
# Start in specific workspace
agency run --workspace /path/to/project
# Plan mode (read-only, no modifications)
agency run --workspace /path/to/project --permission-mode plan
3. Example Interactions
❯ Fix the authentication bug in src/auth.py
❯ Create a new REST API endpoint for user management
❯ Write unit tests for the payment module
❯ Refactor the database layer to use transactions
Core Features
Multi-LLM Support
Connect to various LLM providers through a unified interface:
# Anthropic (recommended)
agency config set --provider anthropic --model claude-3-5-sonnet-20241022
# OpenAI
agency config set --provider litellm --model gpt-4-turbo
# Local Ollama
agency config set --provider litellm --model ollama/llama3 --base-url http://localhost:11434
Permission Modes
Agency provides three permission modes to control agent behavior:
| Mode | Behavior |
|---|---|
| `default` | Write operations require confirmation; high-risk operations always prompt |
| `plan` | Read-only mode; no file modifications or command execution |
| `auto` | Auto-execute non-high-risk operations; high-risk still requires confirmation |
agency run --permission-mode auto
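As an illustration of what each mode implies, the decision logic can be sketched roughly like this (a simplified model, not Agency's actual `CapabilityPermissionGate` implementation; the mode names and risk levels come from the tables in this README):

```python
def decide(mode: str, risk: str) -> str:
    """Return 'run', 'confirm', or 'deny' for a proposed tool call.

    `risk` is one of 'read', 'write', 'high', mirroring Agency's
    read / write / high-risk classification. Illustrative only.
    """
    if mode == "plan":
        # Read-only: anything that mutates state is denied outright.
        return "run" if risk == "read" else "deny"
    if mode == "auto":
        # Auto-execute everything except high-risk operations.
        return "confirm" if risk == "high" else "run"
    # default mode: reads run freely; writes and high-risk prompt.
    return "run" if risk == "read" else "confirm"

print(decide("plan", "write"))     # deny
print(decide("auto", "write"))     # run
print(decide("default", "write"))  # confirm
```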
Built-in Tools (30+)
File Operations
- `read_file` - Read file contents with optional line limits
- `write_file` - Create or overwrite files
- `edit_file` - Intelligent file editing with text matching
Command Execution
- `bash` - Run shell commands with security validation
- `background_run` - Execute long-running commands in the background
- `background_list` / `background_get` - Manage background tasks
Task Management
- `task_create` - Create tasks with descriptions and roles
- `task_list` / `task_get` / `task_update` - Full task lifecycle management
- `todo_write` - Session-level plan tracking
Scheduling
- `schedule_create` - Cron-based task scheduling
- `schedule_list` / `schedule_delete` - Manage scheduled tasks
Memory & Skills
- `save_memory` - Persist project knowledge across sessions
- `load_skill` - Load reusable skill modules
Team Collaboration
- `teammate_spawn` - Spawn additional AI agents
- `teammate_list` - View active teammates
- `message_send` / `inbox_read` - Inter-agent communication
- `broadcast` - Send messages to multiple teammates
- `shutdown_request` / `shutdown_respond` - Teammate lifecycle management
Plan Approval
- `plan_approval_request` - Request approval for implementation plans
- `plan_approval_respond` - Respond to approval requests
Git Worktree
- `worktree_create` - Create isolated Git worktrees
- `worktree_list` / `worktree_bind` / `worktree_closeout` - Worktree lifecycle
MCP Plugin Integration
Extend Agency's capabilities by connecting MCP (Model Context Protocol) servers. Configure plugins in `.claude-plugin/plugin.json`:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"],
"env": {}
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "your-token"
}
}
}
}
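To make the shape of this file concrete, a client could parse it along these lines (a hypothetical sketch for illustration; Agency's actual MCP client lives in `connectors/mcp/` and is not shown here):

```python
import json

def load_mcp_servers(path: str) -> dict:
    """Parse a plugin.json and return {name: (argv, env)} launch specs.

    Each entry in "mcpServers" describes a subprocess to launch:
    the command, its arguments, and any extra environment variables.
    """
    with open(path) as f:
        config = json.load(f)
    servers = {}
    for name, spec in config.get("mcpServers", {}).items():
        argv = [spec["command"], *spec.get("args", [])]
        servers[name] = (argv, spec.get("env", {}))
    return servers
```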
Bash Security
Agency validates all shell commands before execution:
- Dangerous patterns: `rm -rf`, `sudo`, shell metacharacter injection
- Risk classification: read / write / high-risk
- Confirmation prompts: High-risk operations require explicit user approval
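A minimal sketch of what such a classifier might look like (illustrative only; the patterns below are a small invented subset, and Agency's real `BashSecurityValidator` is more thorough):

```python
import re

# Patterns that force the high-risk classification (illustrative subset).
HIGH_RISK = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\b",  # rm -rf and variants
    r"\bsudo\b",
    r"[;&|`$()]",                          # metacharacters enabling injection
]

# Hints that a command mutates the filesystem or repository.
WRITE_HINTS = [r"\b(mv|cp|mkdir|touch|git\s+(commit|push|checkout))\b", r">"]

def classify(command: str) -> str:
    """Classify a shell command as 'read', 'write', or 'high-risk'."""
    if any(re.search(p, command) for p in HIGH_RISK):
        return "high-risk"
    if any(re.search(p, command) for p in WRITE_HINTS):
        return "write"
    return "read"
```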
Architecture
agency/
├── cli.py # CLI entry point
├── runtime/
│ ├── agent_loop.py # Core REPL loop with streaming support
│ └── bootstrap.py # Agent factory
├── core/
│ ├── config.py # Environment configuration
│ ├── compact.py # Context compaction & output persistence
│ ├── managers.py # TodoManager for plan tracking
│ ├── hooks/ # PreToolUse, PostToolUse, SessionStart
│ ├── permissions/ # CapabilityPermissionGate, BashSecurityValidator
│ └── prompt/ # System prompt construction
├── capabilities/
│ ├── background/ # Background task service
│ ├── memory/ # Persistent memory with dream() interface
│ ├── scheduler/ # Cron-based task scheduler
│ ├── skills/ # Skill loading system
│ ├── tasks/ # Task state machine
│ ├── team/ # Multi-agent communication protocol
│ └── worktree/ # Git worktree management
├── connectors/
│ ├── fs/ # Safe file system operations
│ ├── llm/ # LLM provider adapters
│ ├── mcp/ # MCP protocol client
│ └── shell/ # Shell command execution
├── tools/
│ ├── registry.py # Tool definitions
│ └── dispatcher.py # Tool call dispatcher
└── ui/
├── completer.py # Tab completion
├── input.py # Key bindings
├── pager.py # Paginated output
└── progress.py # Progress indicators
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `AGENCY_LLM_PROVIDER` | LLM provider (`litellm` or `anthropic`) | `litellm` |
| `AGENCY_LLM_MODEL` | Model identifier | Required |
| `AGENCY_LLM_API_KEY` | API key | Required |
| `AGENCY_LLM_BASE_URL` | Custom gateway URL | Optional |
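Since `agency config set` writes these settings to `.env`, a configured workspace might contain something like this (values are placeholders for illustration):

```
AGENCY_LLM_PROVIDER=anthropic
AGENCY_LLM_MODEL=claude-3-5-sonnet-20241022
AGENCY_LLM_API_KEY=sk-...
```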
Workspace Structure
When running, Agency creates the following files and directories in the workspace:
| Path | Purpose |
|---|---|
| `.env` | LLM API configuration |
| `.memory/` | Persistent memory storage |
| `.tasks/` | Task state and event logs |
| `.runtime-tasks/` | Background task status |
| `.claude/scheduled_tasks.json` | Scheduled task configuration |
| `.team/` | Team member data and messages |
| `.worktrees/` | Git worktree environments |
| `.hooks.json` | Custom hook configurations |
| `.transcripts/` | Conversation logs |
CLI Commands
agency run [options] Start interactive agent
--permission-mode MODE default | plan | auto (default: default)
--model MODEL Override default model
--workspace PATH Working directory (default: current)
agency config set Write settings to .env
--provider PROVIDER litellm | anthropic
--model MODEL Model identifier
--api-key KEY API key
--base-url URL Custom gateway URL
agency config show Display current configuration
agency config init Interactive configuration wizard
agency config unset Remove settings from .env
REPL Commands
| Command | Description |
|---|---|
| `/tools` | List all available tools |
| `/mcp` | Show MCP server status |
| `/plan` | Display current plan/tasks |
| `/exit` or `/quit` | End session |
Hooks System
Extend Agency's behavior with custom hooks:
{
"PreToolUse": {
"command": "python check_permission.py"
},
"PostToolUse": {
"command": "python log_tool.py"
},
"SessionStart": {
"command": "python on_start.py"
}
}
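The payload a hook receives is an implementation detail not documented here; as an assumption for illustration, suppose a `PreToolUse` hook reads the tool call as JSON on stdin and blocks it by exiting non-zero. A hypothetical `check_permission.py` could then look like this (the payload shape and exit-code contract are assumptions, and the path policy is just an example):

```python
import json
import sys

def should_block(tool_name: str, args: dict) -> bool:
    """Example policy: refuse writes to system paths."""
    if tool_name in ("write_file", "edit_file"):
        path = str(args.get("path", ""))
        return path.startswith(("/etc", "/usr", "~"))
    return False

def main() -> int:
    # Assumed contract: the tool call arrives as JSON on stdin,
    # and a non-zero exit code tells Agency to block the call.
    call = json.load(sys.stdin)
    return 1 if should_block(call.get("tool", ""), call.get("args", {})) else 0
```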
Development
Local Setup
git clone <repository>
cd agency
pip install -e .
Code Quality
# Format code
black src/
# Lint
ruff check src/
# Type check
mypy src/
# Run tests
pytest tests/
Publish New Version
# Update version in pyproject.toml
git add .
git commit -m "Release v0.2.0"
git tag v0.2.0
git push origin v0.2.0
The GitHub Actions workflow will automatically build and publish to PyPI.
Dependencies
anthropic>=0.54.0 # Anthropic API SDK
python-dotenv>=1.0.1 # .env file support
litellm>=1.48.0 # Multi-LLM provider adapter
rich>=13.9.0 # Terminal output formatting
prompt_toolkit>=3.0.41 # REPL interface
blessed>=1.19.0 # Terminal capabilities
License
MIT License - see LICENSE for details.
Disclaimer
Agency executes file writes, edits, and shell commands. Use only in trusted environments and carefully confirm all high-risk operations.