# DevOrch
A multi-provider AI coding assistant CLI, similar to Claude Code and Gemini CLI.
## Features
- 13 AI Providers - OpenAI, Anthropic, Gemini, Groq, Mistral, Together AI, OpenRouter, GitHub Copilot, DeepSeek, Kimi, Ollama, LM Studio, and Custom
- Custom Provider Support - Connect to ANY OpenAI-compatible API (vLLM, TGI, llama.cpp, etc.)
- Dynamic Model Listing - Fetches latest available models from provider APIs
- Secure API Key Storage - Uses system keychain (Windows Credential Manager, macOS Keychain, Linux Secret Service)
- Session Persistence - SQLite-based chat history with resume capability
- Powerful Tools - Shell, terminal sessions, file operations, search, grep, code editing, and web access
- Interactive UI - Arrow-key navigation for selections, syntax-highlighted output
- Permission System - Configurable allow/deny rules with interactive prompts
- Terminal Session Management - Run long-running servers and background processes
- Multiple Modes - Plan mode, Auto mode, and Ask mode
## Installation

### Option 1: Install with pipx (Recommended)

```bash
# Install pipx if you don't have it
pip install pipx

# Install DevOrch
pipx install devorch

# DevOrch is now available globally
devorch --help
```
### Option 2: Install with pip

```bash
pip install devorch
```
### Option 3: Install from source

```bash
# Clone the repository
git clone https://github.com/Amanbig/DevOrch.git
cd DevOrch

# Install with pip
pip install -e .
```
### Why pipx?

- ✅ Isolated environment (no conflicts with other packages)
- ✅ Available globally, like `npm install -g`
- ✅ Easy to uninstall: `pipx uninstall devorch`
- ✅ Easy to upgrade: `pipx upgrade devorch`
## Quick Start

```bash
# Start DevOrch (first run will show interactive setup)
devorch

# Or specify a provider
devorch -p openai
devorch -p anthropic
devorch -p groq
devorch -p local     # Ollama
```
## Interactive Onboarding

On first run, DevOrch guides you through setup with an interactive UI:

```text
╭─────────────────────────────────────────────────╮
│  Welcome to DevOrch!                            │
│                                                 │
│  Let's set up your AI provider to get started.  │
╰─────────────────────────────────────────────────╯

? Select your AI provider: (Use arrow keys)
❯ OpenAI (GPT-4o, GPT-4, etc.)
  Anthropic (Claude Sonnet, Opus, etc.)
  Google Gemini (Gemini Pro, Flash, etc.)
  Groq (Ultra-fast Llama, Mixtral)
  ──────────────
  Ollama - Local (No API key needed)
  LM Studio - Local (No API key needed)
```
## Usage

### Interactive REPL

```bash
devorch                   # Start interactive session
devorch -p groq           # Use specific provider
devorch -m gpt-4o         # Use specific model
devorch --resume abc123   # Resume a previous session
```
### Slash Commands

| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/mode` | Interactive mode selection (plan/auto/ask) |
| `/model` | Interactive model selection |
| `/provider` | Interactive provider switching |
| `/models` | List available models for the current provider |
| `/providers` | List all available providers |
| `/status` | Show current provider, model, and mode |
| `/session` | Show current session info |
| `/history` | Show conversation history |
| `/clear` | Clear conversation history |
| `/compact` | Summarize and compact history |
| `/save` | Save conversation to file |
| `/undo` | Undo last message |
| `/tasks` | Show current task list |
## Modes
- ASK (default) - Asks before each tool execution
- AUTO - Executes tools automatically (dangerous commands still blocked)
- PLAN - Shows plan before executing, asks for approval
Switch modes interactively:

```text
? Select mode: (Use arrow keys)
❯ PLAN - Shows plan before executing, asks for approval
  AUTO - Executes tools automatically (trusted mode)
  ASK  - Asks before each tool execution (default)
```
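The three modes differ only in when execution pauses for approval. A toy sketch of that gate, using the mode names from the list above (the logic is illustrative, not DevOrch's actual implementation):

```python
def needs_approval(mode: str, command_is_dangerous: bool) -> bool:
    """Illustrative gate: should this tool call pause for user approval?"""
    if mode == "AUTO":
        # AUTO runs tools automatically but still blocks dangerous commands
        return command_is_dangerous
    # ASK and PLAN both pause for approval before executing
    return True

print(needs_approval("AUTO", False))  # False
print(needs_approval("ASK", False))   # True
```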
## Tool Permissions

DevOrch uses an interactive permission system with arrow-key navigation:

```text
╭─────────── Permission Required ───────────╮
│ Tool: shell                               │
│ Command: npm create vite@latest           │
╰───────────────────────────────────────────╯

? Choose an action: (Use arrow keys)
❯ Allow once
  Allow for this session
  Always allow (save to config)
  Deny
```
## Tool Output Display

Tool calls are displayed in a clean, compact format:

```text
╭──────────── Shell ────────────╮
│ npm create vite@latest my-app │
╰───────────────────────────────╯
╭─────────── Output ────────────╮
│ STDOUT:                       │
│ Scaffolding project in ./my-  │
│ app                           │
│ Done!                         │
╰───────────────────────────────╯

> write 45 lines to src/App.tsx
✓ Successfully wrote 45 lines to src/App.tsx

> read package.json
✓ Read 32 lines
```
## New Provider Features

### GitHub Copilot Integration

Use your GitHub Copilot subscription to access multiple premium models:

```bash
# Get a GitHub token with the 'copilot' scope from:
# https://github.com/settings/tokens
export GITHUB_TOKEN=ghp_your_token

# Use Copilot
devorch -p github_copilot
devorch -p github_copilot -m claude-3.5-sonnet
```
Available models: GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, o1-preview, o1-mini
### DeepSeek AI

Powerful reasoning and coding models from DeepSeek:

```bash
export DEEPSEEK_API_KEY=sk-...
devorch -p deepseek -m deepseek-reasoner
```
Models: deepseek-chat, deepseek-coder, deepseek-reasoner (R1)
### Kimi (Moonshot AI)

Long-context models with up to 128K tokens:

```bash
export MOONSHOT_API_KEY=sk-...
devorch -p kimi -m moonshot-v1-128k
```
Models: moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k
## Custom Providers

Connect to ANY OpenAI-compatible API:

### Self-Hosted vLLM

```yaml
# ~/.devorch/config.yaml
providers:
  my_vllm:
    default_model: meta-llama/Meta-Llama-3-70B-Instruct
    base_url: http://localhost:8000/v1
```

```bash
# Start the vLLM server
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Meta-Llama-3-70B-Instruct \
  --port 8000

# Use it
devorch -p my_vllm
```
### Text Generation Inference (TGI)

```yaml
providers:
  my_tgi:
    default_model: mistralai/Mistral-7B-Instruct
    base_url: http://localhost:8080/v1
```
### llama.cpp Server

```yaml
providers:
  llamacpp:
    default_model: llama-3-8b
    base_url: http://localhost:8080/v1
```
### Private API Endpoint

```yaml
providers:
  company_api:
    default_model: custom-model-v1
    base_url: https://api.company.com/v1
    # Set the CUSTOM_API_KEY environment variable
```
## Dynamic Model Listing

All providers fetch available models from their APIs automatically:

```bash
# List available models
devorch models list -p deepseek
devorch models list -p github_copilot
devorch models list -p my_vllm
```
Models are fetched in real-time from provider APIs, so you always see the latest available models!
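The OpenAI-compatible route behind this is `GET /v1/models`, which returns the models as a JSON list under a `data` key. A minimal sketch of extracting model IDs from such a payload (the payload below is a hand-written example shaped like that response, not live output):

```python
def extract_model_ids(models_response: dict) -> list[str]:
    """Pull model IDs out of an OpenAI-style GET /v1/models payload."""
    return sorted(entry["id"] for entry in models_response.get("data", []))

# Hand-written payload shaped like an OpenAI /v1/models response
payload = {
    "object": "list",
    "data": [
        {"id": "deepseek-reasoner", "object": "model"},
        {"id": "deepseek-chat", "object": "model"},
    ],
}
print(extract_model_ids(payload))  # ['deepseek-chat', 'deepseek-reasoner']
```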
## Configuration
### API Keys

```bash
# Store API keys securely in the system keyring
devorch set-key openai
devorch set-key anthropic
devorch set-key groq
devorch set-key github_copilot   # Uses GITHUB_TOKEN
devorch set-key deepseek
devorch set-key kimi
```

Or use environment variables:

```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...
export GROQ_API_KEY=gsk_...
export MISTRAL_API_KEY=...
export OPENROUTER_API_KEY=sk-or-...
export TOGETHER_API_KEY=...
export GITHUB_TOKEN=ghp_...
export DEEPSEEK_API_KEY=sk-...
export MOONSHOT_API_KEY=sk-...
```
### View Configuration

```bash
devorch config
```
### Configuration File

Create `~/.devorch/config.yaml` to configure providers:

```yaml
# Set default provider
default_provider: openai

# Configure each provider
providers:
  openai:
    default_model: gpt-4o
  anthropic:
    default_model: claude-sonnet-4-20250514
  github_copilot:
    default_model: gpt-4o
  deepseek:
    default_model: deepseek-chat
  kimi:
    default_model: moonshot-v1-32k

  # Custom providers
  my_vllm:
    default_model: meta-llama/Meta-Llama-3-70B-Instruct
    base_url: http://localhost:8000/v1
  company_api:
    default_model: custom-model-v1
    base_url: https://api.company.com/v1
```

**Note:** Don't put API keys in config files! Use environment variables or the keyring.
## Session Management

```bash
devorch sessions list          # List all sessions
devorch sessions show <id>     # Show session details
devorch sessions delete <id>   # Delete a session
devorch sessions clear         # Delete all sessions
```
## Permissions

```bash
devorch permissions list                    # Show permissions
devorch permissions set shell allow         # Always allow shell commands
devorch permissions allow shell "git *"     # Allow git commands
devorch permissions deny shell "rm -rf *"   # Block dangerous commands
devorch permissions reset                   # Reset to defaults
```
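Saved rules live in `~/.devorch/permissions.yaml` (see Config Files below). The exact schema isn't documented here, so the following is a purely hypothetical sketch of how rules created by the commands above might be laid out:

```yaml
# ~/.devorch/permissions.yaml -- hypothetical layout, field names illustrative
shell:
  default: ask          # prompt unless a rule below matches
  allow:
    - "git *"           # from: devorch permissions allow shell "git *"
  deny:
    - "rm -rf *"        # from: devorch permissions deny shell "rm -rf *"
```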
## Supported Providers

### Cloud Providers

| Provider | Models | API Key | Notes |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4, o1-preview | `OPENAI_API_KEY` | Full support with tool calling |
| Anthropic | Claude 4.5, Claude 3.5, Claude 3 | `ANTHROPIC_API_KEY` | Best for coding tasks |
| Google Gemini | Gemini 2.0, 1.5 Pro/Flash | `GOOGLE_API_KEY` | 2M token context |
| Groq | Llama 3.3, Mixtral, Gemma | `GROQ_API_KEY` | Ultra-fast inference |
| Mistral | Large, Medium, Codestral | `MISTRAL_API_KEY` | Specialized for code |
| Together AI | Llama 3, Mixtral, Qwen | `TOGETHER_API_KEY` | Open-source models |
| OpenRouter | 100+ models | `OPENROUTER_API_KEY` | Access many providers via one API |
### Developer Tools

| Provider | Models | API Key | Notes |
|---|---|---|---|
| GitHub Copilot ⭐ | GPT-4o, Claude 3.5, o1 | `GITHUB_TOKEN` | Requires Copilot subscription |
### International Providers

| Provider | Models | API Key | Notes |
|---|---|---|---|
| DeepSeek ⭐ | Chat, Coder, Reasoner | `DEEPSEEK_API_KEY` | Powerful reasoning models |
| Kimi (Moonshot) ⭐ | 8K, 32K, 128K | `MOONSHOT_API_KEY` | Long context (128K tokens) |
### Local & Self-Hosted
| Provider | Models | Setup | Notes |
|---|---|---|---|
| Ollama | Llama 3, Mistral, CodeLlama | Install Ollama | Run models locally |
| LM Studio | Any GGUF model | Install LM Studio | GUI for local models |
| Custom ⭐ | Your choice | Configure endpoint | vLLM, TGI, llama.cpp, etc. |
⭐ = New providers
## Tools
DevOrch has access to these tools:
| Tool | Description |
|---|---|
| shell | Execute shell commands (quick commands with output capture) |
| open_terminal ⭐ | Open new terminal window for interactive/long-running commands |
| terminal_session ⭐ | Managed background sessions (start, read, send input, stop) |
| filesystem | Read, write, and list files |
| search | Find files by name patterns (glob) |
| grep | Search for text patterns in files |
| edit | Make targeted edits to existing files |
| task | Track progress on multi-step work with visual task list |
| websearch | Search the web for current information (uses DuckDuckGo) |
| webfetch | Fetch and read content from a URL |
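Assistants built on OpenAI-compatible APIs advertise tools to the model as JSON function schemas. As an illustration, a `shell` tool could be declared like this (the outer structure follows the standard OpenAI tool-calling format; the parameter names are assumptions, not DevOrch's actual schema):

```python
# OpenAI-style tool declaration (illustrative; parameter names are assumed)
shell_tool = {
    "type": "function",
    "function": {
        "name": "shell",
        "description": "Execute a shell command and capture stdout/stderr.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The shell command to run.",
                },
                "timeout": {
                    "type": "integer",
                    "description": "Seconds to wait before killing the command.",
                },
            },
            "required": ["command"],
        },
    },
}

print(shell_tool["function"]["name"])  # shell
```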
## Terminal Session Management

Run long-running servers and interact with them without blocking the chat:

```text
# The LLM can start a dev server in the background
> terminal_session start vite_server "npm run dev"
✓ Session 'vite_server' started (PID 12345)

# Continue chatting while the server runs

# Check server output
> terminal_session read vite_server
[Session 'vite_server' — running]
VITE v5.0.0 ready in 450 ms
➜ Local: http://localhost:5173/

# Send input to the process
> terminal_session send vite_server "rs\n"   # Restart

# Stop when done
> terminal_session stop vite_server
```
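Conceptually, a managed session is just a background child process whose stdin and stdout the assistant keeps handles to. A minimal standard-library sketch of the start/send/read cycle (not DevOrch's implementation):

```python
import subprocess
import sys

# Start a "session": a child process with piped stdin/stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", "print(input())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Send input to the running process (like `terminal_session send`),
# then collect its output (like `terminal_session read`).
out, _ = proc.communicate("hello\n", timeout=10)
print(out.strip())  # hello
```

A real implementation would keep the process alive between reads (e.g. a non-blocking reader thread per session) instead of calling `communicate`, which waits for the process to exit.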
## Task Tracking

DevOrch can track progress on complex tasks:

```text
╭─────────── Tasks (2/4) ───────────╮
│ ✓ Create project structure        │
│ ✓ Set up dependencies             │
│ ● Installing packages             │
│ ○ Run initial build               │
╰───────────────────────────────────╯
```

Use `/tasks` to view the current task list at any time.
## Config Files

DevOrch stores configuration in `~/.devorch/`:

```text
~/.devorch/
├── config.yaml        # Provider settings and default models
├── permissions.yaml   # Tool permission rules
└── sessions.db        # SQLite database for chat history
```
## Requirements

- Python 3.10+
- Dependencies:
  - typer, rich, pydantic
  - openai, anthropic, google-genai
  - httpx, keyring, prompt_toolkit
  - questionary, pyyaml, duckduckgo-search
## License
MIT