Henchman-AI
Your AI Henchman for the Terminal - A Model-Agnostic AI Agent CLI
Henchman-AI is a powerful, terminal-based AI agent that supports multiple LLM providers (DeepSeek, OpenAI-compatible APIs, Anthropic, Ollama, OpenRouter, and more) through a unified interface. Inspired by gemini-cli, built for extensibility and production use.
Project Status: ✅ Complete - All 13 development phases implemented, including Multi-Agent Dev Team orchestration, MCP integration, and comprehensive tool system.
Features

- Multi-Agent Dev Team: Orchestrate a team of specialists (Tech Lead, Planner, Explorer, Engineer, Data Engineer) to solve complex engineering tasks.
- Model-Agnostic: Support any LLM provider through a unified abstraction layer
- Pythonic: Leverages Python's async ecosystem and rich libraries for optimal performance
- Extensible: Plugin system for tools, providers, and custom commands
- Production-Ready: Proper error handling, comprehensive testing with 100% coverage, and semantic versioning
- Built-in Tools: 34 tools for file operations, shell commands, web fetching, code analysis, testing, and multi-agent coordination
- MCP Integration: Connect to external tool servers via Model Context Protocol
- Fast & Efficient: Async-first design with intelligent caching and rate limiting
- Secure: Environment-based configuration and safe execution sandboxing
Installation
Prerequisites
- Python 3.10 or higher
- API Key: For your chosen LLM provider (DeepSeek, OpenAI, Anthropic, etc.)
- Git (optional): For installing from source
- uv (optional): For faster dependency management
From PyPI (Recommended)
pip install henchman-ai
From Source
git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
pip install -e ".[dev]"
With uv (Fastest)
uv pip install henchman-ai
Quick Start
- Set your API key (choose your preferred provider):

  # DeepSeek (default provider)
  export DEEPSEEK_API_KEY="your-api-key-here"
  # OpenAI or OpenAI-compatible services
  export OPENAI_API_KEY="your-api-key-here"
  # Anthropic Claude
  export ANTHROPIC_API_KEY="your-api-key-here"
  # OpenRouter
  export OPENROUTER_API_KEY="your-api-key-here"
  # For other providers, see the Providers documentation

- Start the CLI:

  henchman

- Or run with a prompt directly:

  henchman --prompt "Explain this Python code" < example.py
Architecture
Henchman-AI features a modular, component-based architecture designed for maintainability and extensibility. The core interactive REPL (Read-Eval-Print Loop) has been refactored into specialized components:
REPL Component Architecture
┌───────────────────────────────────────────────────────────────┐
│                      REPL (Orchestrator)                      │
│  ┌──────────┐   ┌──────────┐   ┌───────────┐   ┌──────────┐   │
│  │  Input   │   │  Output  │   │  Command  │   │   Tool   │   │
│  │ Handler  ├───┤ Handler  ├───┤ Processor ├───┤ Executor │   │
│  └──────────┘   └──────────┘   └───────────┘   └──────────┘   │
│       │              │               │              │         │
│       ▼              ▼               ▼              ▼         │
│  ┌─────────────────────────────────────────────────────────┐  │
│  │                Multi-Agent Orchestrator                 │  │
│  └─────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────┘
Component Responsibilities
- REPL (Orchestrator): Main coordination class
  - Initializes and connects all components
  - Manages the main interaction loop
  - Delegates work to specialized components
  - Maintains backward compatibility
- InputHandler: User input processing
  - Manages prompt sessions with history
  - Handles @file expansion and shell command detection
  - Processes keyboard interrupts and EOF
  - Validates and sanitizes user input
- OutputHandler: Console output and status display
  - Manages rich console output and formatting
  - Displays status bars and tool information
  - Shows welcome/goodbye messages
  - Handles event streaming and turn status
- CommandProcessor: Slash command execution
  - Processes /quit, /clear, /help, and other commands
  - Manages command registry and argument parsing
  - Delegates to specialized command handlers
  - Provides command completion and validation
- ToolExecutor: Tool execution and agent coordination
  - Executes tool calls from agents
  - Manages tool confirmation requests
  - Processes agent event streams
  - Handles tool iteration limits and cancellation
Benefits of Component Architecture
- Single Responsibility: Each component has a clear, focused purpose
- Testability: Components can be tested independently (100% test coverage for core components)
- Maintainability: Smaller, focused classes are easier to understand and modify
- Extensibility: New components can be added without modifying the REPL
- Performance: Business logic moved out of REPL, leaving only orchestration
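The delegation pattern described above can be sketched in plain Python. The class names mirror the components listed here, but this is an illustrative sketch, not Henchman-AI's actual API:

```python
class InputHandler:
    """User input processing: validate and sanitize raw input."""
    def read(self, raw: str) -> str:
        return raw.strip()


class CommandProcessor:
    """Slash command execution via a command registry."""
    def __init__(self) -> None:
        # Registry maps slash commands to handlers (contents hypothetical).
        self.registry = {"/help": lambda: "help text", "/quit": lambda: "bye"}

    def is_command(self, text: str) -> bool:
        return text.startswith("/")

    def run(self, text: str) -> str:
        handler = self.registry.get(text.split()[0])
        return handler() if handler else f"unknown command: {text}"


class ToolExecutor:
    """Tool execution and agent coordination (stubbed out here)."""
    def run_prompt(self, text: str) -> str:
        return f"agent response to: {text}"


class Repl:
    """Orchestrator: owns no business logic, only routes input."""
    def __init__(self) -> None:
        self.input = InputHandler()
        self.commands = CommandProcessor()
        self.tools = ToolExecutor()

    def handle(self, raw: str) -> str:
        text = self.input.read(raw)
        if self.commands.is_command(text):
            return self.commands.run(text)
        return self.tools.run_prompt(text)
```

Because the orchestrator only routes, each component can be swapped or unit-tested in isolation, which is the point of the benefits listed above.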
Usage Examples
Basic Commands
# Show version
henchman --version
# Show help
henchman --help
# Interactive mode (default)
henchman
# Headless mode with prompt
henchman -p "Summarize the key points from README.md"
# Specify a provider
henchman --provider openai_compat -p "Write a Python function to calculate fibonacci"
# Use a specific model
henchman --model gpt-4-turbo -p "Analyze this code for security issues"
File Operations
# Read and analyze a file
henchman -p "Review this code for bugs" < script.py
# Process multiple files
cat *.py | henchman -p "Find common patterns in these files"
# Generate documentation
henchman -p "Create API documentation for this module" < module.py > docs.md
Configuration
Henchman-AI uses hierarchical configuration (later settings override earlier ones):

- Default settings (built-in sensible defaults)
- User settings: ~/.henchman/settings.yaml
- Workspace settings: .henchman/settings.yaml (project-specific)
- Environment variables (highest priority)
Example settings.yaml
# Provider configuration
providers:
  default: deepseek  # or openai_compat, anthropic, ollama, openrouter
  deepseek:
    model: deepseek-chat
    base_url: "https://api.deepseek.com"
    temperature: 0.7
  openai_compat:
    model: gpt-4-turbo-preview
    organization: "org-xxx"

# Tool settings
tools:
  auto_accept_read: true
  shell_timeout: 60
  web_search_max_results: 5

# UI settings
ui:
  theme: "monokai"
  show_tokens: true
  streaming: true

# System settings
system:
  cache_enabled: true
  cache_ttl: 3600
  max_tokens: 4096
Environment Variables
# Provider API keys
export DEEPSEEK_API_KEY="sk-xxx"
export OPENAI_API_KEY="sk-xxx"
export ANTHROPIC_API_KEY="sk-xxx"
export OPENROUTER_API_KEY="sk-xxx"
export TOGETHER_API_KEY="sk-xxx"
export GROQ_API_KEY="sk-xxx"
export FIREWORKS_API_KEY="sk-xxx"
# Configuration overrides
export HENCHMAN_DEFAULT_PROVIDER="openai_compat"
export HENCHMAN_DEFAULT_MODEL="gpt-4"
export HENCHMAN_TEMPERATURE="0.5"
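As an illustration of the precedence rules above, here is a minimal merge sketch. The HENCHMAN_* variable names are the documented ones; the merge helper itself is hypothetical, not Henchman-AI's internal code:

```python
import os

def resolve_config(defaults: dict, user: dict, workspace: dict,
                   env=os.environ) -> dict:
    """Later layers override earlier ones; environment variables win."""
    merged = {**defaults, **user, **workspace}
    # Environment variables have the highest priority.
    if "HENCHMAN_DEFAULT_PROVIDER" in env:
        merged["default_provider"] = env["HENCHMAN_DEFAULT_PROVIDER"]
    if "HENCHMAN_DEFAULT_MODEL" in env:
        merged["default_model"] = env["HENCHMAN_DEFAULT_MODEL"]
    return merged
```

So a workspace .henchman/settings.yaml beats ~/.henchman/settings.yaml, and an exported HENCHMAN_DEFAULT_PROVIDER beats both.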
Supported Providers
| Provider | Status | Configuration Name | Notes |
|---|---|---|---|
| DeepSeek | ✅ | deepseek | Default provider, OpenAI-compatible API |
| Anthropic | ✅ | anthropic | Claude models (native SDK) |
| Ollama | ✅ | ollama | Local models, OpenAI-compatible API |
| OpenRouter | ✅ | openrouter | Access to hundreds of models, OpenAI-compatible API |
| OpenAI-Compatible | ✅ | openai_compat | Generic provider for ANY OpenAI-compatible API (OpenAI, Together, Groq, Fireworks, etc.) |
| Together AI | ✅ | together | Alias for openai_compat with Together base URL |
| Groq | ✅ | groq | Alias for openai_compat with Groq base URL |
| Fireworks AI | ✅ | fireworks | Alias for openai_compat with Fireworks base URL |
Note: The openai_compat provider is a generic provider for any OpenAI-compatible API. Specific providers like together, groq, and fireworks are aliases for openai_compat that require you to configure the appropriate base URL in your settings.
Configuration Names: Use these names in your settings.yaml file (e.g., providers.default: deepseek) or with the --provider CLI flag.
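For example, a settings.yaml using the groq alias might look like the following sketch. The model name is a placeholder, and you should confirm the base URL against Groq's own documentation:

```yaml
providers:
  default: groq
  groq:
    model: llama-3.1-70b-versatile  # placeholder model name
    base_url: "https://api.groq.com/openai/v1"
```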
See Providers for complete details and configuration.
Development
Setup Development Environment
# Clone and install
git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
For AI Agent Development: Henchman-AI includes comprehensive copilot instructions for AI agents working on the codebase. See .github/copilot-instructions.md for architecture overview, development workflows, performance optimization strategies, and agent effectiveness guidelines.
Running Tests
# Run all tests
pytest
# Run with coverage (100% required)
pytest --cov=henchman --cov-report=html --cov-fail-under=100
# Run specific test categories
pytest tests/unit/ -v
pytest tests/integration/ -v
Code Quality
# Linting
ruff check src/ tests/
ruff format src/ tests/
# Type checking
mypy src/
# Security scanning
bandit -r src/
Building and Publishing
# Build package
hatch build
# Test build
hatch run test
# Publish to PyPI (requires credentials)
hatch publish
Documentation
Online Documentation
For detailed documentation, see the docs directory in this repository:
- Getting Started
- Configuration Guide
- API Reference
- Tool Development
- Provider Integration
- MCP Integration
- Extensions
Building Documentation Locally
You can build and view the documentation locally:
# Install documentation dependencies
pip install mkdocs mkdocs-material "mkdocstrings[python]"
# Build static HTML documentation
python scripts/build_docs.py
# Or serve documentation locally (live preview)
mkdocs serve
The documentation will be available at http://localhost:8000 when served locally.
Contributing
We welcome contributions! Please see CONTRIBUTING.md for details.
Reporting Issues
Found a bug or have a feature request? Please open an issue on GitHub.
License
Henchman-AI is released under the MIT License. See the LICENSE file for details.
Acknowledgments
- Inspired by gemini-cli
- Built with Rich for beautiful terminal output
- Uses Pydantic for data validation
- Powered by the Python async ecosystem
Happy coding with your AI Henchman!