
Henchman-AI

Your AI Henchman for the Terminal - A Model-Agnostic AI Agent CLI


Henchman-AI is a powerful, terminal-based AI agent that supports multiple LLM providers (DeepSeek, OpenAI, Anthropic, Ollama, and more) through a unified interface. Inspired by gemini-cli, built for extensibility and production use.

✨ Features

  • 🤝 Multi-Agent Dev Team: Orchestrate a team of specialists (Architect, Coder, Reviewer, Tester, etc.) to solve complex engineering tasks
  • 🔄 Model-Agnostic: Support any LLM provider through a unified abstraction layer
  • 🐍 Pythonic: Leverages Python's async ecosystem and rich libraries for optimal performance
  • 🔌 Extensible: Plugin system for tools, providers, and custom commands
  • 🚀 Production-Ready: Proper error handling, comprehensive testing, and semantic versioning
  • 🛠️ Tool Integration: Built-in support for file operations, web search, code execution, and more
  • ⚡ Fast & Efficient: Async-first design with intelligent caching and rate limiting
  • 🔒 Secure: Environment-based configuration and safe execution sandboxing

📦 Installation

From PyPI (Recommended)

pip install henchman-ai

From Source

git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
pip install -e ".[dev]"

With uv (Fastest)

uv pip install henchman-ai

🚀 Quick Start

  1. Set your API key (choose your preferred provider):

    export DEEPSEEK_API_KEY="your-api-key-here"
    # or
    export OPENAI_API_KEY="your-api-key-here"
    # or
    export ANTHROPIC_API_KEY="your-api-key-here"
    
  2. Start the CLI:

    henchman
    
  3. Or run with a prompt directly:

    henchman --prompt "Explain this Python code" < example.py
    

๐Ÿ—๏ธ Architecture

Henchman-AI features a modular, component-based architecture designed for maintainability and extensibility. The core interactive REPL (Read-Eval-Print Loop) has been refactored into specialized components:

REPL Component Architecture

┌─────────────────────────────────────────────────────────────┐
│                    REPL (Orchestrator)                      │
│  ┌──────────┐  ┌───────────┐  ┌─────────────┐  ┌─────────┐  │
│  │  Input   │  │  Output   │  │   Command   │  │  Tool   │  │
│  │ Handler  │◄─┤  Handler  │◄─┤  Processor  │◄─┤Executor │  │
│  └──────────┘  └───────────┘  └─────────────┘  └─────────┘  │
│       │              │               │              │       │
│       ▼              ▼               ▼              ▼       │
│  ┌───────────────────────────────────────────────────────┐  │
│  │               Multi-Agent Orchestrator                │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘

Component Responsibilities

  1. REPL (Orchestrator): Main coordination class (406 lines, down from 559)

    • Initializes and connects all components
    • Manages the main interaction loop
    • Delegates work to specialized components
    • Maintains backward compatibility
  2. InputHandler: User input processing

    • Manages prompt sessions with history
    • Handles @file expansion and shell command detection
    • Processes keyboard interrupts and EOF
    • Validates and sanitizes user input
  3. OutputHandler: Console output and status display

    • Manages rich console output and formatting
    • Displays status bars and tool information
    • Shows welcome/goodbye messages
    • Handles event streaming and turn status
  4. CommandProcessor: Slash command execution

    • Processes /quit, /clear, /help, and other commands
    • Manages command registry and argument parsing
    • Delegates to specialized command handlers
    • Provides command completion and validation
  5. ToolExecutor: Tool execution and agent coordination

    • Executes tool calls from agents
    • Manages tool confirmation requests
    • Processes agent event streams
    • Handles tool iteration limits and cancellation
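The division of responsibilities above can be sketched in miniature. This is an illustrative toy, not Henchman-AI's actual API — the class and method names here are hypothetical — but it shows the pattern: the orchestrator holds no business logic and only delegates to its components.

```python
from dataclasses import dataclass, field


@dataclass
class CommandProcessor:
    """Dispatches slash commands to registered handlers (hypothetical sketch)."""
    registry: dict = field(default_factory=dict)

    def register(self, name, handler):
        self.registry[name] = handler

    def process(self, line):
        name, _, args = line.lstrip("/").partition(" ")
        handler = self.registry.get(name)
        return handler(args) if handler else f"Unknown command: /{name}"


@dataclass
class Repl:
    """Orchestrator: owns components and delegates; no business logic of its own."""
    commands: CommandProcessor

    def handle(self, line):
        if line.startswith("/"):
            return self.commands.process(line)   # slash commands -> CommandProcessor
        return f"<sent to agent: {line}>"        # everything else -> agent loop


repl = Repl(commands=CommandProcessor())
repl.commands.register("help", lambda args: "Available commands: /help, /quit, /clear")

print(repl.handle("/help"))   # dispatched to the command processor
print(repl.handle("hello"))   # forwarded to the agent
```

Because the REPL only routes input, each component can be swapped or unit-tested in isolation — the property the benefits list below relies on.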

Benefits of Component Architecture

  • Single Responsibility: Each component has a clear, focused purpose
  • Testability: Components can be tested independently (100% test coverage for core components)
  • Maintainability: Smaller, focused classes are easier to understand and modify
  • Extensibility: New components can be added without modifying the REPL
  • Performance: Business logic moved out of REPL, leaving only orchestration

📖 Usage Examples

Basic Commands

# Show version
henchman --version

# Show help
henchman --help

# Interactive mode (default)
henchman

# Headless mode with prompt
henchman -p "Summarize the key points from README.md"

# Specify a provider
henchman --provider openai -p "Write a Python function to calculate fibonacci"

# Use a specific model
henchman --model gpt-4-turbo -p "Analyze this code for security issues"

File Operations

# Read and analyze a file
henchman -p "Review this code for bugs" < script.py

# Process multiple files
cat *.py | henchman -p "Find common patterns in these files"

# Generate documentation
henchman -p "Create API documentation for this module" < module.py > docs.md

⚙️ Configuration

Henchman-AI uses hierarchical configuration (later settings override earlier ones):

  1. Default settings (built-in sensible defaults)
  2. User settings: ~/.henchman/settings.yaml
  3. Workspace settings: .henchman/settings.yaml (project-specific)
  4. Environment variables (highest priority)
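The "later overrides earlier" rule amounts to a recursive dictionary merge across the four layers. Here is a minimal sketch of that merge (the layer contents below are illustrative, not Henchman-AI's internal implementation):

```python
from collections.abc import Mapping


def deep_merge(base, override):
    """Recursively merge two config mappings; values in `override` win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, Mapping) and isinstance(merged.get(key), Mapping):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults = {"providers": {"default": "deepseek"}, "system": {"max_tokens": 4096}}
user = {"providers": {"default": "openai"}}        # ~/.henchman/settings.yaml
workspace = {"system": {"max_tokens": 8192}}       # .henchman/settings.yaml
env = {"providers": {"default": "anthropic"}}      # e.g. HENCHMAN_DEFAULT_PROVIDER

config = defaults
for layer in (user, workspace, env):               # later layers override earlier ones
    config = deep_merge(config, layer)

print(config["providers"]["default"])  # anthropic (environment wins)
print(config["system"]["max_tokens"])  # 8192 (workspace wins over defaults)
```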

Example settings.yaml

# Provider configuration
providers:
  default: deepseek  # or openai, anthropic, ollama
  deepseek:
    model: deepseek-chat
    base_url: "https://api.deepseek.com"
    temperature: 0.7
  openai:
    model: gpt-4-turbo-preview
    organization: "org-xxx"

# Tool settings
tools:
  auto_accept_read: true
  shell_timeout: 60
  web_search_max_results: 5

# UI settings
ui:
  theme: "monokai"
  show_tokens: true
  streaming: true

# System settings
system:
  cache_enabled: true
  cache_ttl: 3600
  max_tokens: 4096

Environment Variables

# Provider API keys
export DEEPSEEK_API_KEY="sk-xxx"
export OPENAI_API_KEY="sk-xxx"
export ANTHROPIC_API_KEY="sk-xxx"

# Configuration overrides
export HENCHMAN_DEFAULT_PROVIDER="openai"
export HENCHMAN_DEFAULT_MODEL="gpt-4"
export HENCHMAN_TEMPERATURE="0.5"

🔌 Supported Providers

  Provider    Models                           Features
  DeepSeek    deepseek-chat, deepseek-coder    Free tier, code completion
  OpenAI      gpt-4, gpt-3.5-turbo, etc.       Function calling, JSON mode
  Anthropic   claude-3-opus, claude-3-sonnet   Long context, Constitutional AI
  Ollama      llama2, mistral, codellama       Local models, custom models
  Custom      Any OpenAI-compatible API        Self-hosted, local inference
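The reason any OpenAI-compatible API (including Ollama's) can slot in is the unified abstraction layer: every backend exposes the same chat interface and differs only in base URL, model name, and credentials. The sketch below illustrates that idea with hypothetical class names — it is not Henchman-AI's actual provider code — and uses a stub transport instead of real HTTP so it runs offline:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Unified interface: every backend exposes the same chat() call."""

    @abstractmethod
    def chat(self, messages: list) -> str: ...


class OpenAICompatible(Provider):
    """Any OpenAI-style API differs only in base_url, model, and API key."""

    def __init__(self, base_url, model, transport=None):
        self.base_url = base_url
        self.model = model
        self.transport = transport or self._http_post  # real HTTP in practice

    def _http_post(self, payload):
        raise NotImplementedError("a real client would POST to base_url here")

    def chat(self, messages):
        payload = {"model": self.model, "messages": messages}
        return self.transport(payload)


# A stub transport stands in for the network so the sketch is runnable.
def echo_transport(payload):
    return f"[{payload['model']}] {payload['messages'][-1]['content']}"


ollama = OpenAICompatible("http://localhost:11434/v1", "mistral",
                          transport=echo_transport)
print(ollama.chat([{"role": "user", "content": "hi"}]))  # [mistral] hi
```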

🛠️ Development

Setup Development Environment

# Clone and install
git clone https://github.com/MGPowerlytics/henchman-ai.git
cd henchman-ai
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e ".[dev]"

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=henchman --cov-report=html

# Run specific test categories
pytest tests/unit/ -v
pytest tests/integration/ -v

Code Quality

# Linting
ruff check src/ tests/
ruff format src/ tests/

# Type checking
mypy src/

# Security scanning
bandit -r src/

Building and Publishing

# Build package
hatch build

# Test build
hatch run test

# Publish to PyPI (requires credentials)
hatch publish

📚 Documentation

Online Documentation

For detailed documentation, see the docs directory in this repository.

Building Documentation Locally

You can build and view the documentation locally:

# Install documentation dependencies
pip install mkdocs mkdocs-material mkdocstrings[python]

# Build static HTML documentation
python scripts/build_docs.py

# Or serve documentation locally (live preview)
mkdocs serve

The documentation will be available at http://localhost:8000 when served locally.

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

๐Ÿ› Reporting Issues

Found a bug or have a feature request? Please open an issue on GitHub.

📄 License

Henchman-AI is released under the MIT License. See the LICENSE file for details.

🙏 Acknowledgments

  • Inspired by gemini-cli
  • Built with Rich for beautiful terminal output
  • Uses Pydantic for data validation
  • Powered by the Python async ecosystem

Happy coding with your AI Henchman! 🦸‍♂️🤖

Download files

Source Distribution

henchman_ai-0.3.4.tar.gz (693.0 kB)

Built Distribution

henchman_ai-0.3.4-py3-none-any.whl (235.1 kB)

File details

Details for the file henchman_ai-0.3.4.tar.gz.

File metadata

  • Size: 693.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

  Algorithm    Hash digest
  SHA256       51b62d677ab7b837bf598d350df29e218d923c9906c56a7a8d4863f50bc6c624
  MD5          b50300cae34b2b6550f6052b626c95c1
  BLAKE2b-256  b7ff6d5ec7492e35ae45c27917b64336412c2f4f600510b3222a8c4fb81c9ca7

File details

Details for the file henchman_ai-0.3.4-py3-none-any.whl.

File metadata

  • Size: 235.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

  Algorithm    Hash digest
  SHA256       3014306ed82675a36a298a116d750b9b6e2622fc0e67f23616c0873fe251d4c0
  MD5          a8212dae69dd6b7a72cf5670080aef1c
  BLAKE2b-256  474cf24c39e46b3d96232c5c6872af5efffe6cf29c673ae3cad5620bbb46fa28
