Kollabor
An advanced, highly customizable terminal-based chat application for interacting with Large Language Models (LLMs). Built with a powerful plugin system and comprehensive hook architecture for complete customization.
macOS: brew install kollaborai/tap/kollabor
Other: curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash
Run: kollab
Features
- Event-Driven Architecture: Everything has hooks - every action triggers customizable hooks that plugins can attach to
- Advanced Plugin System: Dynamic plugin discovery and loading with comprehensive SDK
- Rich Terminal UI: Beautiful terminal rendering with status areas, visual effects, and modal overlays
- Conversation Management: Persistent conversation history with full logging support
- Model Context Protocol (MCP): Built-in support for MCP integration
- Tool Execution: Function calling and tool execution capabilities
- Pipe Mode: Non-interactive mode for scripting and automation
- Environment Variable Support: Complete configuration via environment variables (API settings, system prompts, etc.)
- Extensible Configuration: Flexible configuration system with plugin integration
- Async/Await Throughout: Modern Python async patterns for responsive performance
Installation
macOS (Recommended)
Standard Homebrew installation - what most macOS users expect:
brew install kollaborai/tap/kollabor
To upgrade:
brew upgrade kollabor
One-Line Install (Cross-Platform)
Auto-detects the best method (uvx > pipx > pip):
curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash
Using uvx (Fastest, Isolated)
uvx runs the app in an isolated environment without installation:
uvx --from kollabor kollab
Or install to the uv tool cache for instant startup:
uv tool install kollabor
kollab
Using pipx (Isolated, Clean)
Recommended for user-space installation without system conflicts:
pipx install kollabor
Using pip
Standard Python package installation:
pip install kollabor
From Source
git clone https://github.com/kollaborai/kollabor-cli.git
cd kollabor-cli
pip install -e .
Development Installation
pip install -e ".[dev]"
Quick Start
Interactive Mode
Simply run the CLI to start an interactive chat session:
kollab
Pipe Mode
Process a single query and exit:
# Direct query
kollab "What is the capital of France?"
# From stdin
echo "Explain quantum computing" | kollab -p
# From file
cat document.txt | kollab -p
# With custom timeout
kollab --timeout 5min "Complex analysis task"
Configuration
On first run, Kollabor creates a .kollabor-cli directory in your current working directory:
.kollabor-cli/
├── config.json # User configuration
├── system_prompt/ # System prompt templates
├── logs/ # Application logs
└── state.db # Persistent state
Configuration Options
The configuration system uses dot notation:
- kollabor.llm.* - LLM service settings
- terminal.* - Terminal rendering options
- application.* - Application metadata
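A dot-notation key is just a path through nested configuration. A minimal resolver might look like this (an illustrative sketch, not Kollabor's actual implementation; the sample values are made up):

```python
def get_by_dots(config: dict, path: str, default=None):
    """Resolve a dot-notation key like 'kollabor.llm.model' against a
    nested dict, returning `default` when any segment is missing."""
    node = config
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

# Hypothetical configuration tree for illustration:
config = {"kollabor": {"llm": {"model": "llama3.1"}}, "terminal": {"width": 80}}
```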
Environment Variables
All configuration can be controlled via environment variables, which take precedence over config files:
API Configuration
Kollabor uses a profile-based configuration system. Each profile defines a provider, model, and connection settings. Environment variables follow the pattern KOLLABOR_{PROFILE_NAME}_{FIELD}.
Supported fields per profile: MODEL, PROVIDER, BASE_URL, API_KEY, MAX_TOKENS, TEMPERATURE, TIMEOUT, TOP_P, STREAMING, SUPPORTS_TOOLS, DESCRIPTION, EXTRA_HEADERS
Local LLM (Ollama, LM Studio, vLLM)
KOLLABOR_LOCAL_PROVIDER=custom
KOLLABOR_LOCAL_BASE_URL=http://localhost:11434/v1 # Ollama
KOLLABOR_LOCAL_MODEL=llama3.1
KOLLABOR_LOCAL_MAX_TOKENS=32768
KOLLABOR_LOCAL_TEMPERATURE=0.7
KOLLABOR_LMSTUDIO_PROVIDER=custom
KOLLABOR_LMSTUDIO_BASE_URL=http://localhost:1234/v1
KOLLABOR_LMSTUDIO_MODEL=qwen3-0.6b
OpenAI
# GPT-5.2: max output 128K tokens, 400K context
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=gpt-5.2
KOLLABOR_OPENAI_MAX_TOKENS=128000
KOLLABOR_OPENAI_TEMPERATURE=0.7
# GPT-5 mini: max output 128K tokens (faster, cost-efficient)
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=gpt-5-mini
KOLLABOR_OPENAI_MAX_TOKENS=128000
# o4-mini: max output 100K tokens (reasoning model)
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=o4-mini
KOLLABOR_OPENAI_MAX_TOKENS=100000
Anthropic Claude
# Claude Sonnet 4.6: max output 64K tokens
KOLLABOR_CLAUDE_PROVIDER=anthropic
KOLLABOR_CLAUDE_API_KEY=sk-ant-...
KOLLABOR_CLAUDE_MODEL=claude-sonnet-4-6
KOLLABOR_CLAUDE_MAX_TOKENS=64000
# Claude Opus 4.6: max output 128K tokens
KOLLABOR_OPUS_PROVIDER=anthropic
KOLLABOR_OPUS_API_KEY=sk-ant-...
KOLLABOR_OPUS_MODEL=claude-opus-4-6
KOLLABOR_OPUS_MAX_TOKENS=128000
Azure OpenAI
KOLLABOR_AZURE_PROVIDER=azure_openai
KOLLABOR_AZURE_BASE_URL=https://myresource.openai.azure.com
KOLLABOR_AZURE_API_KEY=your-azure-key
KOLLABOR_AZURE_MODEL=gpt-5-mini
KOLLABOR_AZURE_MAX_TOKENS=128000
Google Gemini
# Gemini 3.1 Pro (preview): max output 64K tokens, 1M context
KOLLABOR_GEMINI_PROVIDER=gemini
KOLLABOR_GEMINI_API_KEY=your-gemini-key
KOLLABOR_GEMINI_MODEL=gemini-3.1-pro-preview
KOLLABOR_GEMINI_MAX_TOKENS=64000
# Gemini 3 Flash (preview): max output 65K tokens, 1M context
KOLLABOR_GEMINI_PROVIDER=gemini
KOLLABOR_GEMINI_API_KEY=your-gemini-key
KOLLABOR_GEMINI_MODEL=gemini-3-flash-preview
KOLLABOR_GEMINI_MAX_TOKENS=65536
xAI Grok
# Grok 4.1 Fast: 2M context, ~30K max output
KOLLABOR_GROK_PROVIDER=custom
KOLLABOR_GROK_BASE_URL=https://api.x.ai/v1
KOLLABOR_GROK_API_KEY=your-xai-key
KOLLABOR_GROK_MODEL=grok-4-1-fast-reasoning
KOLLABOR_GROK_MAX_TOKENS=30000
# Grok Code: 256K context, 10K max output, coding-optimized
KOLLABOR_GROKCODE_PROVIDER=custom
KOLLABOR_GROKCODE_BASE_URL=https://api.x.ai/v1
KOLLABOR_GROKCODE_API_KEY=your-xai-key
KOLLABOR_GROKCODE_MODEL=grok-code-fast-1
KOLLABOR_GROKCODE_MAX_TOKENS=10000
Z.AI (Zhipu / GLM)
# GLM-5: max output 131K tokens, 205K context
KOLLABOR_GLM_PROVIDER=custom
KOLLABOR_GLM_BASE_URL=https://api.z.ai/api/paas/v4
KOLLABOR_GLM_API_KEY=your-zai-key
KOLLABOR_GLM_MODEL=glm-5
KOLLABOR_GLM_MAX_TOKENS=131072
# GLM-4.7 Flash: max output 131K tokens, 203K context (fast & cheap)
KOLLABOR_GLMFAST_PROVIDER=custom
KOLLABOR_GLMFAST_BASE_URL=https://api.z.ai/api/paas/v4
KOLLABOR_GLMFAST_API_KEY=your-zai-key
KOLLABOR_GLMFAST_MODEL=glm-4.7-flash
KOLLABOR_GLMFAST_MAX_TOKENS=131072
# Z.AI Coding Plan ($3/mo): use with coding tools
KOLLABOR_GLMCODE_PROVIDER=custom
KOLLABOR_GLMCODE_BASE_URL=https://api.z.ai/api/coding/paas/v4
KOLLABOR_GLMCODE_API_KEY=your-zai-key
KOLLABOR_GLMCODE_MODEL=glm-5
KOLLABOR_GLMCODE_MAX_TOKENS=131072
Kimi (Moonshot AI)
# Kimi K2.5: max output 65K tokens, 262K context
KOLLABOR_KIMI_PROVIDER=custom
KOLLABOR_KIMI_BASE_URL=https://api.moonshot.ai/v1
KOLLABOR_KIMI_API_KEY=your-moonshot-key
KOLLABOR_KIMI_MODEL=kimi-k2.5
KOLLABOR_KIMI_MAX_TOKENS=65535
# Kimi Coding Plan: 256K context, coding-optimized
KOLLABOR_KIMICODE_PROVIDER=custom
KOLLABOR_KIMICODE_BASE_URL=https://api.kimi.com/coding/v1
KOLLABOR_KIMICODE_API_KEY=your-kimi-key
KOLLABOR_KIMICODE_MODEL=kimi-for-coding
KOLLABOR_KIMICODE_MAX_TOKENS=32768
OpenRouter (Multi-Provider Gateway)
OpenRouter provides access to 300+ models from all providers through a single API key. Model IDs use provider/model format.
# Any model via OpenRouter
KOLLABOR_OPENROUTER_PROVIDER=openrouter
KOLLABOR_OPENROUTER_API_KEY=sk-or-...
KOLLABOR_OPENROUTER_MODEL=anthropic/claude-opus-4.6
KOLLABOR_OPENROUTER_MAX_TOKENS=128000
# More OpenRouter model ID examples:
# openai/gpt-5.2 google/gemini-3-flash
# x-ai/grok-4.1-fast z-ai/glm-5
# moonshotai/kimi-k2.5 deepseek/deepseek-v3.2
Switching Profiles
kollab --profile claude # Use a specific profile
kollab --profile local --save # Auto-create profile from env vars and save
Or use the /profile command interactively to list, switch, and create profiles.
System Prompt Configuration
# Direct string (highest priority)
KOLLABOR_SYSTEM_PROMPT="You are a helpful coding assistant."
# Custom file path
KOLLABOR_SYSTEM_PROMPT_FILE="./my_custom_prompt.md"
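The priority order above can be sketched as a small resolver: a direct KOLLABOR_SYSTEM_PROMPT string wins over KOLLABOR_SYSTEM_PROMPT_FILE, which wins over a built-in default (the default text here is hypothetical, and this is not Kollabor's actual resolution code):

```python
def resolve_system_prompt(env: dict,
                          default: str = "You are a helpful assistant.") -> str:
    """Resolve the system prompt by precedence: direct string, then
    file path, then the built-in default."""
    direct = env.get("KOLLABOR_SYSTEM_PROMPT")
    if direct:
        return direct
    path = env.get("KOLLABOR_SYSTEM_PROMPT_FILE")
    if path:
        # Fall through to the file's contents when a path is configured.
        with open(path, encoding="utf-8") as f:
            return f.read()
    return default
```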
Using .env Files
Create a .env file in your project root:
# Local LLM profile
KOLLABOR_LOCAL_PROVIDER=custom
KOLLABOR_LOCAL_BASE_URL=http://localhost:1234/v1
KOLLABOR_LOCAL_MODEL=qwen3-0.6b
# Cloud profile
KOLLABOR_CLAUDE_PROVIDER=anthropic
KOLLABOR_CLAUDE_API_KEY=sk-ant-your-key-here
KOLLABOR_CLAUDE_MODEL=claude-sonnet-4-20250514
# Custom system prompt
KOLLABOR_SYSTEM_PROMPT_FILE="./prompts/specialized.md"
Load and run:
set -a; source .env; set +a
kollab --profile local # Use local LLM
kollab --profile claude # Use Claude
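If you prefer loading the file from Python (for example in automation scripts), a minimal .env loader covering KEY=VALUE lines, comments, and simple quoting looks like this (a sketch only; the python-dotenv package offers a more complete implementation):

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, optional
    single or double quotes around values. Existing environment
    variables are not overwritten."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(),
                                  value.strip().strip('"').strip("'"))
```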
See ENV_VARS.md for complete documentation and examples.
Architecture
Kollabor follows a modular, event-driven architecture:
Core Components
- Application Core (kollabor/application.py): Main orchestrator
- Event System (kollabor/events/): Central event bus with hook system
- LLM Services (kollabor/llm/): API communication, conversation management, tool execution
- I/O System (kollabor/io/): Terminal rendering, input handling, visual effects
- Plugin System (kollabor/plugins/): Dynamic plugin discovery and loading
- Configuration (kollabor/config/): Flexible configuration management
- Storage (kollabor/storage/): State management and persistence
Plugin Development
Create custom plugins by inheriting from base plugin classes:
from kollabor.plugins import BasePlugin
from kollabor.events import EventType, HookPriority

class MyPlugin(BasePlugin):
    def register_hooks(self):
        """Register plugin hooks."""
        self.event_bus.register_hook(
            EventType.PRE_USER_INPUT,
            self.on_user_input,
            priority=HookPriority.NORMAL,
        )

    async def on_user_input(self, context):
        """Process user input before it's sent to the LLM."""
        # Your custom logic here
        return context

    def get_status_line(self):
        """Provide status information for the status bar."""
        return "MyPlugin: Active"
Hook System
The comprehensive hook system allows plugins to intercept and modify behavior at every stage:
- pre_user_input - Before processing user input
- pre_api_request - Before API calls to the LLM
- post_api_response - After receiving LLM responses
- pre_message_display - Before displaying messages
- post_message_display - After displaying messages
- And many more...
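Conceptually, each hook receives a context object, may transform it, and passes it on, with priority deciding the order. The following is an illustrative sketch of such a dispatch loop, not Kollabor's actual event bus (which lives in kollabor/events/ and is richer than this):

```python
import asyncio
from enum import IntEnum

class HookPriority(IntEnum):
    HIGH = 0    # runs first
    NORMAL = 50
    LOW = 100   # runs last

class TinyEventBus:
    """Toy priority-ordered hook dispatcher for illustration."""

    def __init__(self):
        self._hooks = {}

    def register_hook(self, event, callback, priority=HookPriority.NORMAL):
        self._hooks.setdefault(event, []).append((priority, callback))
        # Keep hooks sorted so lower priority values run first.
        self._hooks[event].sort(key=lambda pair: pair[0])

    async def emit(self, event, context):
        # Each hook gets the context and returns a (possibly modified) one.
        for _, callback in self._hooks.get(event, []):
            context = await callback(context)
        return context
```

A hook registered with HIGH priority sees the context before NORMAL and LOW hooks, so earlier hooks can rewrite what later hooks receive.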
Project Structure
kollabor/
├── kollabor/ # Core application modules
│ ├── application.py # Main orchestrator
│ ├── config/ # Configuration management
│ ├── events/ # Event bus and hooks
│ ├── io/ # Terminal I/O
│ ├── llm/ # LLM services
│ ├── plugins/ # Plugin system
│ └── storage/ # State management
├── plugins/ # Plugin implementations
├── docs/ # Documentation
├── tests/ # Test suite
└── main.py # Application entry point
Development
Running Tests
# All tests
python tests/run_tests.py
# Specific test file
python -m unittest tests.test_llm_plugin
# Individual test case
python -m unittest tests.test_llm_plugin.TestLLMPlugin.test_thinking_tags_removal
Code Quality
# Format code
python -m black kollabor/ plugins/ tests/ main.py
# Type checking
python -m mypy kollabor/ plugins/
# Linting
python -m flake8 kollabor/ plugins/ tests/ main.py --max-line-length=88
# Clean up cache files and build artifacts
python scripts/clean.py
Requirements
- Python 3.12 or higher
- aiohttp 3.8.0 or higher
License
MIT License - see LICENSE file for details
Contributing
Contributions are welcome! Please see the documentation for development guidelines.
Acknowledgments
Built with modern Python async/await patterns and designed for extensibility and customization.