AI-powered terminal assistant with rich TUI - brings ChatGPT/Claude to your command line
Consoul
A beautiful terminal-based AI chat interface built with Textual and LangChain
Consoul brings the power of modern AI assistants directly to your terminal with a rich, interactive TUI. Built on Textual's reactive framework and LangChain's provider abstraction, it offers a ChatGPT/Claude-like experience without leaving your command line.
Quick Start
Installation

```shell
pip install consoul
export ANTHROPIC_API_KEY=your-key-here  # Or OPENAI_API_KEY, GOOGLE_API_KEY
```
Minimal Example

```python
from consoul import Consoul

console = Consoul()
print(console.chat("What is 2+2?"))
print(console.chat("What files are in the current directory?"))
```
Quick Customization

```python
from consoul import Consoul

# Customize as needed
console = Consoul(
    model="gpt-4o",     # Auto-detect provider
    profile="default",  # Use built-in profile
    tools=True,         # Enable bash execution with approval
    temperature=0.7,
)

# Stateful conversation - history is maintained
console.chat("List all Python files in this directory")
console.chat("Show me the first one")

# Rich response with metadata
response = console.ask("Summarize this project", show_tokens=True)
print(f"\nResponse: {response.content}")
print(f"Tokens used: {response.tokens}")
print(f"Model: {response.model}")

# Introspection
print(f"\nSettings: {console.settings}")
print(f"Last cost: {console.last_cost}")
```
Terminal Interface
For the full interactive TUI:
```shell
consoul                                          # Launch interactive mode
consoul chat "Explain quantum computing"         # One-off question
consoul chat --model gpt-4o "Your question"      # Use specific model
consoul --profile creative chat "Write a poem"   # Use specific profile
```
Features
- Beautiful TUI - Rich, interactive terminal interface powered by Textual
- Multi-Provider Support - OpenAI, Anthropic Claude, Google Gemini, Ollama, HuggingFace, LlamaCpp (GGUF)
- Tool Calling - AI-powered command execution with security controls
- File Editing - AI-powered file manipulation with safety controls and progressive matching
- Code Search - AST-based semantic search across Python, TypeScript, Go, Rust, Java, C/C++
- Image Analysis - Multimodal vision support for analyzing screenshots, diagrams, and UI designs
- Conversation History - Save and resume conversations
- Flexible Configuration - YAML-based profiles with environment overrides
- Security-First - Multi-layer approval system and audit logging
- Streaming Responses - Real-time token streaming
- Profile System - Switch between different AI behaviors and settings
Tool Calling
Consoul includes a powerful tool calling system that lets AI models execute commands and interact with your system safely.
Security Features
- Risk Classification: Every tool is classified as SAFE, CAUTION, DANGEROUS, or BLOCKED
- Permission Policies: Choose from PARANOID, BALANCED, TRUSTING, or UNRESTRICTED
- User Approval: Interactive confirmation for dangerous operations
- Command Validation: Pattern-based blocking of dangerous commands
- Audit Logging: Complete execution history in JSONL format
- Whitelist/Blacklist: Fine-grained control over allowed commands
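To illustrate how the risk levels and permission policies above interact, here is a standalone sketch. The enum names come from this README, but the auto-approval matrix and function are assumptions modeled on the Permission Policies table later in this section, not Consoul's actual internals:

```python
from enum import Enum

class RiskLevel(Enum):
    SAFE = "safe"
    CAUTION = "caution"
    DANGEROUS = "dangerous"
    BLOCKED = "blocked"

# Hypothetical matrix: which risk levels each policy auto-approves.
AUTO_APPROVE = {
    "paranoid": set(),
    "balanced": {RiskLevel.SAFE},
    "trusting": {RiskLevel.SAFE, RiskLevel.CAUTION},
    "unrestricted": {RiskLevel.SAFE, RiskLevel.CAUTION, RiskLevel.DANGEROUS},
}

def needs_approval(policy: str, risk: RiskLevel) -> bool:
    """BLOCKED tools are always rejected; others prompt unless auto-approved."""
    if risk is RiskLevel.BLOCKED:
        raise PermissionError("tool is blocked outright")
    return risk not in AUTO_APPROVE[policy]

print(needs_approval("balanced", RiskLevel.SAFE))       # False: auto-approved
print(needs_approval("balanced", RiskLevel.DANGEROUS))  # True: prompt the user
```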
Quick Start
Enable tool calling in your configuration:

```yaml
profiles:
  default:
    tools:
      enabled: true
      permission_policy: balanced  # Recommended default
```

Start Consoul and ask the AI to run commands:

```shell
consoul
> "What files are in the current directory?"
```

The AI will request to use bash_execute, and you'll see an approval modal if required by your security policy.
Available Tools
- bash_execute - Execute bash commands with security controls, timeout enforcement, and output capture
Permission Policies
Choose your security posture:
| Policy | SAFE Commands | CAUTION Commands | DANGEROUS Commands | Use Case |
|---|---|---|---|---|
| PARANOID | ⚠️ Prompt | ⚠️ Prompt | ⚠️ Prompt | Production, maximum security |
| BALANCED ⭐ | ✅ Auto | ⚠️ Prompt | ⚠️ Prompt | Recommended default |
| TRUSTING | ✅ Auto | ✅ Auto | ⚠️ Prompt | Development, convenience |
| UNRESTRICTED | ✅ Auto | ✅ Auto | ✅ Auto | Testing only, DANGEROUS |

Legend: ✅ Auto-approve ⚠️ Require approval
Example Configuration
```yaml
tools:
  enabled: true
  permission_policy: balanced
  audit_logging: true
  audit_log_file: ~/.consoul/tool_audit.jsonl
  bash:
    timeout: 60
    whitelist_patterns:
      - "git status"
      - "git log"
      - "ls"
      - "pwd"
    blocked_patterns:
      - "^sudo\\s"
      - "rm\\s+(-[rf]+\\s+)?/"
```
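The blocked patterns are regular expressions matched against the command string. A quick sanity check of the two patterns above (a standalone sketch, not Consoul's actual validator):

```python
import re

# Patterns copied from the config above (unescaped from YAML).
blocked = [r"^sudo\s", r"rm\s+(-[rf]+\s+)?/"]

def is_blocked(command: str) -> bool:
    """True if the command matches any blocked pattern."""
    return any(re.search(p, command) for p in blocked)

print(is_blocked("sudo apt install vim"))  # True: matches ^sudo\s
print(is_blocked("rm -rf /"))              # True: rm with flags targeting /
print(is_blocked("git status"))            # False: not blocked
```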
Custom Tools
Create custom tools with LangChain's @tool decorator:

```python
from langchain_core.tools import tool

from consoul.ai.tools import RiskLevel, ToolRegistry

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # Implementation here
    return f"Weather in {location}: Sunny, 72°F"

# Register with Consoul (`config` is your loaded Consoul configuration)
registry = ToolRegistry(config=config.tools)
registry.register(get_weather, risk_level=RiskLevel.SAFE)
```
Documentation
- Complete Tool Calling Guide - Comprehensive documentation
- Configuration Examples - Pre-configured templates
- Custom Tool Development - Working code examples
Security Warning
⚠️ Tool calling is powerful but potentially dangerous. Always:
- Review commands before approval
- Use appropriate permission policies for your environment
- Enable audit logging for accountability
- Never use UNRESTRICTED policy in production
- Keep whitelists minimal and specific
See the Security Policy for best practices.
Code Search
Consoul includes powerful code search tools for semantic code analysis and navigation.
Available Search Tools
- grep_search - Fast text-based pattern matching (uses ripgrep)
- code_search - AST-based symbol search (find function/class definitions)
- find_references - Symbol usage finder (find all usages of a symbol)
Quick Start
Enable tools in your configuration:

```yaml
profiles:
  default:
    tools:
      enabled: true  # Enables all search tools
```

Start Consoul and use natural language:

```shell
consoul
> "Find all TODO comments in Python files"          # → grep_search
> "Find the ToolRegistry class definition"          # → code_search
> "Find all usages of bash_execute in the project"  # → find_references
```
Programmatic Usage

```python
from consoul import Consoul

console = Consoul(tools=True)

# Find function definitions
console.chat("Find the calculate_total function")

# Find all usages
console.chat("Find all places where calculate_total is called")

# Complex workflow
console.chat("""
First find the ShoppingCart class definition,
then find all places where it's instantiated
""")
```
Direct Tool Usage

```python
from consoul.ai.tools import code_search, find_references, grep_search

# Fast text search
result = grep_search.invoke({
    "pattern": "TODO",
    "glob_pattern": "*.py"
})

# Find function definition
result = code_search.invoke({
    "query": "calculate_total",
    "symbol_type": "function"
})

# Find all usages
result = find_references.invoke({
    "symbol": "bash_execute",
    "scope": "project"
})
```
Tool Comparison
| When to Use | Tool | Why |
|---|---|---|
| Find text patterns, TODOs, comments | grep_search |
Fast text matching |
| Find where a function is defined | code_search |
Semantic definition search |
| Find all usages of a symbol | find_references |
Reference tracking |
| Search across any file type | grep_search |
Works on all text |
| Understand code structure | code_search |
AST-based understanding |
Language Support
| Language | grep_search | code_search | find_references |
|---|---|---|---|
| Python | ✅ | ✅ | ✅ |
| JavaScript/TypeScript | ✅ | ✅ | ✅ |
| Go | ✅ | ✅ | ✅ |
| Kotlin | ✅ | ✅ | ✅ |
| Java | ✅ | ✅ | ✅ |
| Rust | ✅ | ✅ | ❌ |
| C/C++ | ✅ | ✅ | ❌ |

Legend: ✅ Full support | ❌ No support
Note: find_references currently supports Python, JavaScript/TypeScript, Go, Kotlin, and Java. For other languages, use grep_search for text-based reference finding.
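For languages without find_references support, a text-based fallback is easy to sketch with the standard library (illustrative only; the actual grep_search tool shells out to ripgrep and is much faster):

```python
import re
from pathlib import Path

def grep_references(symbol: str, root: str, glob: str = "*") -> list:
    """Collect (path, line number, line) for whole-word matches of `symbol`
    in files under `root`. A crude stand-in for find_references."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\b")
    hits = []
    for path in Path(root).rglob(glob):
        if not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```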
Performance
- grep_search: Very fast (<1s typical)
- code_search: Fast with cache (~2s first run, <1s cached)
- find_references: Medium (~3s first run, <1s cached)
Cache benefit: 5-10x speedup on repeated searches
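The cache benefit comes from reusing parsed results across repeated searches. The effect can be illustrated with a simple memoized function; the 0.1 s sleep stands in for the AST-parsing pass, and nothing here is Consoul's actual cache implementation:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def parse_project(root: str) -> tuple:
    """Stand-in for the expensive parsing pass behind code_search."""
    time.sleep(0.1)  # simulate parsing work
    return ("calculate_total", "ShoppingCart")

t0 = time.perf_counter(); parse_project("."); cold = time.perf_counter() - t0
t0 = time.perf_counter(); parse_project("."); warm = time.perf_counter() - t0
print(f"cold: {cold:.3f}s, warm: {warm:.6f}s")  # warm run skips the parse entirely
```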
Documentation
- Code Search Guide - Comprehensive usage guide
- Troubleshooting - Common issues and solutions
- Code Examples - Working Python examples
Image Analysis
Analyze images with vision-capable AI models (Claude 3.5, GPT-4o, Gemini 2.0 Flash).
Supported Use Cases
- Debug screenshots - Analyze error messages and terminal output
- UI/UX review - Get feedback on designs and mockups
- Diagram analysis - Understand architecture and flowcharts
- Code extraction - Extract code from screenshots
- Visual comparison - Compare multiple images side-by-side
Quick Start
Method 1: Attach files using the attachment button
- Click the attachment button in the TUI input area
- Select image files (PNG, JPEG, GIF, WebP)
- Type your question
- Press Enter
Method 2: Reference images in your message

```shell
consoul
> Explain the error in terminal_error.png
> Compare design_v1.png and design_v2.png
> Is this interface accessible? ui_mockup.png
```
Configuration

```yaml
active_profile: vision

profiles:
  vision:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
    tools:
      image_analysis:
        enabled: true
        auto_detect_in_messages: true  # Detect image paths automatically
        max_image_size_mb: 5.0
        max_images_per_query: 5
```
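The max_image_size_mb and max_images_per_query limits imply a pre-flight check before images are sent to the model. A sketch of what such validation could look like (hypothetical helper, not Consoul's actual code):

```python
import os

MAX_IMAGE_SIZE_MB = 5.0
MAX_IMAGES_PER_QUERY = 5
ALLOWED_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def validate_images(paths: list) -> list:
    """Reject queries that exceed the configured limits or use unsupported formats."""
    if len(paths) > MAX_IMAGES_PER_QUERY:
        raise ValueError(f"at most {MAX_IMAGES_PER_QUERY} images per query")
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        if ext not in ALLOWED_EXTS:
            raise ValueError(f"unsupported image format: {path}")
        if os.path.getsize(path) > MAX_IMAGE_SIZE_MB * 1024 * 1024:
            raise ValueError(f"{path} exceeds {MAX_IMAGE_SIZE_MB} MB")
    return paths
```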
Supported Models
| Provider | Models |
|---|---|
| Anthropic | claude-3-5-sonnet-20241022, claude-3-opus-20240229 |
| OpenAI | gpt-4o, gpt-4o-mini |
| Google | gemini-2.0-flash, gemini-1.5-pro |
| Ollama | llava:latest (fully local, private) |
Programmatic Usage

```python
from consoul import Consoul

# Initialize with vision model
consoul = Consoul(model="claude-3-5-sonnet-20241022")

# Analyze a screenshot
response = consoul.chat(
    "What error is shown in this screenshot?",
    image_paths=["terminal_error.png"]
)

# Compare multiple images
comparison = consoul.chat(
    "Which design is better for mobile?",
    image_paths=["design_a.png", "design_b.png"]
)
```
Documentation
- Image Analysis Guide - Complete feature documentation
- Configuration - Detailed config options
- Code Examples - Working Python examples
Documentation
- Installation Guide
- Quick Start
- Configuration Reference
- Tool Calling Guide
- SDK Integration Guide - Embed Consoul in your application
- Development Guide
SDK Integration
Consoul is designed as an SDK for embedding AI capabilities into your applications. Integrate tool calling without the TUI:
CLI Tools

```python
from consoul.ai.tools import RiskLevel, ToolRegistry, bash_execute
from consoul.ai.tools.providers import CliApprovalProvider

# `config` is your loaded Consoul configuration
provider = CliApprovalProvider(verbose=True)
registry = ToolRegistry(config.tools, approval_provider=provider)
registry.register(bash_execute, risk_level=RiskLevel.CAUTION)
```
Web Applications

```python
class WebApprovalProvider:
    async def request_approval(self, request):
        # Send to your web API
        response = await http_client.post("/approve", json=request.to_dict())
        return ToolApprovalResponse(**response.json())
```
Custom Audit Logging

```python
class DatabaseAuditLogger:
    async def log_event(self, event):
        await db.execute("INSERT INTO audit_log VALUES (...)", event.to_dict())
```
Complete examples: See examples/sdk/ for working code
Full documentation: SDK Integration Guide
API Keys
Consoul supports multiple AI providers. Set the appropriate environment variable:

```shell
# Anthropic Claude
export ANTHROPIC_API_KEY=your-key-here

# OpenAI
export OPENAI_API_KEY=your-key-here

# Google Gemini
export GOOGLE_API_KEY=your-key-here

# Ollama (no API key needed - runs locally)
# Just install from https://ollama.com

# LlamaCpp (no API key needed - runs GGUF models locally)
# Install: pip install llama-cpp-python
# macOS: CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python

# HuggingFace (optional - for API access)
export HUGGINGFACEHUB_API_TOKEN=your-key-here
```
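The Quick Customization example mentions that the provider is auto-detected from the model name, presumably by prefix. A sketch under that assumption (the prefix map and env-var table here are illustrative, not Consoul's actual detection logic):

```python
import os

# Hypothetical model-name prefixes; Consoul's real mapping may differ.
PROVIDER_PREFIXES = {
    "claude": "anthropic",
    "gpt": "openai",
    "gemini": "google",
    "llava": "ollama",
}

# Which env var each provider needs (None = runs locally, no key required).
REQUIRED_ENV = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "ollama": None,
}

def detect_provider(model: str) -> str:
    """Infer the provider from the model name's prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"cannot infer provider from model name: {model!r}")

def has_credentials(provider: str) -> bool:
    """True if the provider's API key is set (or no key is needed)."""
    var = REQUIRED_ENV[provider]
    return var is None or var in os.environ
```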
Configuration
Create ~/.consoul/config.yaml:

```yaml
profiles:
  default:
    model:
      provider: anthropic
      model: claude-3-5-sonnet-20241022
      temperature: 0.7
      max_tokens: 4096
    conversation:
      save_history: true
      max_history: 50
    tools:
      enabled: true
      permission_policy: balanced

  # Local GGUF model with LlamaCpp (recommended for macOS)
  local:
    model:
      provider: llamacpp
      model: DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf
      # model_path: /path/to/specific/model.gguf  # Optional: specify exact file
      n_ctx: 4096       # Context window size
      n_gpu_layers: -1  # -1 = use all GPU layers (Metal on macOS)
      temperature: 0.7
      max_tokens: 512
```
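The Features list mentions environment overrides on top of YAML profiles. One plausible override scheme, sketched below; the CONSOUL_ variable prefix and type-coercion rules are assumptions for illustration, and the real rules are in the Configuration Guide:

```python
import os

def apply_env_overrides(profile: dict, prefix: str = "CONSOUL_") -> dict:
    """Override profile keys from environment variables, coercing each value
    to the existing value's type (e.g. CONSOUL_TEMPERATURE=0.2 -> float)."""
    merged = dict(profile)
    for env_key, raw in os.environ.items():
        if not env_key.startswith(prefix):
            continue
        name = env_key[len(prefix):].lower()
        if name not in merged:
            continue  # unknown keys are ignored in this sketch
        current = merged[name]
        if isinstance(current, bool):
            merged[name] = raw.lower() in ("1", "true", "yes")
        else:
            merged[name] = type(current)(raw)
    return merged
```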
See Configuration Guide for all options.
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
Built with:
- Textual - the reactive TUI framework
- LangChain - provider abstraction for AI models
Download files
Source Distribution
Built Distribution
File details
Details for the file consoul-0.1.0.tar.gz.
File metadata
- Download URL: consoul-0.1.0.tar.gz
- Upload date:
- Size: 302.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.3 CPython/3.12.3 Darwin/24.6.0
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
b3043dd7638ddb742c24c60bfae9802774b54ce0ad3f8b48f183ae70d1f05b4b
|
|
| MD5 |
60deef94f69a9ff3e70f46f6cd7252ed
|
|
| BLAKE2b-256 |
6d5150ef44524c5c8df499c4d92363fc4277814bfc8295e8c28fa79834699f0a
|
File details
Details for the file consoul-0.1.0-py3-none-any.whl.
File metadata
- Download URL: consoul-0.1.0-py3-none-any.whl
- Upload date:
- Size: 363.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.3 CPython/3.12.3 Darwin/24.6.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 24eee633a99b7dc9504c5f9177ffb43d7bfafaf41ee471852ae18048d518509a |
| MD5 | e0bfada8537b9dc65736615b974b130b |
| BLAKE2b-256 | 68f8bd295a5bb02931ba80d59d317cda6b6be40846c99812068e8bf2b221f1fa |