A CLI tool for getting quick command-line suggestions from virtually any available LLM
Quick Question (qq)
A powerful, cross-platform CLI tool that generates and executes terminal commands using 100+ LLM providers through LiteLLM integration. It intelligently prioritizes local models for privacy and falls back to cloud providers when configured.
Key Features
Universal LLM Support (100+ Providers via LiteLLM)
Local Providers (Privacy-first, no API keys):
- Ollama (port 11434) - Run open-source models locally
- LM Studio (port 1234) - GUI-based local model management
Major Cloud Providers:
- OpenAI (GPT-4o, GPT-5, ChatGPT models)
- Anthropic (Claude 3.5 Sonnet/Haiku/Opus)
- Google (Gemini, PaLM)
- Amazon Bedrock
- Azure OpenAI
- Groq (Fast inference)
- Grok (xAI)
Specialized Providers (via LiteLLM):
- Cohere, Replicate, Hugging Face
- Together AI, Anyscale, Perplexity
- DeepInfra, AI21, Voyage AI
- And 80+ more providers!
Intelligent Features
- Smart Provider Selection: Automatically detects and uses available providers
- Model Optimization: Selects best models based on availability and performance
- Command History: Track and replay previous commands
- Rich Interactive UI: Beautiful terminal interface with Textual TUI
- Clipboard Integration: Copy or type commands directly
- Developer Mode: Extensible framework for custom actions
- Simple Mode: Streamlined one-shot command generation
- Smart Caching: 1-hour TTL for providers and models
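The caching behavior described above (a 1-hour TTL for provider data) can be sketched as a minimal time-to-live cache. This is an illustrative sketch only, not qq's actual implementation; the `TTLCache` class name and structure are assumptions:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: evict and miss
            return default
        return value

# Providers cached for 1 hour, per the feature list above
provider_cache = TTLCache(ttl_seconds=3600)
provider_cache.set("providers", ["Ollama", "LM Studio"])
```

Using a monotonic clock avoids expiry glitches when the system wall clock is adjusted.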
Installation
From PyPI (Stable)
pip install qq
From Test PyPI (Latest Features)
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ qq2
From Source (Development)
git clone https://github.com/yourusername/quickquestion.git
cd quickquestion
pip install -e .
Quick Start
Basic Usage
# Get a command suggestion
qq "find all large files over 100MB"
# Simple mode - instant command (no UI)
qq --simple "kill process on port 8080"
# Type command directly to terminal
qq --simple-type "list docker containers"
Configuration
# Interactive settings (Rich UI)
qq --settings
# Advanced configuration (Textual TUI)
qq --config
# View command history
qq --history
# Developer mode
qq --dev
Configuration Options
Interactive Settings (qq --settings)
Navigate with arrow keys through:
- Default Provider - Choose from available providers
- Default Model - Select model for chosen provider
- Command Action - Run or Copy commands
- Simple Mode - Enable/disable streamlined mode
- Simple Mode Action - Copy or Type behavior
Advanced Config (qq --config)
Beautiful Textual TUI with tabs:
- Quick Setup - Same as --settings, but in a modern UI
- Providers - Browse and configure 100+ providers
- Settings - General application settings
- About - Version and documentation
Settings are persisted in ~/.qq_settings.json
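Persisting settings as JSON in the user's home directory might look like the following sketch. The key names (`default_provider`, `command_action`) are illustrative assumptions, not qq's actual schema:

```python
import json
from pathlib import Path

SETTINGS_PATH = Path.home() / ".qq_settings.json"

def load_settings(path: Path = SETTINGS_PATH) -> dict:
    """Return saved settings, or defaults if no settings file exists yet."""
    if path.exists():
        return json.loads(path.read_text())
    return {"default_provider": None, "command_action": "Run"}

def save_settings(settings: dict, path: Path = SETTINGS_PATH) -> None:
    """Write settings as pretty-printed JSON."""
    path.write_text(json.dumps(settings, indent=2))
```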
Provider Setup
Local Providers (No API Key Required)
Ollama
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama2
# qq will auto-detect Ollama on port 11434
qq "your question"
LM Studio
- Download from lmstudio.ai
- Load any GGUF model
- Start local server (port 1234)
- qq auto-detects LM Studio
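Auto-detection of local providers presumably amounts to probing their default ports (11434 for Ollama, 1234 for LM Studio). A minimal sketch under that assumption; the function names are illustrative, not qq's API:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports from the provider list above
LOCAL_PROVIDERS = {"Ollama": 11434, "LM Studio": 1234}

def detect_local_providers() -> list:
    """Names of local providers currently accepting connections."""
    return [name for name, port in LOCAL_PROVIDERS.items()
            if port_open("127.0.0.1", port)]
```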
Cloud Providers
OpenAI
export OPENAI_API_KEY="sk-..."
qq "your question"
Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
qq "your question"
Other Providers
qq supports 100+ providers through LiteLLM. Set the appropriate environment variable:
export GROQ_API_KEY="..."
export XAI_API_KEY="..." # For Grok
export GEMINI_API_KEY="..."
export COHERE_API_KEY="..."
# etc.
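Which cloud providers are usable can be inferred from which API-key variables are set. The mapping below is an illustrative sketch based on the variables listed above, not qq's internal table:

```python
import os

# Environment variables from the setup instructions above
PROVIDER_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Grok (xAI)": "XAI_API_KEY",
    "Google Gemini": "GEMINI_API_KEY",
    "Cohere": "COHERE_API_KEY",
}

def configured_cloud_providers() -> list:
    """Providers whose API-key variable is set to a non-empty value."""
    return [name for name, var in PROVIDER_ENV_VARS.items()
            if os.environ.get(var)]
```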
Usage Examples
Command Generation
# File operations
qq "find files modified today"
qq "compress all images in current directory"
# System management
qq "show memory usage by process"
qq "find what's using port 3000"
# Git operations
qq "undo last commit keeping changes"
qq "show commits by author in last week"
# Docker/Kubernetes
qq "remove all stopped containers"
qq "get pod logs from last hour"
Simple Mode (No UI)
# Copy to clipboard
qq --simple-copy "create python virtual environment"
# Copied: python -m venv venv
# Type to terminal
qq --simple-type "activate virtual environment"
# source venv/bin/activate [appears in terminal]
Developer Mode
qq --dev
# Access specialized developer actions and workflows
Advanced Features
Custom Developer Actions
Create ~/QuickQuestion/CustomDevActions/my_action.py:
from quickquestion.dev_actions.base import DevAction

class MyAction(DevAction):
    @property
    def name(self) -> str:
        return "My Custom Action"

    @property
    def description(self) -> str:
        return "Does something special"

    def execute(self) -> bool:
        self.console.print("[green]Executing...[/green]")
        # Your logic here
        return True
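How qq discovers these files is not documented here; one plausible loader sketch (the function and variable names are assumptions) imports every `.py` file found in the custom-actions directory:

```python
import importlib.util
import sys
from pathlib import Path

ACTIONS_DIR = Path.home() / "QuickQuestion" / "CustomDevActions"

def load_custom_actions(directory: Path = ACTIONS_DIR) -> list:
    """Import every .py file in the directory; return the loaded modules."""
    modules = []
    for path in sorted(directory.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        sys.modules[path.stem] = module  # register before executing
        spec.loader.exec_module(module)
        modules.append(module)
    return modules
```

A real implementation would then collect the `DevAction` subclasses defined in each loaded module.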
Performance Optimizations
- Async Provider Detection: Parallel checking for fastest startup
- Smart Caching: 1-hour TTL for providers, 30-second for other data
- Lazy Loading: Deferred initialization in simple mode
- Model Prioritization: Automatic selection of optimal models
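The parallel provider check above could be sketched with asyncio: probe all candidate ports concurrently and keep whichever answer. This is illustrative, not qq's code:

```python
import asyncio

async def check_port(name: str, port: int, timeout: float = 0.5):
    """Return name if 127.0.0.1:port accepts a connection in time, else None."""
    try:
        _, writer = await asyncio.wait_for(
            asyncio.open_connection("127.0.0.1", port), timeout)
        writer.close()
        return name
    except (OSError, asyncio.TimeoutError):
        return None

async def detect_providers(candidates: dict) -> list:
    """Probe every candidate port in parallel; return responsive providers."""
    results = await asyncio.gather(
        *(check_port(name, port) for name, port in candidates.items()))
    return [name for name in results if name]

# Example: check Ollama and LM Studio concurrently
# asyncio.run(detect_providers({"Ollama": 11434, "LM Studio": 1234}))
```

Probing in parallel means startup latency is bounded by the slowest single probe rather than the sum of all probes.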
Debugging
# Enable debug output
qq --debug "your question"
# Clear provider cache
qq --clear-cache
File Locations
- ~/.qq_settings.json - User preferences
- ~/.qq_history.json - Command history (last 100 entries)
- ~/.qq_cache.json - Provider and model cache
- ~/QuickQuestion/CustomDevActions/ - Custom developer actions
Troubleshooting
Provider Not Detected
# Clear cache and re-detect
qq --clear-cache
qq --settings # Reconfigure
API Key Issues
# Verify environment variable
echo $OPENAI_API_KEY
# Set in shell profile
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
SSL Certificate Errors
# macOS-specific fix
export CERT_PATH=$(python -m certifi)
export SSL_CERT_FILE="$CERT_PATH"
CI/CD & Deployment
GitHub Actions / Gitea Actions
The project includes automated workflows for:
- Testing on push/PR
- Publishing to PyPI on version tags
- Separate Test PyPI (qq2) and Production PyPI (qq) releases
Manual Deployment
# Build
python -m build
# Test locally
pip install dist/qq-*.whl
# Upload to PyPI
twine upload dist/*
Architecture
quickquestion/
├── qq.py                  # Main entry point and CLI
├── llm_lite_provider.py   # LiteLLM integration (100+ providers)
├── settings_manager.py    # Configuration management
├── ui_library.py          # Rich terminal UI components
├── cache.py               # TTL-based caching system
├── provider_registry.py   # Provider catalog and metadata
├── config_app.py          # Textual TUI for configuration
└── dev_actions/           # Developer mode actions
What's New in v0.2.0
- LiteLLM Integration: Support for 100+ LLM providers
- Provider Registry: Organized catalog of all providers
- Textual TUI: Modern configuration interface (--config)
- GPT-5 Support: Compatible with latest OpenAI models
- Enhanced Caching: Improved performance and reliability
- CI/CD Pipeline: Automated testing and deployment
- Bug Fixes: Provider persistence, model selection, and more
Roadmap
- Web UI for configuration
- Plugin system for extensions
- Multi-command workflows
- Command explanation mode
- Integration with shell history
- Homebrew formula
- Docker image
- VSCode extension
License
Proprietary - All rights reserved. See LICENSE file.
Support & Contact
- Bug Reports: GitHub Issues
- Feature Requests: qq@southbrucke.com
- General Support: support@southbrucke.com
- Author: Cristian Vyhmeister (cv@southbrucke.com)
Acknowledgments
- Built with LiteLLM for universal LLM support
- UI powered by Rich and Textual
- Thanks to all contributors and users!
Quick Question - Your AI-powered command line companion