
A command-line interface for various LLM providers (Claude, ChatGPT, DeepSeek)


DoQue - Command Line Interface for Multiple LLM Providers

A powerful command-line interface for interacting with various Large Language Model (LLM) providers including Claude (Anthropic), ChatGPT (OpenAI), and DeepSeek.

What is DoQue?

DoQue is a universal CLI tool that allows you to send queries to different AI providers while automatically including files, directories, and web content for context. It handles the complexity of file processing, validation, and formatting so you can focus on getting answers from AI about your code, documents, or any other content.

Key Capabilities

  • Multi-Provider Support: Switch between Claude, OpenAI, and DeepSeek with a simple flag
  • Smart File Processing: Automatically analyze single files, multiple files, or entire directory structures
  • Web Content Analysis: Fetch and analyze content from URLs, GitHub repositories, APIs, and code snippets
  • Intelligent Validation: Built-in limits and warnings to prevent excessive token usage and costs
  • Cross-Platform: Native Unicode support on Windows, macOS, and Linux

Installation

pip install doque

Quick Setup

Set your API key for at least one provider:

# Choose one or more
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export DEEPSEEK_API_KEY="your-deepseek-api-key"
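
DoQue presumably resolves these variables at startup. A minimal sketch of how such a lookup could work; the helper name and mapping are illustrative, not DoQue's actual API:

```python
import os

# Environment variable names match those documented above.
PROVIDER_KEYS = {
    "claude": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the providers whose API key is present and non-empty."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

print(configured_providers({"OPENAI_API_KEY": "sk-test"}))
```

If no key is set for the chosen provider, the tool can fail fast with a clear message instead of a cryptic HTTP 401 later.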

Basic Usage

Simple Questions

# No quotes needed for simple concepts
doq What is machine learning?
doq Explain recursion
doq Define polymorphism

File Analysis

# Single files
doq Review this code script.py
doq Explain this function utils.py
doq Debug this file error.log

# Multiple files
doq Compare these implementations old.py new.py
doq Analyze these modules auth.py user.py

Directory Analysis

# Current directory overview
doq Review project structure .

# Specific directory
doq Analyze data files ./data
doq Review source code ./src/*

# Recursive analysis with patterns
doq "Check all Python code" ./**/*.py

Choose Your AI Provider

# Claude (default) - Great for code analysis
doq Explain this algorithm algorithm.py

# OpenAI - Excellent for explanations
doq --llm=openai Debug this code buggy.py

# DeepSeek - Good for optimization
doq --llm=deepseek Optimize this function slow_function.py

Core Features

Request Validation

DoQue includes intelligent validation to prevent excessive costs:

Default Limits:

  • Maximum 10 files per request
  • Maximum 10,000 lines per text file
  • Maximum 10KB per binary file
  • Maximum 1MB total request size
  • Maximum 8 directory levels
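
The limits above amount to a pre-flight check before any request is sent. A rough sketch, with illustrative names rather than DoQue's internals:

```python
# Default limits as documented above.
LIMITS = {
    "max_files": 10,
    "max_text_lines": 10_000,
    "max_binary_size_kb": 10,
    "max_total_size_mb": 1,
    "max_directory_depth": 8,
}

def check_request(num_files, total_size_bytes, limits=LIMITS):
    """Return a list of human-readable violations; empty if the request passes."""
    problems = []
    if num_files > limits["max_files"]:
        problems.append(f"{num_files} files exceeds limit of {limits['max_files']}")
    if total_size_bytes > limits["max_total_size_mb"] * 1024 * 1024:
        problems.append("total request size exceeds limit")
    return problems

print(check_request(num_files=12, total_size_bytes=500_000))
```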

Smart Features:

  • Approximate token estimation with cost warnings (estimates may vary from actual usage)
  • Automatic detection of test files and redundant content
  • Interactive confirmation for large requests
  • Dry-run mode to preview before sending
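
Since the estimates are approximate, a common rule of thumb (not necessarily DoQue's exact method) is roughly one token per four characters of text, with a warning above a threshold:

```python
WARN_THRESHOLD = 20_000  # matches the default warn_token_threshold shown below

def estimate_tokens(text: str) -> int:
    """Crude chars/4 heuristic; real tokenizers will differ somewhat."""
    return max(1, len(text) // 4)

def should_warn(text: str, threshold: int = WARN_THRESHOLD) -> bool:
    return estimate_tokens(text) > threshold

sample = "def add(a, b):\n    return a + b\n"
print(estimate_tokens(sample))  # → 8
```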

Command Options

doq [OPTIONS] <query> [files...]

Available Options:

  • -h, --help: Show detailed help
  • -i: Interactive mode (confirm before sending)
  • --llm=PROVIDER: Choose provider (claude, openai, deepseek)
  • --dry-run: Preview request without sending
  • --doq-default-config: Generate configuration file

Essential Examples

# Get help
doq --help

# Preview large requests
doq --dry-run Review all code ./**

# Interactive mode for safety
doq -i Comprehensive analysis ./**

# Combine options
doq -i --llm=openai --dry-run Analyze project .

Configuration

Create ~/.doq-config.yaml to customize behavior:

# Default provider
default_provider: claude

# Validation limits
validation:
  max_files: 10
  max_text_lines: 10000
  max_binary_size_kb: 10
  max_total_size_mb: 1
  max_directory_depth: 8
  ignore_patterns:
    - "__pycache__"
    - ".git"
    - "node_modules"
    - "*.pyc"
    - "*.log"

# Cost control
cost_control:
  warn_token_threshold: 20000
  block_token_threshold: 50000
  show_cost_estimates: true

Generate default config:

doq --doq-default-config > ~/.doq-config.yaml
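
The ignore_patterns above are shell-style globs. A sketch of how they could be applied to candidate paths; DoQue's actual matching rules may differ:

```python
import fnmatch

# Default ignore patterns from the configuration above.
IGNORE_PATTERNS = ["__pycache__", ".git", "node_modules", "*.pyc", "*.log"]

def is_ignored(path: str, patterns=IGNORE_PATTERNS) -> bool:
    """True if any path component matches an ignore pattern."""
    return any(
        fnmatch.fnmatch(part, pat)
        for part in path.split("/")
        for pat in patterns
    )

candidates = ["src/app.py", "src/__pycache__/app.cpython-312.pyc", "debug.log"]
print([p for p in candidates if not is_ignored(p)])
```

Matching on every path component (not just the basename) is what lets a directory-level pattern like `__pycache__` exclude everything beneath it.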

Best Practices

  1. Start Small: Begin with specific files before analyzing entire directories
  2. Use Dry Run: Preview large requests with --dry-run
  3. Be Specific: Ask focused questions rather than broad "analyze everything"
  4. Monitor Tokens: Pay attention to validation warnings
  5. Configure Limits: Adjust settings based on your usage patterns

Troubleshooting

Common Issues:

  • Unicode on Windows: Use PowerShell instead of Command Prompt
  • Large token usage: Use --dry-run to preview and reduce scope
  • Too many files: Use more specific patterns or increase limits
  • API errors: Verify environment variables are set correctly

Debug Information: Use --dry-run to see files included, validation results, and token estimates.

Advanced Usage

Advanced File Processing

# Single file deep analysis
doq Find potential bugs in this code security.py

# Multiple files with context
doq Compare these implementations v1/processor.py v2/processor.py

# Mixed files and directories
doq Analyze the entire authentication system auth/ user.py session.py

URL and Web Content Analysis

# Web pages
doq Summarize this article https://example.com/article.html

# GitHub files
doq Explain this implementation https://raw.githubusercontent.com/user/repo/main/file.py

# API endpoints
doq Analyze this API response https://api.github.com/users/octocat

# Code snippets
doq What does this script do https://gist.githubusercontent.com/user/id/raw/script.js

Directory Processing Patterns

DoQue supports flexible directory patterns:

Pattern  | Description                        | Example
---------|------------------------------------|-------------------------------
.        | Current directory (non-recursive)  | doq Review structure .
./**     | Current directory, recursive       | doq Deep scan ./**
./src    | Specific subdirectory              | doq Review source ./src
./src/** | Subdirectory, recursive            | doq Deep source scan ./src/**
src/**   | Named directory, recursive         | doq Analyze source src/**

# Current directory (non-recursive)
doq Review project architecture .

# Recursive scanning
doq Deep analysis of all code ./**

# Specific patterns
doq Review all Python files recursively ./**/*.py

# Named directory recursive
doq Deep scan of source directory ./src/**

# Multi-directory analysis
doq Review entire backend ./src/** ./api/** ./models/**
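
The table above boils down to one distinction: a pattern containing `**` is traversed recursively from its root. A small illustrative helper (not part of DoQue) that classifies a path argument:

```python
def describe_pattern(pattern: str) -> dict:
    """Classify a DoQue-style path argument: root directory and recursion."""
    recursive = "**" in pattern
    root = pattern.split("**")[0].rstrip("/") or "."
    return {"pattern": pattern, "root": root, "recursive": recursive}

for p in [".", "./**", "./src", "./src/**", "src/**"]:
    print(describe_pattern(p))
```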

Provider-Specific Usage

# Claude for architecture analysis
doq --llm=claude Analyze the architecture of this system ./src/**

# OpenAI for documentation
doq --llm=openai Generate documentation for this API api.py

# DeepSeek for optimization
doq --llm=deepseek Find performance bottlenecks ./src/**

Interactive and Validation Modes

# Interactive mode
doq -i Comprehensive code review ./**

# Dry run preview
doq --dry-run Check what files will be included ./src/**

# Combined modes for safety
doq -i --dry-run --llm=claude Large project analysis ./**

International and Unicode Support

# Unicode queries (quotes required for special characters)
doq "Что такое машинное обучение?"              # Russian: "What is machine learning?"
doq "解释人工智能的基本概念"                     # Chinese: "Explain the basic concepts of AI"
doq "اشرح مفهوم الذكاء الاصطناعي"               # Arabic: "Explain the concept of artificial intelligence"

# Mixed language analysis
doq Analyze this multilingual codebase international_app/

Token Optimization Strategies

For large projects:

  1. Start Specific: Begin with individual files rather than entire directories
  2. Use Patterns: Target specific file types with ./**/*.py instead of ./**
  3. Preview First: Always use --dry-run for large requests
  4. Exclude Irrelevant: Configure ignore patterns for tests, logs, build files
  5. Interactive Mode: Use -i for requests with validation warnings
  6. Focus Queries: Ask specific questions rather than broad analysis requests

Environment Variables

# API Keys
export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENAI_API_KEY="your-openai-key"
export DEEPSEEK_API_KEY="your-deepseek-key"

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Submit a pull request

Development

DoQue supports Python 3.10+ (tested on 3.10-3.14).

Setup:

# Create virtual environment
python3.14 -m venv venv314
source venv314/bin/activate  # Windows: venv314\Scripts\activate

# Install dependencies
pip install -r requirements.txt
pip install pytest pytest-asyncio

# Run tests (137 should pass)
pytest tests/ -v

# Install in editable mode
pip install -e .

License

MIT License - see LICENSE file for details

Download files

Download the file for your platform.

Source Distribution

doque-0.2.0.tar.gz (57.4 kB)


Built Distribution


doque-0.2.0-py3-none-any.whl (34.6 kB)


File details

Details for the file doque-0.2.0.tar.gz.

File metadata

  • Download URL: doque-0.2.0.tar.gz
  • Size: 57.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.19

File hashes

Hashes for doque-0.2.0.tar.gz
  • SHA256: 8ce2e9a3e1f97cca1d4cd9c446d7f3fdb206baf679cd02b848afc661189ecf8e
  • MD5: 510bad7ae45bb9394f7363144678fbea
  • BLAKE2b-256: c13bcd4dbe57e6c4a2562e61a896c4f97102a79565253c4a79b8e43ed7a9d65d


File details

Details for the file doque-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: doque-0.2.0-py3-none-any.whl
  • Size: 34.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.19

File hashes

Hashes for doque-0.2.0-py3-none-any.whl
  • SHA256: 0e4616f90e45950b701b97613475183b9cef793da780608a9ac32d69f124977d
  • MD5: d980d6ad6a40eea57a20f688b409fb67
  • BLAKE2b-256: e66d63490bf0f80803dd2d58d7958a3f6f520a92103d440f4d903a168f0bb4c2

