
CLI wrappers for Gemini, Claude, OpenAI and Qwen LLMs

Project description

ask-llm — LLM CLI Wrappers

A collection of lightweight Python wrappers for calling different LLM providers from the command line or from other scripts.

Providers

CLI Command   Provider                    API Key env var      Model env var
ask-gemini    Google Gemini               GEMINI_API_KEY       GEMINI_MODEL
ask-claude    Anthropic Claude            ANTHROPIC_API_KEY    CLAUDE_MODEL
ask-openai    OpenAI                      OPENAI_API_KEY       OPENAI_MODEL
ask-qwen      Alibaba Qwen (DashScope)    DASHSCOPE_API_KEY

Installation

pip install ask-llm-help

For development:

pip install -e .

Configuration

Copy the example env file and fill in your API keys:

cp .env.example .env
# Edit .env with your actual API keys
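
How the .env file gets loaded depends on your setup (python-dotenv is a common choice). Purely to illustrate the KEY=VALUE format, here is a hand-rolled loader — the variable name ASK_LLM_DEMO_KEY is made up for the demo:

```python
import os
import tempfile

def load_env(path: str) -> None:
    """Minimal .env loader: KEY=VALUE lines; '#' comments and blanks skipped.

    setdefault means values already present in the environment win.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a temporary file standing in for .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# comment line\nASK_LLM_DEMO_KEY=abc123\n")
    path = f.name
load_env(path)
print(os.environ["ASK_LLM_DEMO_KEY"])  # → abc123
```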

CLI Usage

ask-gemini "Explain photosynthesis" --system "You are a biology teacher"
ask-claude "Explain photosynthesis" --temp 0.3
ask-openai "Explain photosynthesis" --model gpt-4o
ask-qwen "Explain photosynthesis"

All CLI commands accept --system, --model, and --temp flags, except ask-qwen, which takes only positional arguments.

Import Usage

from ask_llm.ask_gemini import ask_gemini
from ask_llm.ask_claude import ask_claude
from ask_llm.ask_openai import ask_openai
from ask_llm.ask_qwen import ask_qwen_text  # non-streaming, returns str | None

result = ask_gemini("Summarise this text", system_instruction="Be concise.")
result = ask_qwen_text("Summarise this text", enable_thinking=True)  # thinking off by default

Shared function signature (Gemini, Claude, OpenAI)

ask_<provider>(
    prompt: str,
    system_instruction: str | None = None,
    model: str | None = None,
    temperature: float = 0.7,
    max_tokens: int = 4096,
    json_mode: bool = False,       # not available on Claude
) -> str | None

Qwen — two functions

  • ask_qwen_text(...) — non-streaming; returns str | None. Use in scripts. enable_thinking defaults to False to avoid being billed for thinking tokens that are never returned.
  • ask_qwen(...) — streaming; prints to stdout. Use from the CLI. enable_thinking defaults to True so the model's reasoning is shown.

Notes

  • All wrappers return None on failure and log errors via the standard logging module.
  • HTTP requests use a 60 s timeout (120 s for Qwen streaming).
  • Gemini passes the API key as an x-goog-api-key header (not in the URL) for security.
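
Because failures surface through the logging module rather than exceptions, scripts should enable logging so wrapper errors are not silently swallowed. A generic setup (the wrappers' logger names are not documented here, so this configures the root logger):

```python
import logging

# Show warnings and errors from all loggers, including the ask_llm wrappers.
logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(name)s: %(message)s")

# Illustration: this is how a wrapper's failure message would reach stderr.
logging.getLogger("demo").error("example error message")
```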



Download files

Download the file for your platform.

Source Distribution

ask_llm_help-0.3.2.tar.gz (9.2 kB view details)


Built Distribution


ask_llm_help-0.3.2-py3-none-any.whl (12.1 kB view details)


File details

Details for the file ask_llm_help-0.3.2.tar.gz.

File metadata

  • Download URL: ask_llm_help-0.3.2.tar.gz
  • Size: 9.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.5

File hashes

Hashes for ask_llm_help-0.3.2.tar.gz
Algorithm Hash digest
SHA256 cfd0e5747cbe7143c0eac35897cbadd098ca43a0e62adc4d0eb1d145a9383b77
MD5 e61275714090467a67d63390d78f64f9
BLAKE2b-256 358482f9b5d46f5abae24945a1392398df1a42fb62006cad0a40a16e8ef87c79

See more details on using hashes here.

File details

Details for the file ask_llm_help-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: ask_llm_help-0.3.2-py3-none-any.whl
  • Size: 12.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.5

File hashes

Hashes for ask_llm_help-0.3.2-py3-none-any.whl
Algorithm Hash digest
SHA256 b7898db93d32370946211f2024376ece49ee7b7b3002894db3a6f55c33821831
MD5 64b32831d4dbd613812cd6c7292ee127
BLAKE2b-256 e7724c7cb3ca552212245c166112d1072ff6ca4124e3e82878d2a9c03552b8b7

