
CLI wrappers for Gemini, Claude, OpenAI and Qwen LLMs

ask-llm — LLM CLI Wrappers

A collection of lightweight Python wrappers for calling different LLM providers from the command line or from other scripts.

Providers

| CLI Command | Provider                  | API Key env var     | Model env var  |
|-------------|---------------------------|---------------------|----------------|
| ask-gemini  | Google Gemini             | GEMINI_API_KEY      | GEMINI_MODEL   |
| ask-claude  | Anthropic Claude          | ANTHROPIC_API_KEY   | CLAUDE_MODEL   |
| ask-openai  | OpenAI                    | OPENAI_API_KEY      | OPENAI_MODEL   |
| ask-qwen    | Alibaba Qwen (DashScope)  | DASHSCOPE_API_KEY   |                |

Installation

pip install ask-llm-help

For development:

pip install -e .

Configuration

Copy the example env file and fill in your API keys:

cp .env.example .env
# Edit .env with your actual API keys
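The README doesn't say how the `.env` file is loaded at runtime (python-dotenv is the usual choice). As an illustrative sketch only, a minimal loader with the same semantics might look like this; `load_env` is a hypothetical helper, not part of the package:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments skipped.
    Variables already present in the environment take precedence over the file."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env file: rely on the ambient environment
```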

CLI Usage

ask-gemini "Explain photosynthesis" --system "You are a biology teacher"
ask-claude "Explain photosynthesis" --temp 0.3
ask-openai "Explain photosynthesis" --model gpt-4o
ask-qwen "Explain photosynthesis"

All CLI commands accept --system, --model, and --temp flags, except ask-qwen, which takes positional arguments instead.
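The table above pairs each command with a model env var, and the CLI also takes a --model flag; a plausible precedence (assumed here, not stated by the package) is flag first, then env var, then a built-in default. A sketch of that resolution, with the default model name purely illustrative:

```python
import os

def resolve_model(cli_model=None, env_var="OPENAI_MODEL", default="gpt-4o-mini"):
    """Assumed precedence: explicit --model flag, then the provider's model
    env var, then a built-in default (the name here is illustrative)."""
    return cli_model or os.environ.get(env_var) or default
```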

Import Usage

from ask_llm.ask_gemini import ask_gemini
from ask_llm.ask_claude import ask_claude
from ask_llm.ask_openai import ask_openai
from ask_llm.ask_qwen import ask_qwen_text  # non-streaming, returns str | None

result = ask_gemini("Summarise this text", system_instruction="Be concise.")
result = ask_qwen_text("Summarise this text", enable_thinking=True)  # thinking off by default
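Since all wrappers return None on failure (errors go to the logging module), scripts should guard the result before using it. A small hypothetical helper, where ask_fn is any of the imported wrappers:

```python
def ask_or_raise(ask_fn, prompt, **kwargs):
    """Call any ask_llm wrapper and turn its None-on-failure contract
    into an exception, which is often more convenient in scripts."""
    result = ask_fn(prompt, **kwargs)
    if result is None:
        raise RuntimeError("LLM call failed; check the logs")
    return result
```

For example: `summary = ask_or_raise(ask_gemini, "Summarise this text", system_instruction="Be concise.")`.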

Shared function signature (Gemini, Claude, OpenAI)

ask_<provider>(
    prompt: str,
    system_instruction: str | None = None,
    model: str | None = None,
    temperature: float = 0.7,
    max_tokens: int = 4096,
    json_mode: bool = False,       # not available on Claude
) -> str | None
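Because json_mode still returns the reply as a raw string (or None on failure), callers have to parse it themselves. A hypothetical convenience wrapper, keeping in mind that json_mode is not available on Claude:

```python
import json

def ask_json(ask_fn, prompt, **kwargs):
    """Call a wrapper with json_mode=True and parse the reply.
    Returns None if the call failed or the reply isn't valid JSON.
    Not usable with ask_claude, which lacks json_mode."""
    raw = ask_fn(prompt, json_mode=True, **kwargs)
    if raw is None:
        return None
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None
```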

Qwen — two functions

  • ask_qwen_text(...) — non-streaming, returns str | None. Use in scripts. enable_thinking defaults to False to avoid being billed for thinking tokens that are never returned.
  • ask_qwen(...) — streaming, prints to stdout. Use from CLI. enable_thinking defaults to True to show reasoning.
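If the streaming ask_qwen is the only variant that fits your needs but you still want the text in a variable, its stdout can be captured; for scripts, though, ask_qwen_text is the intended API. A sketch, where stream_fn stands in for ask_qwen:

```python
import contextlib
import io

def capture_stream(stream_fn, prompt):
    """Run a function that prints to stdout (like ask_qwen) and
    return everything it printed as a single string."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        stream_fn(prompt)
    return buf.getvalue()
```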

Notes

  • All wrappers return None on failure and log errors via the standard logging module.
  • HTTP requests use a 60-second timeout (120 seconds for Qwen streaming).
  • Gemini passes the API key as an x-goog-api-key header (not in the URL) for security.
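The three notes above describe one contract: key in a header, fixed timeout, and log-then-return-None on failure. A self-contained sketch of that contract (not the package's actual code; the stdlib urllib is used here, whatever HTTP client the package really uses):

```python
import json
import logging
import urllib.error
import urllib.request

log = logging.getLogger("ask_llm")

def post_json(url, payload, api_key, timeout=60):
    """POST a JSON payload with the API key in a header (as Gemini's
    x-goog-api-key), enforce a timeout, and log + return None on any failure."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-goog-api-key": api_key},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (urllib.error.URLError, TimeoutError, ValueError) as exc:
        log.error("request failed: %s", exc)
        return None
```

Keeping the key out of the URL matters because URLs tend to end up in proxy and server logs, while headers usually do not.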
