CLI wrappers for Gemini, Claude, OpenAI and Qwen LLMs
ask-llm — LLM CLI Wrappers
A collection of lightweight Python wrappers for calling different LLM providers from the command line or from other scripts.
Providers
| CLI Command | Provider | API key env var | Model env var |
|---|---|---|---|
| `ask-gemini` | Google Gemini | `GEMINI_API_KEY` | `GEMINI_MODEL` |
| `ask-claude` | Anthropic Claude | `ANTHROPIC_API_KEY` | `CLAUDE_MODEL` |
| `ask-openai` | OpenAI | `OPENAI_API_KEY` | `OPENAI_MODEL` |
| `ask-qwen` | Alibaba Qwen (DashScope) | `DASHSCOPE_API_KEY` | — |
Installation
```bash
pip install ask-llm-help
```

For development:

```bash
pip install -e .
```
Configuration
Copy the example env file and fill in your API keys:
```bash
cp .env.example .env
# Edit .env with your actual API keys
```
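For reference, a minimal `.env` might look like the following. The variable names come from the providers table above; the key values are placeholders, and the model names are only illustrative:

```shell
GEMINI_API_KEY=your-gemini-key
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
DASHSCOPE_API_KEY=your-dashscope-key

# Optional model overrides (illustrative values)
GEMINI_MODEL=gemini-2.0-flash
OPENAI_MODEL=gpt-4o
```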
CLI Usage
```bash
ask-gemini "Explain photosynthesis" --system "You are a biology teacher"
ask-claude "Explain photosynthesis" --temp 0.3
ask-openai "Explain photosynthesis" --model gpt-4o
ask-qwen "Explain photosynthesis"
```
All CLI commands accept `--system`, `--model`, and `--temp` flags, except `ask-qwen`, which uses positional arguments only.
Import Usage
```python
from ask_llm.ask_gemini import ask_gemini
from ask_llm.ask_claude import ask_claude
from ask_llm.ask_openai import ask_openai
from ask_llm.ask_qwen import ask_qwen_text  # non-streaming, returns str | None

result = ask_gemini("Summarise this text", system_instruction="Be concise.")
result = ask_qwen_text("Summarise this text", enable_thinking=True)  # thinking off by default
```
Shared function signature (Gemini, Claude, OpenAI)
```python
ask_<provider>(
    prompt: str,
    system_instruction: str | None = None,
    model: str | None = None,
    temperature: float = 0.7,
    max_tokens: int = 4096,   # default varies by provider (4096–8192)
    json_mode: bool = False,  # not available on Claude
) -> str | None
```
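Because every wrapper returns `str | None`, callers can chain providers as fallbacks. A minimal sketch of that pattern follows; the `ask_first` helper is hypothetical (not part of the package), and stub functions stand in for the real wrappers so the example runs without API keys:

```python
from typing import Callable, Optional

def ask_first(prompt: str,
              providers: list[Callable[[str], Optional[str]]]) -> Optional[str]:
    """Try each provider in order; return the first non-None answer."""
    for ask in providers:
        result = ask(prompt)
        if result is not None:
            return result
    return None

# Stubs with the same (prompt) -> str | None shape as the real wrappers.
def flaky_provider(prompt: str) -> Optional[str]:
    return None  # simulates a failed request (the wrapper would log the error)

def working_provider(prompt: str) -> Optional[str]:
    return f"Answer to: {prompt}"

print(ask_first("Explain photosynthesis", [flaky_provider, working_provider]))
```

In real use the providers list would hold `ask_gemini`, `ask_claude`, etc. (wrapped in lambdas if you need to pass `model` or `temperature`).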
Qwen — two functions
- `ask_qwen_text(...)` — non-streaming, returns `str | None`. Use in scripts. `enable_thinking` defaults to `False` to avoid billing for thinking tokens that are never returned.
- `ask_qwen(...)` — streaming, prints to stdout. Use from the CLI. `enable_thinking` defaults to `True` to show reasoning.
Notes
- All wrappers return `None` on failure and log errors via the standard `logging` module.
- HTTP requests use a `timeout=60` s (120 s for Qwen streaming).
- Gemini passes the API key as an `x-goog-api-key` header (not in the URL) for security.