llm-blanket
Unified Python library for LLM APIs: OpenAI, Anthropic, Gemini, xAI (Grok), Groq, and custom OpenAI-compatible endpoints.
- Single interface: specify a model, get an LLM instance, call `invoke(messages)`.
- Per-model provider: specify which provider backs each model via `model_provider`; if not set, the provider is inferred from the model name (e.g. `gpt-4o` → OpenAI, `claude-*` → Anthropic).
- Per-provider overrides: override the base URL and API key per provider in config; if not set, default URLs and API keys from `.env` are used.
Install
```
pip install llm-blanket
```
Optional provider dependencies (install only what you use):
pip install "llm-blanket[openai]" # OpenAI + Groq + xAI + custom (OpenAI-compatible)
pip install "llm-blanket[anthropic]" # Anthropic Claude
pip install "llm-blanket[gemini]" # Google Gemini
pip install "llm-blanket[all]" # All providers
Examples
Runnable scripts are in the examples/ directory:
- examples/quickstart.py – create an LLM and call `invoke()` with a user message.
- examples/streaming.py – stream tokens with `invoke_stream()`.
- examples/config_and_url_override.py – `LLMConfig`, `base_urls`, `base_url`, and explicit `provider`.
Run from the repo root (set the appropriate API key first):
```
OPENAI_API_KEY=sk-... python examples/quickstart.py
```
Quick start
```python
from llm_blanket import get_llm, Message

# Provider inferred from model name
llm = get_llm("gpt-4o")

# Option 1: system and user as named arguments
resp = llm.invoke(system="You are helpful.", user="Hello!")
print(resp.content)

# Option 2: messages list (Message objects or OpenAI-style dicts)
resp = llm.invoke([Message("user", "Hi")])
resp = llm([{"role": "user", "content": "Hi"}])

# Option 3: common parameters (temperature, max_tokens, etc.) are passed through to the provider
resp = llm.invoke(user="Hello!", temperature=0.7, max_tokens=256)

# Streaming: same signature as invoke(), yields StreamChunk (content delta, optional finish_reason)
for chunk in llm.invoke_stream(user="Hello!", temperature=0.7):
    print(chunk.content, end="", flush=True)
print()
```
Streaming
Use invoke_stream() with the same arguments as invoke(). It yields StreamChunk objects (.content is the text delta; .finish_reason is set on the final chunk when the provider supplies it):
```python
from llm_blanket import get_llm

llm = get_llm("gpt-4o-mini")
for chunk in llm.invoke_stream(system="You are concise.", user="Count to 5."):
    print(chunk.content, end="", flush=True)
    if chunk.finish_reason:
        print(f"\n[Done: {chunk.finish_reason}]")
```
Streaming is supported for OpenAI (and OpenAI-compatible), Anthropic, and Gemini.
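If you need the full text after streaming, a minimal sketch of accumulating the deltas yourself (the prompt and model name are just examples):

```python
from llm_blanket import get_llm

llm = get_llm("gpt-4o-mini")

# Collect the streamed deltas into the complete response text.
parts = []
for chunk in llm.invoke_stream(user="Write a haiku about rain."):
    if chunk.content:  # a chunk may carry only finish_reason (assumption)
        parts.append(chunk.content)
print("".join(parts))
```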
Configuration
1. Per-model provider
Specify which provider backs each model with model_provider (model name → provider). If a model is not in the map, the provider is inferred from the model name (e.g. gpt-* → openai, claude-* → anthropic). Groq models like llama-3-70b-8192 have no unique prefix, so put them in the map or pass provider="groq" for that call.
```python
from llm_blanket import get_llm, LLMConfig

config = LLMConfig(
    model_provider={
        "llama-3-70b-8192": "groq",
        "mixtral-8x7b-32768": "groq",
        "my-custom-model": "custom",
    }
)

llm = get_llm("llama-3-70b-8192", config=config)  # uses groq
llm = get_llm("gpt-4o", config=config)            # inferred openai
```
2. Per-provider URL and API key
For each provider you can override the base URL and API key. If you don't, the library uses its default base URL for that provider and the API key from the environment.
Environment variables (default API keys):
| Provider | Environment variable |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Gemini | GOOGLE_API_KEY |
| xAI | XAI_API_KEY |
| Groq | GROQ_API_KEY |
| Custom | OPENAI_API_KEY (or set in config) |
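A minimal sketch, assuming keys are read from the process environment as described above (the key values are placeholders):

```python
import os
from llm_blanket import get_llm

# Same effect as exporting the variables in your shell or .env file;
# set only the keys for the providers you actually call.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
os.environ.setdefault("GROQ_API_KEY", "gsk_...")

llm = get_llm("gpt-4o")  # picks up OPENAI_API_KEY from the environment
```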
Override in config:
```python
config = LLMConfig(
    model_provider={"llama-3-70b-8192": "groq"},
    base_urls={
        "openai": "https://my-openai-proxy.com/v1",
        "groq": "https://api.groq.com/openai/v1",
    },
    api_keys={
        "openai": "sk-openai-...",
        "anthropic": "sk-ant-...",
    },
)

openai_llm = get_llm("gpt-4o", config=config)
groq_llm = get_llm("llama-3-70b-8192", config=config)
anthropic_llm = get_llm("claude-3-5-sonnet-20241022", config=config)
```
Single-call overrides: `provider`, `api_key`, and `base_url` can still be passed explicitly to override the config for a single call.
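A minimal sketch of a per-call override, assuming get_llm() accepts these keyword arguments (as examples/config_and_url_override.py suggests); the URL and key are placeholders:

```python
from llm_blanket import get_llm

# provider / base_url / api_key as assumed per-call keyword overrides;
# config and environment defaults are untouched for other calls.
llm = get_llm(
    "llama-3-70b-8192",
    provider="groq",
    base_url="https://api.groq.com/openai/v1",
    api_key="gsk_...",
)
print(llm.invoke(user="Hello!").content)
```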
Supported models / providers
| Provider | Inferred from | Notes |
|---|---|---|
| OpenAI | `gpt-*`, `o1-*`, `o3-*` | Default base: https://api.openai.com/v1 |
| Anthropic | `claude-*` | Uses Anthropic Messages API |
| Gemini | `gemini-*` | Uses Google GenAI SDK |
| xAI | `grok*`, `grok-*` | OpenAI-compatible |
| Groq | Set in `model_provider` or `provider="groq"` | Models like `llama-3-70b-8192`; OpenAI-compatible |
| Custom | Set in `model_provider` or `provider="custom"` and `base_url` | Any OpenAI-compatible endpoint |
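For the custom provider, a minimal sketch of pointing the library at an OpenAI-compatible server, assuming the custom entry is configured via `model_provider`, `base_urls`, and `api_keys` as above; the model name, URL, and key are placeholders:

```python
from llm_blanket import get_llm, LLMConfig

# Placeholder values for a local OpenAI-compatible server; replace with
# your own model name, endpoint URL, and key.
config = LLMConfig(
    model_provider={"my-local-model": "custom"},
    base_urls={"custom": "http://localhost:8000/v1"},
    api_keys={"custom": "not-needed"},
)

llm = get_llm("my-local-model", config=config)
print(llm.invoke(user="Hello!").content)
```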
Extensibility
- Unified response: `invoke()` returns an `LLMResponse` with `content`, `model`, `usage`, `finish_reason`, and optional `raw` (provider-specific object) and `tool_calls`; see the sketch below.
- Provider-specific options: pass extra kwargs to `invoke()` (e.g. `temperature`, `max_tokens`); they are forwarded to the underlying API. Use `LLMConfig(extra={...})` for client-level options.
- Custom backends: implement `BaseLLM` (see `llm_blanket.base`) and register or construct your backend explicitly; the factory is focused on the built-in providers.
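A minimal sketch of reading the unified response fields listed above; the exact shapes of `usage` and `raw` vary by provider:

```python
from llm_blanket import get_llm

llm = get_llm("gpt-4o-mini")
resp = llm.invoke(user="Hello!", temperature=0.2)

# Unified LLMResponse fields (names as listed above).
print(resp.content)        # generated text
print(resp.model)          # model that produced the response
print(resp.usage)          # token usage; shape is provider-dependent
print(resp.finish_reason)  # why generation stopped
print(resp.raw)            # optional provider-specific response object
print(resp.tool_calls)     # optional tool calls, when the provider returns them
```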
Example: multiple providers with per-model and per-provider config
```python
from llm_blanket import get_llm, LLMConfig, Message

# Per-model provider + per-provider URL (and optionally api_keys; else from .env)
config = LLMConfig(
    model_provider={"llama-3-70b-8192": "groq", "mixtral-8x7b-32768": "groq"},
    base_urls={
        "openai": "https://my-proxy.com/openai/v1",
        "groq": "https://api.groq.com/openai/v1",
    },
)

openai_llm = get_llm("gpt-4o-mini", config=config)
groq_llm = get_llm("llama-3-70b-8192", config=config)

for llm in [openai_llm, groq_llm]:
    r = llm.invoke([Message("user", "Say hi in one word.")])
    print(f"{llm.provider}: {r.content}")
```
License
MIT