
llm-blanket

Unified Python library for LLM APIs: OpenAI, Anthropic, Gemini, xAI (Grok), Groq, and custom OpenAI-compatible endpoints.

  • Single interface: specify a model, get an LLM instance, call invoke(messages).
  • Provider inferred from model name (e.g. gpt-4o → OpenAI, claude-3-5-sonnet → Anthropic) or set explicitly.
  • Base URL overrides via config or base_url / base_urls for custom or proxy endpoints.
  • API keys from environment (LangChain/AutoGen-style) or passed in config.

Install

pip install llm-blanket

Optional provider dependencies (install only what you use):

pip install "llm-blanket[openai]"       # OpenAI + Groq + xAI + custom (OpenAI-compatible)
pip install "llm-blanket[anthropic]"    # Anthropic Claude
pip install "llm-blanket[gemini]"       # Google Gemini
pip install "llm-blanket[all]"          # All providers

Examples

Runnable scripts live in the examples/ directory. Run them from the repo root, setting the appropriate API key first:

OPENAI_API_KEY=sk-... python examples/quickstart.py

Quick start

from llm_blanket import get_llm, Message

# Provider inferred from model name
llm = get_llm("gpt-4o")

# Option 1: system and user as named arguments
resp = llm.invoke(system="You are helpful.", user="Hello!")
print(resp.content)

# Option 2: messages list (Message objects or OpenAI-style dicts)
resp = llm.invoke([Message("user", "Hi")])
resp = llm([{"role": "user", "content": "Hi"}])

# Option 3: common parameters (temperature, max_tokens, etc.) are passed through to the provider
resp = llm.invoke(user="Hello!", temperature=0.7, max_tokens=256)

# Streaming: same signature as invoke(), yields StreamChunk (content delta, optional finish_reason)
for chunk in llm.invoke_stream(user="Hello!", temperature=0.7):
    print(chunk.content, end="", flush=True)
print()

Streaming

Use invoke_stream() with the same arguments as invoke(). It yields StreamChunk objects (.content is the text delta; .finish_reason is set on the final chunk when the provider supplies it):

from llm_blanket import get_llm

llm = get_llm("gpt-4o-mini")
for chunk in llm.invoke_stream(system="You are concise.", user="Count to 5."):
    print(chunk.content, end="", flush=True)
    if chunk.finish_reason:
        print(f"\n[Done: {chunk.finish_reason}]")

Streaming is supported for OpenAI (and OpenAI-compatible), Anthropic, and Gemini.
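When you want the full response text rather than incremental output, the deltas can simply be concatenated. A minimal sketch, using a local stand-in for StreamChunk that mirrors the fields described above (`.content` delta, optional `.finish_reason`) — the real class comes from llm_blanket:

```python
from dataclasses import dataclass
from typing import Iterable, Optional


# Local stand-in mirroring the documented StreamChunk shape; the real
# class is provided by llm_blanket and may carry additional fields.
@dataclass
class StreamChunk:
    content: str
    finish_reason: Optional[str] = None


def collect_stream(chunks: Iterable[StreamChunk]) -> str:
    """Accumulate content deltas from a stream into the full response text."""
    return "".join(chunk.content for chunk in chunks)
```

With the real library this would look like `text = collect_stream(llm.invoke_stream(user="Hi"))`, assuming invoke_stream() yields chunks as documented.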

Configuration

API keys

By default, API keys are read from the environment. Use standard names so you can reuse .env or shell exports:

Provider    Environment variable
OpenAI      OPENAI_API_KEY
Anthropic   ANTHROPIC_API_KEY
Gemini      GOOGLE_API_KEY
xAI         XAI_API_KEY
Groq        GROQ_API_KEY
Custom      OPENAI_API_KEY (or pass explicitly)
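The table above amounts to a simple provider-to-variable lookup. As an illustration only (key_from_env is a hypothetical helper, not part of llm-blanket's public API):

```python
import os

# Env-variable names per provider, as documented in the table above.
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
    "xai": "XAI_API_KEY",
    "groq": "GROQ_API_KEY",
    "custom": "OPENAI_API_KEY",  # or pass the key explicitly
}


def key_from_env(provider: str):
    """Return the API key for a provider from the environment, or None."""
    return os.environ.get(ENV_VARS.get(provider, ""))
```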

Override in code:

from llm_blanket import get_llm, LLMConfig

config = LLMConfig(api_key="sk-...")
llm = get_llm("gpt-4o", config=config)

# Or one-off
llm = get_llm("gpt-4o", api_key="sk-...")

Base URL and URL mapping

Override the base URL for a given client (e.g. custom or proxy):

# Single override for this client
llm = get_llm("gpt-4o", base_url="https://my-gateway.com/v1")

# Or via config with a mapping (e.g. per provider or per model)
config = LLMConfig(
    base_urls={
        "openai": "https://my-openai-proxy.com/v1",
        "gpt-4o": "https://special-endpoint.com/v1",
    }
)
llm = get_llm("gpt-4o", config=config)

Resolution order: base_url (direct) > base_urls[model] > base_urls[provider] > default URL for that provider.
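That resolution order can be sketched as a small function. This is an illustrative reconstruction of the documented precedence, not the library's internal code (resolve_base_url and DEFAULT_URLS are hypothetical names):

```python
# Default endpoints per provider; only OpenAI's is stated in this README.
DEFAULT_URLS = {"openai": "https://api.openai.com/v1"}


def resolve_base_url(provider, model, base_url=None, base_urls=None):
    """Apply the documented precedence:
    base_url (direct) > base_urls[model] > base_urls[provider] > provider default.
    """
    base_urls = base_urls or {}
    if base_url:
        return base_url
    if model in base_urls:
        return base_urls[model]
    if provider in base_urls:
        return base_urls[provider]
    return DEFAULT_URLS.get(provider)
```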

Forcing provider

Use when the model name doesn’t indicate the provider (e.g. Groq’s llama-3-70b-8192):

llm = get_llm("llama-3-70b-8192", provider="groq")

Supported models / providers

Provider    Inferred from                        Notes
OpenAI      gpt-*, o1-*, o3-*                    Default base: https://api.openai.com/v1
Anthropic   claude-*                             Uses Anthropic Messages API
Gemini      gemini-*                             Uses Google GenAI SDK
xAI         grok*, grok-*                        OpenAI-compatible
Groq        (set provider="groq")                Models like llama-3-70b-8192; OpenAI-compatible
Custom      (set provider="custom" + base_url)   Any OpenAI-compatible endpoint
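The prefix-based inference in the table can be sketched as follows. infer_provider is a hypothetical illustration of the documented behavior, not the library's actual implementation:

```python
# Model-name prefixes mapped to providers, per the table above.
PREFIX_MAP = [
    (("gpt-", "o1-", "o3-"), "openai"),
    (("claude-",), "anthropic"),
    (("gemini-",), "gemini"),
    (("grok",), "xai"),
]


def infer_provider(model: str):
    """Return the provider inferred from a model name, or None if ambiguous."""
    for prefixes, provider in PREFIX_MAP:
        if model.startswith(prefixes):
            return provider
    # Ambiguous names (e.g. Groq's llama-3-70b-8192) need provider= set explicitly.
    return None
```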

Extensibility

  • Unified response: invoke() returns an LLMResponse with content, model, usage, finish_reason, and optional raw (provider-specific object) and tool_calls.
  • Provider-specific options: Pass extra kwargs to invoke() (e.g. temperature, max_tokens); they are forwarded to the underlying API. Use LLMConfig(extra={...}) for client-level options.
  • Custom backends: Implement BaseLLM (see llm_blanket.base) and register or construct your backend explicitly; the factory is focused on the built-in providers.
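A custom backend might look like the sketch below. BaseLLM and LLMResponse here are simplified local stand-ins; the real interface lives in llm_blanket.base and may differ in signature and fields:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class LLMResponse:
    """Simplified stand-in for llm_blanket's unified response object."""
    content: str
    model: str


class BaseLLM(ABC):
    """Simplified stand-in for llm_blanket.base.BaseLLM."""

    @abstractmethod
    def invoke(self, messages, **kwargs) -> LLMResponse: ...


class EchoLLM(BaseLLM):
    """Toy backend that echoes the last message's content -- handy in tests."""

    def __init__(self, model: str = "echo-1"):
        self.model = model

    def invoke(self, messages, **kwargs) -> LLMResponse:
        last = messages[-1]["content"] if messages else ""
        return LLMResponse(content=last, model=self.model)
```

A backend like this can stand in for a real provider in unit tests, since it honors the same invoke() contract without network calls.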

Example: multiple providers and URL overrides

from llm_blanket import get_llm, LLMConfig, Message

# Shared URL mapping (e.g. from app config)
config = LLMConfig(
    base_urls={
        "openai": "https://my-proxy.com/openai/v1",
        "groq": "https://api.groq.com/openai/v1",
    }
)

openai_llm = get_llm("gpt-4o-mini", config=config)
groq_llm = get_llm("llama-3-70b-8192", config=config, provider="groq")

for llm in [openai_llm, groq_llm]:
    r = llm.invoke([Message("user", "Say hi in one word.")])
    print(f"{llm.provider}: {r.content}")

License

MIT


