
dd-llm

Shared LLM abstraction layer for Digital Duck projects.

Zero core dependencies. Adapters lazy-import their SDKs only when used.

Install

pip install -e .              # zero deps — claude_cli adapter works out of the box
pip install -e ".[openai]"    # + OpenAI SDK (also covers openrouter, ollama)
pip install -e ".[anthropic]" # + Anthropic SDK
pip install -e ".[gemini]"    # + Google GenAI SDK
pip install -e ".[all]"       # all provider SDKs

Quick Start

from dd_llm import call_llm

# Uses LLM_PROVIDER env var (default: "openai")
response = call_llm("What is 2+2?")

# Specify provider
response = call_llm("Hello", provider="claude_cli")

# With messages
response = call_llm(messages=[
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
])
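
Every call returns an LLMResponse. A minimal sketch of inspecting it, using only the fields that appear elsewhere in this README (content, success, provider, model):

response = call_llm("What is 2+2?")
if response.success:
    print(response.content)                   # model output text
    print(response.provider, response.model)  # which adapter answered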

Built-in Adapters

| Name       | Class                       | SDK               | Notes                                             |
| ---------- | --------------------------- | ----------------- | ------------------------------------------------- |
| claude_cli | ClaudeCLIAdapter            | none (subprocess) | Dev provider, $0 cost via Claude Code subscription |
| openai     | OpenAIAdapter               | openai            | Direct OpenAI API                                 |
| anthropic  | AnthropicAdapter            | anthropic         | Direct Anthropic API                              |
| gemini     | GeminiAdapter               | google-genai      | Direct Google API                                 |
| openrouter | OpenAIAdapter (configured)  | openai            | OpenAI-compatible endpoint                        |
| ollama     | OpenAIAdapter (configured)  | openai            | Local OpenAI-compatible endpoint                  |
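
Since openrouter and ollama reuse the OpenAI adapter, switching to them is just a provider name plus the matching variables from the Environment Variables table below. A sketch for a local Ollama model (the model name, and using LLM_MODEL to select it, are assumptions):

import os
from dd_llm import call_llm

os.environ["OLLAMA_HOST"] = "http://localhost:11434"  # the documented default, set here for clarity
os.environ["LLM_MODEL"] = "llama3"                    # assumed: LLM_MODEL picks the local model
response = call_llm("Hello", provider="ollama")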

Custom Adapters

from dd_llm import LLMAdapter, LLMResponse, register_adapter, call_llm

class MyAdapter(LLMAdapter):
    def call(self, prompt="", *, messages=None, **kwargs):
        result = my_internal_api(prompt)  # your own backend; messages handling omitted for brevity
        return LLMResponse(content=result, success=True, provider="my_api", model="v1")

register_adapter("my_api", MyAdapter)

# Now usable everywhere
response = call_llm("hello", provider="my_api")
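
UnifiedLLMProvider checks result.success, which suggests adapters report failures through that flag rather than by raising (an assumption; the error surface isn't documented here). A sketch of the same adapter with a failure path, reusing the imports above:

class MyAdapter(LLMAdapter):
    def call(self, prompt="", *, messages=None, **kwargs):
        try:
            result = my_internal_api(prompt)
        except Exception as exc:
            # reusing content for the error message (an assumption; not documented)
            return LLMResponse(content=str(exc), success=False, provider="my_api", model="v1")
        return LLMResponse(content=result, success=True, provider="my_api", model="v1")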

UnifiedLLMProvider

Multi-provider client with retry (exponential backoff + jitter) and automatic fallback to alternative providers.

from dd_llm import UnifiedLLMProvider

provider = UnifiedLLMProvider(
    primary_provider="openai",
    fallback_providers=["anthropic", "ollama"],
    max_retries=3,
)

result = provider.call("Explain quantum computing")
if result.success:
    print(result.content)
    print(f"Provider: {result.provider}, Model: {result.model}")
    print(f"Tokens: {result.input_tokens} in, {result.output_tokens} out")

Environment Variables

| Variable            | Description                   | Default                    |
| ------------------- | ----------------------------- | -------------------------- |
| LLM_PROVIDER        | Primary provider name         | openai                     |
| LLM_MODEL           | Default model (all providers) | per-provider               |
| LLM_MODEL_OPENAI    | Override model for OpenAI     | gpt-4o                     |
| LLM_MODEL_ANTHROPIC | Override model for Anthropic  | claude-sonnet-4-5-20250929 |
| LLM_MODEL_GEMINI    | Override model for Gemini     | gemini-2.0-flash           |
| LLM_MAX_RETRIES     | Max retries per provider      | 3                          |
| LLM_INITIAL_WAIT    | Initial backoff (seconds)     | 1                          |
| LLM_MAX_WAIT        | Max backoff (seconds)         | 30                         |
| OPENAI_API_KEY      | OpenAI API key                |                            |
| ANTHROPIC_API_KEY   | Anthropic API key             |                            |
| GEMINI_API_KEY      | Google Gemini API key         |                            |
| OPENROUTER_API_KEY  | OpenRouter API key            |                            |
| OLLAMA_HOST         | Ollama base URL               | http://localhost:11434     |
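
All of these can be set from Python as well as the shell. A minimal sketch (whether dd_llm reads them at import time or per call is an assumption, so they are set before the import here):

import os

os.environ["LLM_PROVIDER"] = "anthropic"
os.environ["LLM_MAX_RETRIES"] = "5"
os.environ["LLM_MAX_WAIT"] = "60"

from dd_llm import call_llm

response = call_llm("Hi")  # retries up to 5 times against Anthropic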

License

MIT


