
pai-llm-config


Unified configuration management for LLM applications.

One YAML file to manage all your LLM providers, models, API keys, and parameters. Works with OpenAI, Anthropic, Azure, LiteLLM, DSPy, LangChain, and more.

If this project helps you, please consider giving it a star. It helps others discover it too.

Features

  • Multi-provider — OpenAI, Anthropic, Azure, LiteLLM, and any OpenAI-compatible endpoint (DeepSeek, Gemini, Ollama, vLLM, etc.)
  • Two-layer adapters — L1 outputs plain dicts (zero extra deps), L2 returns real SDK clients with key rotation
  • Model aliases — Reference models by semantic names (smart, fast, cheap) instead of gpt-4o
  • Multi-key pool — Automatic key rotation with priority / round_robin / least_used / random strategies
  • Framework integration — One-step client creation for DSPy, LiteLLM; params output for LangChain, OpenAI SDK, etc.
  • Streaming — Built-in streaming wrappers with automatic usage reporting (OpenAI + Anthropic, sync + async)
  • Multi-environment — Profile-based config (dev / staging / prod) with inheritance
  • Type-safe — Pydantic validation, full IDE autocompletion

Install

pip install pai-llm-config

# With optional SDK support
pip install "pai-llm-config[openai]"       # OpenAI SDK
pip install "pai-llm-config[anthropic]"    # Anthropic SDK
pip install "pai-llm-config[litellm]"      # LiteLLM
pip install "pai-llm-config[all]"          # Everything

Quick Start

1. Create llm-config.yaml in your project root:

version: "1"
providers:
  openai:
    type: openai
    api_key: ${OPENAI_API_KEY}
models:
  gpt-4o:
    provider: openai
    model: gpt-4o
    temperature: 0.7
    max_tokens: 4096
aliases:
  smart: gpt-4o
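The `${OPENAI_API_KEY}` placeholder is filled from environment variables at load time. A minimal sketch of how such interpolation typically works (the helper name `expand_env` is illustrative, not part of the library):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env("${OPENAI_API_KEY}"))  # sk-demo
```

Keeping keys in the environment means the YAML file itself can be committed safely.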

2. Use it:

from pai_llm_config import config

# L2: One-line client creation with key rotation
client = config.openai_client("smart")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Usage

Config Loading

from pai_llm_config import LLMConfig, config

# Global singleton (recommended) — auto-discovers llm-config.yaml
model = config.get("smart")

# Or use LLMConfig directly
cfg = LLMConfig.default()          # Cached singleton
cfg = LLMConfig.load()             # Fresh instance
cfg = LLMConfig.load(profile="production", config_path="config/llm.yaml")
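Profile inheritance typically means a profile overlays only the keys it changes onto the base configuration. A rough sketch of that merge semantics (the function and dicts here are illustrative, not the library's internals):

```python
def merge_profile(base: dict, overrides: dict) -> dict:
    """Recursively overlay a profile's overrides on the base config."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_profile(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"models": {"gpt-4o": {"temperature": 0.7, "max_tokens": 4096}}}
prod = {"models": {"gpt-4o": {"temperature": 0.2}}}
print(merge_profile(base, prod))
# {'models': {'gpt-4o': {'temperature': 0.2, 'max_tokens': 4096}}}
```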

L1: Parameter Output (Zero Extra Dependencies)

from pai_llm_config import config

# OpenAI SDK format
params = config.params("smart")
# -> {"model": "gpt-4o", "api_key": "sk-xxx", "base_url": "https://...", "temperature": 0.7, ...}

# LiteLLM format (provider/model prefix + api_base)
params = config.litellm_params("smart")
# -> {"model": "openai/gpt-4o", "api_key": "sk-xxx", "api_base": "https://...", ...}

# DSPy format
params = config.dspy_params("smart")
# -> {"model": "openai/gpt-4o", "api_key": "sk-xxx", "api_base": "https://...", ...}
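The LiteLLM and DSPy variants differ from the plain OpenAI form mainly in the `provider/model` prefix and the `api_base` key name. A sketch of that transformation (the function name `to_litellm_params` is ours, not the library's):

```python
def to_litellm_params(provider: str, params: dict) -> dict:
    """Rename base_url -> api_base and prefix the model with its provider."""
    out = dict(params)
    out["model"] = f"{provider}/{out['model']}"
    if "base_url" in out:
        out["api_base"] = out.pop("base_url")
    return out

openai_params = {"model": "gpt-4o", "api_key": "sk-xxx", "base_url": "https://api.openai.com/v1"}
print(to_litellm_params("openai", openai_params))
# {'model': 'openai/gpt-4o', 'api_key': 'sk-xxx', 'api_base': 'https://api.openai.com/v1'}
```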

L2: SDK Client Factory

from pai_llm_config import config

# Type-safe client creation with built-in key rotation
client = config.openai_client("smart")              # -> openai.OpenAI
client = config.anthropic_client("reasoning")        # -> anthropic.Anthropic
client = config.async_openai_client("smart")         # -> openai.AsyncOpenAI
client = config.async_anthropic_client("reasoning")  # -> anthropic.AsyncAnthropic

# Auto-dispatch by provider type
client = config.create_client("smart")               # -> openai.OpenAI or anthropic.Anthropic
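Conceptually, `create_client` looks up the right SDK factory from the model's provider type. A stand-in sketch (the factory table and string return values are fakes for illustration):

```python
def create_client(provider_type: str, factories: dict):
    """Dispatch to the SDK factory registered for a provider type."""
    try:
        return factories[provider_type]()
    except KeyError:
        raise ValueError(f"no client factory for provider type {provider_type!r}")

# Illustrative stand-ins for openai.OpenAI / anthropic.Anthropic constructors
factories = {"openai": lambda: "OpenAI client", "anthropic": lambda: "Anthropic client"}
print(create_client("openai", factories))  # OpenAI client
```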

Framework Integration

from pai_llm_config import config

# DSPy — one step, returns configured dspy module
dspy = config.dspy_client("smart")
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What is pai-llm-config?")

# LiteLLM — returns litellm.Router
client = config.litellm_client("smart")
response = client.completion(model="smart", messages=[...])

# LangChain — use params() output
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(**config.params("smart"))

Streaming

from pai_llm_config import config

# OpenAI streaming with automatic usage reporting
stream = config.stream_openai_chat("smart", messages=[{"role": "user", "content": "Tell a story"}])
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

# Anthropic streaming
with config.stream_anthropic_chat("reasoning", messages=[...], max_tokens=1024) as stream:
    for text in stream.text_stream:
        print(text, end="")

# Auto-dispatch
stream = config.stream_chat("smart", messages=[...])
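The "automatic usage reporting" is presumably a thin wrapper that tallies token counts as chunks pass through and reports the total when the stream ends. An illustrative pure-Python sketch (the chunk shape and callback are stand-ins, not the library's actual types):

```python
def stream_with_usage(chunks, report):
    """Yield chunk text unchanged; report accumulated usage when exhausted."""
    usage = {"completion_tokens": 0}
    for chunk in chunks:
        usage["completion_tokens"] += chunk.get("tokens", 0)
        yield chunk["text"]
    report(usage)

collected = []
fake_chunks = [{"text": "Hel", "tokens": 1}, {"text": "lo", "tokens": 1}]
text = "".join(stream_with_usage(fake_chunks, collected.append))
print(text, collected)  # Hello [{'completion_tokens': 2}]
```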

Multi-Key Rotation

providers:
  openai:
    type: openai
    api_keys:
      - key: ${OPENAI_KEY_1}
        alias: "primary"
        priority: 1
        daily_limit_usd: 5.0
      - key: ${OPENAI_KEY_2}
        alias: "secondary"
        priority: 2
        daily_limit_usd: 10.0
    key_strategy: priority  # priority | round_robin | least_used | random

# L2 clients automatically rotate keys — zero code changes
client = config.openai_client("smart")

# Monitor key pool health
pool = config.key_pool("openai")
print(pool.status())
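To illustrate what the `priority` strategy with daily spend limits might look like internally (a sketch only; field names mirror the YAML above, but the selection logic and `spent_usd` tracking are ours):

```python
def pick_key(keys):
    """Choose the highest-priority key (lowest number) still under its daily limit."""
    eligible = [k for k in keys if k["spent_usd"] < k["daily_limit_usd"]]
    if not eligible:
        raise RuntimeError("all keys exhausted for today")
    return min(eligible, key=lambda k: k["priority"])

keys = [
    {"alias": "primary", "priority": 1, "daily_limit_usd": 5.0, "spent_usd": 5.0},
    {"alias": "secondary", "priority": 2, "daily_limit_usd": 10.0, "spent_usd": 1.2},
]
print(pick_key(keys)["alias"])  # secondary (primary hit its daily limit)
```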

Task Routing

routing:
  presets:
    code_generation: smart
    summarization: cheap
    classification: cheap

model = config.route("code_generation")  # -> ModelConfig for "smart"
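Routing resolves a task preset to an alias, and the alias to a concrete model entry. A sketch of that two-step lookup (the dicts and function are illustrative):

```python
def route(task: str, presets: dict, aliases: dict, models: dict) -> dict:
    """task -> alias -> model config."""
    name = presets[task]            # e.g. "smart"
    name = aliases.get(name, name)  # resolve alias to real model name, if aliased
    return models[name]

presets = {"code_generation": "smart"}
aliases = {"smart": "gpt-4o"}
models = {"gpt-4o": {"provider": "openai", "model": "gpt-4o"}}
print(route("code_generation", presets, aliases, models))
# {'provider': 'openai', 'model': 'gpt-4o'}
```

Call sites name the task, so swapping which model backs `code_generation` is a one-line YAML change.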

Configuration Reference

See docs/02_config-spec.md for the full YAML specification, and docs/06_examples.md for more usage examples.

Contributing

Contributions are welcome! Feel free to open issues or submit pull requests.

If you find this project useful, please give it a star on GitHub — it motivates continued development and helps others find this project.

License

MIT
