# qx-llms

A unified LLM client factory and model registry for use across QX applications. Provides a consistent interface for working with multiple LLM providers through both LangChain and the OpenAI Agents SDK.
## Installation

```bash
pip install qx-llms
```

Or install from source:

```bash
pip install -e .
```
## Features

- **Multi-provider support** - OpenAI, Anthropic, Google Gemini, DeepSeek, and more
- **Two factory interfaces** - LangChain clients and OpenAI Agents SDK models
- **Model registry** - Centralized model definitions with capabilities and credit costs
- **Fake models for testing** - Mock implementations that don't make API calls
- **Graceful fallbacks** - Optional silent failure with dummy models
## Quick Start

### LangChain Factory

```python
from qx_llms.factories.langchain_client_factory import get_llm_client, get_embeddings_client

# Get a chat model
llm = get_llm_client(model_name="gpt-4o", provider="openai")
response = llm.invoke("Hello, world!")

# Get an embeddings model
embeddings = get_embeddings_client(model_name="text-embedding-3-small", provider="openai")
vectors = embeddings.embed_query("Hello, world!")
```
### OpenAI Agents Factory

```python
from qx_llms.factories.openai_client_factory import get_openai_agents_model

# Get a model for use with the OpenAI Agents SDK
model = get_openai_agents_model(provider="openai", model_name="gpt-4o")
```
### Model Registry

```python
from qx_llms.model_registry import (
    get_llm_by_name,
    get_llm_options,
    get_llm_credit_mapping,
    get_embedding_by_name,
    get_embedding_options,
    ModelProvider,
)

# Get a specific chat model's details
model = get_llm_by_name("gpt-4o")
print(model.name, model.provider, model.credits)

# Get all chat models with specific capabilities
options = get_llm_options(
    providers=[ModelProvider.OPENAI, ModelProvider.ANTHROPIC],
    structured_output=True,
    tool_use=True,
)

# Get credit costs for billing
credits = get_llm_credit_mapping(credit_multiplier=2)

# Get a specific embedding model's details
embedding = get_embedding_by_name("text-embedding-3-large")
print(embedding.name, embedding.dimensions, embedding.multimodal)

# Get all embedding models with specific capabilities
embedding_options = get_embedding_options(
    providers=[ModelProvider.OPENAI],
    dimensions=1536,
)
```
## Supported Providers

### LangChain Factory

| Provider | Environment Variables | Description |
|---|---|---|
| `openai` | `OPENAI_API_KEY` | OpenAI API |
| `azure_openai` | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` | Azure OpenAI |
| `anthropic` | `ANTHROPIC_API_KEY` | Anthropic Claude |
| `gemini` | `GEMINI_API_KEY` | Google Gemini |
| `deepseek` | `DEEPSEEK_API_KEY` | DeepSeek |
| `openai_endpoint` | `OPENAI_ENDPOINT_API_KEY`, `OPENAI_ENDPOINT_BASE_URL` | Custom OpenAI-compatible endpoint |
| `ollama` | `OLLAMA_BASE_URL` (optional) | Local Ollama |
| `lmstudio` | `LMSTUDIO_BASE_URL` (optional) | Local LM Studio |
| `fake` | None | Fake model for testing |
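The factory presumably resolves credentials from the environment variables in the table above. A hedged, self-contained sketch of that lookup pattern (provider and variable names are taken from the table; `check_provider_env` itself is illustrative, not the package's API):

```python
import os

# Required environment variables per provider, per the table above (subset shown).
REQUIRED_ENV = {
    "openai": ["OPENAI_API_KEY"],
    "azure_openai": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "fake": [],  # fake models need no credentials
}

def check_provider_env(provider: str) -> list[str]:
    """Return the names of required variables that are unset for this provider."""
    return [var for var in REQUIRED_ENV.get(provider, []) if not os.environ.get(var)]
```

A caller can raise `ValueError` when the returned list is non-empty, which mirrors the factory's strict (non-silent) behavior described under Error Handling below.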
### OpenAI Agents Factory

| Provider | Environment Variables | Description |
|---|---|---|
| `openai` | `OPENAI_API_KEY` | OpenAI Responses API |
| `deepseek` | `DEEPSEEK_API_KEY` | DeepSeek |
| `openrouter` | `OPENROUTER_API_KEY` | OpenRouter |
| `gemini` | `GEMINI_API_KEY` | Google Gemini |
| `anthropic` | `ANTHROPIC_API_KEY` | Anthropic |
| `perplexity` | `PERPLEXITY_API_KEY` | Perplexity |
| `huggingface` | `HUGGINGFACE_API_KEY` | Hugging Face Inference |
| `local` | `LOCAL_MODEL_URL` | Local models (Ollama, etc.) |
| `azure_openai` | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` | Azure OpenAI |
| `fake` | None | Fake model for testing |
## Testing with Fake Models

Both factories provide fake models that return static responses without making API calls:

```python
# LangChain fake model
from qx_llms.factories.langchain_client_factory import get_llm_client

fake_llm = get_llm_client(model_name="fake", provider="fake")

# OpenAI Agents fake model
from qx_llms.factories.openai_client_factory import get_openai_agents_model

fake_model = get_openai_agents_model(provider="fake")
response = await fake_model.get_response(...)  # Returns "fake response"
```
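The point of such fakes is deterministic tests with no network traffic. A self-contained sketch of the pattern (the `FakeChatModel` class and `summarize` helper are illustrative, not the package's implementation):

```python
class FakeChatModel:
    """Stand-in chat model: returns a canned reply instead of calling an API."""

    def __init__(self, response: str = "fake response"):
        self.response = response
        self.calls: list[str] = []  # record prompts for later assertions

    def invoke(self, prompt: str) -> str:
        self.calls.append(prompt)
        return self.response

def summarize(llm, text: str) -> str:
    """Application code under test; depends only on the .invoke() interface."""
    return llm.invoke(f"Summarize: {text}")

fake = FakeChatModel()
assert summarize(fake, "long document") == "fake response"
assert fake.calls == ["Summarize: long document"]
```

Because the application code only touches the client interface, swapping the fake for a real client at runtime requires no test-specific branches.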
## Model Registry

The model registry provides a centralized definition of available models with their capabilities.

### Chat Models

```python
from qx_llms.model_registry import ChatModel, ModelProvider

# Each chat model has these attributes:
# - name: str - Model identifier (e.g., "gpt-4o")
# - provider: ModelProvider - Provider enum
# - structured_output: bool - Supports JSON schema output
# - tool_use: bool - Supports function calling
# - vision: bool - Supports image input
# - accepts_temperature: bool - Supports temperature parameter
# - credits: int - Base credit cost per request
```
### Embedding Models

```python
from qx_llms.model_registry import EmbeddingModel, ModelProvider

# Each embedding model has these attributes:
# - name: str - Model identifier (e.g., "text-embedding-3-large")
# - provider: ModelProvider - Provider enum
# - credits: int - Base credit cost per request
# - dimensions: int - Output vector size (768, 1536, 3072, etc.)
# - multimodal: bool - Can embed images (e.g., CLIP, OpenAI multimodal)
```
### Filtering Chat Models

```python
from qx_llms.model_registry import get_llm_options, ModelProvider

# Get only models that support structured output and vision
models = get_llm_options(
    structured_output=True,
    vision=True,
)

# Get models from specific providers
openai_models = get_llm_options(providers=[ModelProvider.OPENAI])
```

### Filtering Embedding Models

```python
from qx_llms.model_registry import get_embedding_options, ModelProvider

# Get only embedding models with specific dimensions
models = get_embedding_options(dimensions=1536)

# Get multimodal embedding models
multimodal_models = get_embedding_options(multimodal=True)

# Get embedding models from specific providers
openai_embeddings = get_embedding_options(providers=[ModelProvider.OPENAI])
```
## Error Handling

Both factories support graceful fallbacks:

```python
# Raises ValueError if the provider is invalid or the API key is missing
llm = get_llm_client(model_name="gpt-4o", provider="openai", fail_silently=False)

# Returns a dummy model instead of raising
llm = get_llm_client(model_name="gpt-4o", provider="openai", fail_silently=True)
```
## Development

### Running Tests

```bash
pip install -e ".[test]"
pytest tests/ -v
```

### Requirements

- Python >= 3.12
- See `requirements.txt` for dependencies