LLM helpers for SRX services: ChatOpenAI wrapper, tool base, Tavily tool, OpenAI Batch API, and infrastructure-agnostic batch state management
srx-lib-llm
LLM helpers for SRX services built on LangChain.
What it includes:
- responses_chat(prompt, cache=False): simple text chat via the OpenAI Responses API
- Tool strategy base and registry
- Tavily search tool strategy
- Structured output helpers: build Pydantic model from JSON Schema and generate structured outputs via LLM
- Request models, e.g. DynamicStructuredOutputRequest
- OpenAI Batch API service: comprehensive wrapper for asynchronous batch processing with 50% cost savings
Designed to work with official OpenAI by default, with opt-in support for OpenAI-compatible endpoints (e.g., Qwen) via environment configuration.
Install
PyPI (public):
pip install srx-lib-llm
uv (pyproject):
[project]
dependencies = ["srx-lib-llm>=0.1.0"]
Usage
from srx_lib_llm import responses_chat
text = await responses_chat("Hello there", cache=True)
OpenAI-Compatible Endpoints (Qwen Example)
Set a custom base URL and model to use OpenAI-compatible providers:
export OPENAI_API_KEY="sk-..."
export OPENAI_BASE_URL="https://d3hip1bjwcdu0p.cloudfront.net/api"
export OPENAI_MODEL="ds-news-aggregator"
export OPENAI_API_MODE="chat"
Then call existing helpers without code changes:
from srx_lib_llm import responses_chat
text = await responses_chat("Hello there", cache=True)
Structured output works the same way (assuming the provider supports OpenAI-compatible JSON schema output):
from srx_lib_llm import extract_structured
schema = {
"type": "object",
"properties": {"summary": {"type": "string"}},
"required": ["summary"]
}
result = await extract_structured(
text="Summarize this passage...", json_schema=schema, schema_name="Summary"
)
print(result.model_dump())
Notes:
- extract_structured_gpt51 requires the OpenAI Responses API (GPT-5.* only).
- The OpenAI Batch API helpers are OpenAI-specific and may not work elsewhere.
Structured output from JSON Schema:
from srx_lib_llm import StructuredOutputGenerator, build_model_from_schema, preprocess_json_schema
json_schema = {
"type": "object",
"properties": {
"title": {"type": "string"},
"score": {"type": "number"}
},
"required": ["title"]
}
gen = StructuredOutputGenerator()
model = build_model_from_schema("MyOutput", preprocess_json_schema(json_schema))
result = await gen.generate_from_model("Give me a title and score", model)
print(result.model_dump())
All-in-one extraction:
from srx_lib_llm import extract_structured
result = await extract_structured(
text="Analyze this text...", json_schema=my_schema, schema_name="MyOutput"
)
print(result.model_dump())
GPT-5.1 with medium reasoning effort (recommended for complex analysis):
from srx_lib_llm import extract_structured_gpt51
result = await extract_structured_gpt51(
text="Analyze this HR assessment document...",
json_schema=competency_schema,
schema_name="CompetencyAssessment",
model="gpt-5.1-2025-11-13",
reasoning_effort="medium",
system="You are an expert competency assessor..."
)
print(result.model_dump())
Back-compat helpers and request models:
from srx_lib_llm import create_dynamic_schema, DynamicStructuredOutputRequest
schema_model = create_dynamic_schema("MyOutput", json_schema)
payload = DynamicStructuredOutputRequest(text="...", json_schema=json_schema)
Tools:
from srx_lib_llm.tools import ToolStrategyBase, register_strategy, get_strategies
from srx_lib_llm.tools.tavily import TavilyToolStrategy
register_strategy(TavilyToolStrategy())
strategies = get_strategies()
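A minimal end-to-end sketch of the registry flow. It assumes get_strategies() returns the registered strategy instances (inferred from the registry API above) and that the Tavily strategy reads TAVILY_API_KEY from the environment, as documented under Environment Variables below:
import os
from srx_lib_llm.tools import register_strategy, get_strategies
from srx_lib_llm.tools.tavily import TavilyToolStrategy

# Assumed to be read by the Tavily strategy; see Environment Variables below.
os.environ.setdefault("TAVILY_API_KEY", "tvly-...")

register_strategy(TavilyToolStrategy())

# Inspect what is currently registered (assumes instances are returned).
for strategy in get_strategies():
    print(type(strategy).__name__)  # e.g. TavilyToolStrategy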
Optional Langfuse Tracing
Set Langfuse environment variables to enable tracing for all LangChain and LangGraph flows. Without these values the library runs exactly as before.
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
# Optional, defaults to https://cloud.langfuse.com
LANGFUSE_BASE_URL=https://cloud.langfuse.com
When available, Langfuse's CallbackHandler is attached automatically to:
- responses_chat
- Structured output helpers
- LangGraph agents created through ToolStrategyBase
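As an illustration, a minimal sketch assuming the Langfuse keys are set before the first call; per the note above, nothing else in the calling code changes:
import os
from srx_lib_llm import responses_chat

# Typically set via your deployment environment rather than in code.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

# Same call as before; the Langfuse CallbackHandler is attached automatically.
text = await responses_chat("Hello there")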
OpenAI Batch API
Process large volumes of requests asynchronously with 50% cost savings.
Key Features:
- Supports CSV, JSONL, and NDJSON data files
- Smart prompt handling: row-level or global with variable interpolation (see the sketch after this list)
- Uses the OPENAI_MODEL env var for model selection
- Automatically handles file format detection
- Track batch progress and retrieve results
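To make the mapping concrete, here is a sketch of how one data row plus a global prompt becomes a single request line in the batch input file. The JSONL shape is the standard OpenAI Batch API request format; how the library assembles it internally is an assumption:
import json

row = {"name": "Alice", "age": "30", "city": "NYC"}
prompt = "Analyze this person: {name}, age {age}, from {city}. What can you infer?"

# One line of the generated batch input file (standard OpenAI Batch JSONL shape).
request_line = {
    "custom_id": "req-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4.1-nano",  # falls back to OPENAI_MODEL when payload.model is None
        "messages": [
            {"role": "system", "content": "You are a data analyst."},
            {"role": "user", "content": prompt.format(**row)},  # variable interpolation
        ],
    },
}
print(json.dumps(request_line))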
Basic Usage with CSV Data:
from srx_lib_llm import OpenAIBatchService, BatchPayload, BatchEndpoint
# Your CSV file: data.csv
# name,age,city
# Alice,30,NYC
# Bob,25,SF
service = OpenAIBatchService()
# Create payload with global prompt
payload = BatchPayload(
prompt="Analyze this person: {name}, age {age}, from {city}. What can you infer?",
model=None, # Uses OPENAI_MODEL env var
endpoint=BatchEndpoint.CHAT_COMPLETIONS,
system_message="You are a data analyst.",
temperature=0.7
)
# Create batch from local file
mapping = await service.create_batch_from_file(
file_path="./data.csv",
payload=payload
)
# Or from URL
mapping = await service.create_batch_from_url(
url="https://example.com/data.csv",
payload=payload
)
print(f"Batch created: {mapping.batch_id}")
Row-Level Prompts (Prompt Column Wins):
# Your CSV file: custom_prompts.csv
# custom_id,prompt,context
# req-1,Summarize this: foo bar baz,important
# req-2,Translate to Spanish: hello world,casual
# Row-level 'prompt' column takes precedence over global prompt
payload = BatchPayload(
# No need for global prompt if data has 'prompt' column
model="gpt-4", # Override OPENAI_MODEL env var
endpoint=BatchEndpoint.CHAT_COMPLETIONS
)
mapping = await service.create_batch_from_file("./custom_prompts.csv", payload)
From In-Memory Data:
data = [
{"name": "Alice", "question": "What is AI?"},
{"name": "Bob", "question": "Explain quantum computing"},
]
payload = BatchPayload(
prompt="Answer {name}'s question: {question}",
custom_id_prefix="answer" # Generates answer-1, answer-2, etc.
)
mapping = await service.create_batch_from_data(data, payload)
Check Status and Get Results:
# Check batch status
info = await service.get_batch_status(mapping.batch_id)
print(f"Status: {info.status}")
print(f"Progress: {info.request_counts}")
# Wait for completion (optional)
info = await service.wait_for_completion(mapping.batch_id, poll_interval=60)
# Get results
results = await service.get_batch_results(mapping.batch_id)
for result in results:
if result.response:
print(f"{result.custom_id}: {result.response['body']}")
elif result.error:
print(f"{result.custom_id}: ERROR - {result.error}")
# Get errors separately (if any)
errors = await service.get_batch_errors(mapping.batch_id)
# Get batch mapping (tracks files)
mapping = service.get_mapping(mapping.batch_id)
print(f"Input: {mapping.input_path}")
print(f"Output: {mapping.output_path}")
Convenience Functions:
from srx_lib_llm import create_batch_from_url, create_batch_from_file, check_batch_status, BatchPayload
# Quick batch from URL
payload = BatchPayload(prompt="Analyze: {text}")
mapping = await create_batch_from_url("https://example.com/data.csv", payload)
# Quick batch from file
mapping = await create_batch_from_file("./data.jsonl", payload)
# Quick status check
info = await check_batch_status(mapping.batch_id)
Advanced Configuration:
payload = BatchPayload(
prompt="Process: {data}",
model="gpt-4-turbo", # Override env var
endpoint=BatchEndpoint.CHAT_COMPLETIONS,
system_message="You are an expert analyst.",
temperature=0.5,
max_tokens=1000,
top_p=0.9,
frequency_penalty=0.5,
presence_penalty=0.5,
custom_id_prefix="analysis",
extra_body_params={"response_format": {"type": "json_object"}} # Additional params
)
Azure OpenAI
Switch to Azure OpenAI by setting these env vars — zero code changes in your service:
AZURE_OPENAI_ENDPOINT=https://southeastasia.api.cognitive.microsoft.com/
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-5-mini
AZURE_OPENAI_API_VERSION=2025-04-01-preview
When AZURE_OPENAI_ENDPOINT is set, the library automatically:
- Uses AzureChatOpenAI / AzureOpenAI clients instead of direct OpenAI
- Routes to the deployment specified in AZURE_OPENAI_DEPLOYMENT_NAME
- Uses the Chat Completions API (Azure doesn't support the Responses API)
- Ignores OPENAI_API_KEY and OPENAI_MODEL
All existing consumer code works unchanged:
from srx_lib_llm import responses_chat, extract_structured
# Same calls — Azure is detected from env vars
text = await responses_chat("Hello there")
result = await extract_structured(text="...", json_schema=schema, schema_name="Out")
Provider detection for conditional logic:
from srx_lib_llm import is_azure_openai
if is_azure_openai():
print("Running on Azure OpenAI")
Limitations on Azure:
- Batch API is not supported (raises NotImplementedError); see the fallback sketch below
- Responses API features (e.g., output_version) are automatically skipped
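A defensive sketch for services that may run on either provider, relying on the documented is_azure_openai() helper and the NotImplementedError behavior noted above:
from srx_lib_llm import OpenAIBatchService, BatchPayload, is_azure_openai

payload = BatchPayload(prompt="Analyze: {text}")

if is_azure_openai():
    # Batch API is unavailable on Azure; fall back to per-request processing here.
    print("Azure mode: skipping Batch API")
else:
    service = OpenAIBatchService()
    mapping = await service.create_batch_from_file("./data.csv", payload)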
Environment Variables
Direct OpenAI (default):
- OPENAI_API_KEY (required)
- OPENAI_MODEL (optional, default: gpt-4.1-nano)
- OPENAI_BASE_URL (optional, for OpenAI-compatible providers)
- OPENAI_API_MODE (optional, responses or chat; defaults to chat when OPENAI_BASE_URL is set)
- OPENAI_USE_RESPONSES_API (optional, overrides OPENAI_API_MODE)
Azure OpenAI:
- AZURE_OPENAI_ENDPOINT (required; presence activates Azure mode)
- AZURE_OPENAI_API_KEY (required)
- AZURE_OPENAI_DEPLOYMENT_NAME (required)
- AZURE_OPENAI_API_VERSION (optional, default: 2025-04-01-preview)
Other:
- TAVILY_API_KEY (optional, for the Tavily tool)
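An illustrative startup check (not part of the library) that fails fast when the active provider is missing its required variables from the lists above:
import os

# AZURE_OPENAI_ENDPOINT presence activates Azure mode, per the detection rule above.
if os.getenv("AZURE_OPENAI_ENDPOINT"):
    required = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_DEPLOYMENT_NAME")
else:
    required = ("OPENAI_API_KEY",)

missing = [name for name in required if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {missing}")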
Release
Tag vX.Y.Z to publish to GitHub Packages via Actions.
License
Proprietary © SRX