srx-lib-llm

LLM helpers for SRX services built on LangChain: ChatOpenAI wrapper, tool strategy base and registry, Tavily search tool, structured output helpers, OpenAI Batch API service, and infrastructure-agnostic batch state management.

What it includes:

  • responses_chat(prompt, cache=False): simple text chat via the OpenAI Responses API
  • Tool strategy base and registry
  • Tavily search tool strategy
  • Structured output helpers: build a Pydantic model from a JSON Schema and generate structured outputs via an LLM
  • Request models, e.g. DynamicStructuredOutputRequest
  • OpenAI Batch API service: a comprehensive wrapper for asynchronous batch processing with 50% cost savings

Designed to work with the official OpenAI API only.

Install

PyPI (public):

  • pip install srx-lib-llm

uv (pyproject):

[project]
dependencies = ["srx-lib-llm>=0.1.0"]

Usage

from srx_lib_llm import responses_chat
text = await responses_chat("Hello there", cache=True)
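
The snippets in this README use top-level await; from a synchronous entry point, drive them with asyncio.run. A minimal sketch:

import asyncio

from srx_lib_llm import responses_chat

async def main() -> None:
    # cache=True opts into response caching (off by default).
    text = await responses_chat("Hello there", cache=True)
    print(text)

asyncio.run(main())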

Structured output from JSON Schema:

from srx_lib_llm import StructuredOutputGenerator, build_model_from_schema, preprocess_json_schema

json_schema = {
  "type": "object",
  "properties": {
    "title": {"type": "string"},
    "score": {"type": "number"}
  },
  "required": ["title"]
}

gen = StructuredOutputGenerator()
model = build_model_from_schema("MyOutput", preprocess_json_schema(json_schema))
result = await gen.generate_from_model("Give me a title and score", model)
print(result.model_dump())

All-in-one extraction:

from srx_lib_llm import extract_structured

result = await extract_structured(
    text="Analyze this text...", json_schema=my_schema, schema_name="MyOutput"
)
print(result.model_dump())

Back-compat helpers and request models:

from srx_lib_llm import create_dynamic_schema, DynamicStructuredOutputRequest

schema_model = create_dynamic_schema("MyOutput", json_schema)
payload = DynamicStructuredOutputRequest(text="...", json_schema=json_schema)
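
Assuming create_dynamic_schema returns a plain Pydantic model class (as build_model_from_schema does above), the result can be instantiated and validated directly; a hypothetical sketch:

# Hypothetical usage; field names come from the json_schema defined earlier.
instance = schema_model(title="Quarterly report", score=0.92)
print(instance.model_dump())  # {'title': 'Quarterly report', 'score': 0.92}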

Tools:

from srx_lib_llm.tools import ToolStrategyBase, register_strategy, get_strategies
from srx_lib_llm.tools.tavily import TavilyToolStrategy

register_strategy(TavilyToolStrategy())
strategies = get_strategies()

Optional Langfuse Tracing

Set the Langfuse environment variables below to enable tracing for all LangChain and LangGraph flows. Without them, the library runs exactly as before.

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
# Optional, defaults to https://cloud.langfuse.com
LANGFUSE_BASE_URL=https://cloud.langfuse.com

When available, Langfuse's CallbackHandler is attached automatically to:

  • responses_chat
  • Structured output helpers
  • LangGraph agents created through ToolStrategyBase
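
For example, setting the keys before the first call is all that is needed; a minimal sketch with placeholder values:

import asyncio
import os

from srx_lib_llm import responses_chat

# Placeholder keys; set these before the library is first used.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

# Traced automatically once the keys are present; no code changes required.
print(asyncio.run(responses_chat("Hello there")))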

OpenAI Batch API

Process large volumes of requests asynchronously with 50% cost savings.

Key Features:

  • Supports CSV, JSONL, and NDJSON data files
  • Smart prompt handling: row-level or global prompts with variable interpolation
  • Uses the OPENAI_MODEL env var for model selection
  • Detects the input file format automatically
  • Tracks batch progress and retrieves results

Basic Usage with CSV Data:

from srx_lib_llm import OpenAIBatchService, BatchPayload, BatchEndpoint

# Your CSV file: data.csv
# name,age,city
# Alice,30,NYC
# Bob,25,SF

service = OpenAIBatchService()

# Create payload with global prompt
payload = BatchPayload(
    prompt="Analyze this person: {name}, age {age}, from {city}. What can you infer?",
    model=None,  # Uses OPENAI_MODEL env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS,
    system_message="You are a data analyst.",
    temperature=0.7
)

# Create batch from local file
mapping = await service.create_batch_from_file(
    file_path="./data.csv",
    payload=payload
)

# Or from URL
mapping = await service.create_batch_from_url(
    url="https://example.com/data.csv",
    payload=payload
)

print(f"Batch created: {mapping.batch_id}")

Row-Level Prompts (Prompt Column Wins):

# Your CSV file: custom_prompts.csv
# custom_id,prompt,context
# req-1,Summarize this: foo bar baz,important
# req-2,Translate to Spanish: hello world,casual

# Row-level 'prompt' column takes precedence over global prompt
payload = BatchPayload(
    # No need for global prompt if data has 'prompt' column
    model="gpt-4",  # Override OPENAI_MODEL env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS
)

mapping = await service.create_batch_from_file("./custom_prompts.csv", payload)

From In-Memory Data:

data = [
    {"name": "Alice", "question": "What is AI?"},
    {"name": "Bob", "question": "Explain quantum computing"},
]

payload = BatchPayload(
    prompt="Answer {name}'s question: {question}",
    custom_id_prefix="answer"  # Generates answer-1, answer-2, etc.
)

mapping = await service.create_batch_from_data(data, payload)

Check Status and Get Results:

# Check batch status
info = await service.get_batch_status(mapping.batch_id)
print(f"Status: {info.status}")
print(f"Progress: {info.request_counts}")

# Wait for completion (optional)
info = await service.wait_for_completion(mapping.batch_id, poll_interval=60)

# Get results
results = await service.get_batch_results(mapping.batch_id)
for result in results:
    if result.response:
        print(f"{result.custom_id}: {result.response['body']}")
    elif result.error:
        print(f"{result.custom_id}: ERROR - {result.error}")

# Get errors separately (if any)
errors = await service.get_batch_errors(mapping.batch_id)

# Get batch mapping (tracks files)
mapping = service.get_mapping(mapping.batch_id)
print(f"Input: {mapping.input_path}")
print(f"Output: {mapping.output_path}")

Convenience Functions:

from srx_lib_llm import create_batch_from_url, create_batch_from_file, check_batch_status, BatchPayload

# Quick batch from URL
payload = BatchPayload(prompt="Analyze: {text}")
mapping = await create_batch_from_url("https://example.com/data.csv", payload)

# Quick batch from file
mapping = await create_batch_from_file("./data.jsonl", payload)

# Quick status check
info = await check_batch_status(mapping.batch_id)

Advanced Configuration:

payload = BatchPayload(
    prompt="Process: {data}",
    model="gpt-4-turbo",  # Override env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS,
    system_message="You are an expert analyst.",
    temperature=0.5,
    max_tokens=1000,
    top_p=0.9,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    custom_id_prefix="analysis",
    extra_body_params={"response_format": {"type": "json_object"}}  # Additional params
)
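
Under the hood, the OpenAI Batch API consumes a JSONL file in which each line is a request envelope. The payload above should expand each data row into a line like the following (illustrative, wrapped here for readability; the exact body srx-lib-llm emits may differ):

{"custom_id": "analysis-1", "method": "POST", "url": "/v1/chat/completions",
 "body": {"model": "gpt-4-turbo",
          "messages": [{"role": "system", "content": "You are an expert analyst."},
                       {"role": "user", "content": "Process: ..."}],
          "temperature": 0.5, "max_tokens": 1000, "top_p": 0.9,
          "frequency_penalty": 0.5, "presence_penalty": 0.5,
          "response_format": {"type": "json_object"}}}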

Environment Variables

  • OPENAI_API_KEY (required)
  • OPENAI_MODEL (optional, default: gpt-4.1-nano)
  • TAVILY_API_KEY (optional, for the Tavily tool)
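
For local development these can live in a .env file or the shell environment; the values below are placeholders:

OPENAI_API_KEY=sk-...
# Optional, defaults to gpt-4.1-nano
OPENAI_MODEL=gpt-4.1-nano
# Optional, enables the Tavily search tool
TAVILY_API_KEY=tvly-...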

Release

Tag a release vX.Y.Z to publish to GitHub Packages via GitHub Actions.

License

Proprietary © SRX
