
LLM helpers for SRX services: ChatOpenAI wrapper, tool base, Tavily tool, OpenAI Batch API, and infrastructure-agnostic batch state management


srx-lib-llm

LLM helpers for SRX services built on LangChain.

What it includes:

  • responses_chat(prompt, cache=False): simple text chat via the OpenAI Responses API
  • Tool strategy base and registry
  • Tavily search tool strategy
  • Structured output helpers: build Pydantic model from JSON Schema and generate structured outputs via LLM
  • Request models, e.g. DynamicStructuredOutputRequest
  • OpenAI Batch API service: a wrapper for asynchronous batch processing at OpenAI's 50% batch discount

Designed to work with the official OpenAI API only.

Install

PyPI (public):

  • pip install srx-lib-llm

uv (pyproject):

[project]
dependencies = ["srx-lib-llm>=0.1.0"]

Usage

from srx_lib_llm import responses_chat
text = await responses_chat("Hello there", cache=True)

Structured output from JSON Schema:

from srx_lib_llm import StructuredOutputGenerator, build_model_from_schema, preprocess_json_schema

json_schema = {
  "type": "object",
  "properties": {
    "title": {"type": "string"},
    "score": {"type": "number"}
  },
  "required": ["title"]
}

gen = StructuredOutputGenerator()
model = build_model_from_schema("MyOutput", preprocess_json_schema(json_schema))
result = await gen.generate_from_model("Give me a title and score", model)
print(result.model_dump())

All-in-one extraction:

from srx_lib_llm import extract_structured

result = await extract_structured(
    text="Analyze this text...", json_schema=my_schema, schema_name="MyOutput"
)
print(result.model_dump())

Back-compat helpers and request models:

from srx_lib_llm import create_dynamic_schema, DynamicStructuredOutputRequest

schema_model = create_dynamic_schema("MyOutput", json_schema)
payload = DynamicStructuredOutputRequest(text="...", json_schema=json_schema)

Tools:

from srx_lib_llm.tools import ToolStrategyBase, register_strategy, get_strategies
from srx_lib_llm.tools.tavily import TavilyToolStrategy

register_strategy(TavilyToolStrategy())
strategies = get_strategies()

Optional Langfuse Tracing

Set the Langfuse environment variables to enable tracing for all LangChain and LangGraph flows. Without them, the library behaves exactly as before.

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
# Optional, defaults to https://cloud.langfuse.com
LANGFUSE_BASE_URL=https://cloud.langfuse.com

When available, Langfuse's CallbackHandler is attached automatically to:

  • responses_chat
  • Structured output helpers
  • LangGraph agents created through ToolStrategyBase
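Since the gating is driven purely by environment variables, it can be sketched as follows. This is an illustrative sketch, not the library's actual code; the helper names `langfuse_enabled` and `langfuse_base_url` are assumptions:

```python
import os

def langfuse_enabled() -> bool:
    """Tracing is active only when both Langfuse keys are present."""
    return bool(os.getenv("LANGFUSE_PUBLIC_KEY")) and bool(os.getenv("LANGFUSE_SECRET_KEY"))

def langfuse_base_url() -> str:
    """Base URL falls back to the Langfuse cloud endpoint when unset."""
    return os.getenv("LANGFUSE_BASE_URL", "https://cloud.langfuse.com")
```

Because detection happens at call time, no application code changes are needed to turn tracing on or off.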

OpenAI Batch API

Process large volumes of requests asynchronously at OpenAI's 50% batch discount.

Key Features:

  • Supports CSV, JSONL, and NDJSON data files
  • Smart prompt handling: row-level or global prompts with variable interpolation
  • Model selection via the OPENAI_MODEL env var
  • Automatic file-format detection
  • Batch progress tracking and result retrieval
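Judging by the `{name}`-style placeholders in the examples below, variable interpolation presumably follows Python `str.format` semantics, substituting each row's column values into the global prompt. A simplified sketch (the function name `render_prompt` is illustrative, not part of the library API):

```python
def render_prompt(template: str, row: dict) -> str:
    """Fill {column} placeholders in a global prompt from one data row."""
    return template.format(**row)

row = {"name": "Alice", "age": "30", "city": "NYC"}
prompt = render_prompt("Analyze this person: {name}, age {age}, from {city}.", row)
# → "Analyze this person: Alice, age 30, from NYC."
```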

Basic Usage with CSV Data:

from srx_lib_llm import OpenAIBatchService, BatchPayload, BatchEndpoint

# Your CSV file: data.csv
# name,age,city
# Alice,30,NYC
# Bob,25,SF

service = OpenAIBatchService()

# Create payload with global prompt
payload = BatchPayload(
    prompt="Analyze this person: {name}, age {age}, from {city}. What can you infer?",
    model=None,  # Uses OPENAI_MODEL env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS,
    system_message="You are a data analyst.",
    temperature=0.7
)

# Create batch from local file
mapping = await service.create_batch_from_file(
    file_path="./data.csv",
    payload=payload
)

# Or from URL
mapping = await service.create_batch_from_url(
    url="https://example.com/data.csv",
    payload=payload
)

print(f"Batch created: {mapping.batch_id}")

Row-Level Prompts (Prompt Column Wins):

# Your CSV file: custom_prompts.csv
# custom_id,prompt,context
# req-1,Summarize this: foo bar baz,important
# req-2,Translate to Spanish: hello world,casual

# Row-level 'prompt' column takes precedence over global prompt
payload = BatchPayload(
    # No need for global prompt if data has 'prompt' column
    model="gpt-4",  # Override OPENAI_MODEL env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS
)

mapping = await service.create_batch_from_file("./custom_prompts.csv", payload)

From In-Memory Data:

data = [
    {"name": "Alice", "question": "What is AI?"},
    {"name": "Bob", "question": "Explain quantum computing"},
]

payload = BatchPayload(
    prompt="Answer {name}'s question: {question}",
    custom_id_prefix="answer"  # Generates answer-1, answer-2, etc.
)

mapping = await service.create_batch_from_data(data, payload)
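Custom IDs are presumably generated by numbering rows under the prefix, as the `answer-1, answer-2` comment above suggests. A minimal sketch (`make_custom_ids` is an illustrative helper, not a library function):

```python
def make_custom_ids(n_rows: int, prefix: str = "request") -> list[str]:
    """Generate sequential per-row IDs: prefix-1, prefix-2, ..."""
    return [f"{prefix}-{i}" for i in range(1, n_rows + 1)]

ids = make_custom_ids(2, "answer")
# → ["answer-1", "answer-2"]
```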

Check Status and Get Results:

# Check batch status
info = await service.get_batch_status(mapping.batch_id)
print(f"Status: {info.status}")
print(f"Progress: {info.request_counts}")

# Wait for completion (optional)
info = await service.wait_for_completion(mapping.batch_id, poll_interval=60)

# Get results
results = await service.get_batch_results(mapping.batch_id)
for result in results:
    if result.response:
        print(f"{result.custom_id}: {result.response['body']}")
    elif result.error:
        print(f"{result.custom_id}: ERROR - {result.error}")

# Get errors separately (if any)
errors = await service.get_batch_errors(mapping.batch_id)

# Get batch mapping (tracks files)
mapping = service.get_mapping(mapping.batch_id)
print(f"Input: {mapping.input_path}")
print(f"Output: {mapping.output_path}")
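Under the hood, Batch API output files are JSONL: one JSON object per request carrying `custom_id` plus either a `response` or an `error`, which is what the result objects above expose. A sketch of parsing such a file without the service wrapper (field layout follows OpenAI's documented batch output format):

```python
import json

def parse_batch_output(jsonl_text: str) -> list[dict]:
    """Parse a batch output file into per-request result dicts."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

sample = '{"custom_id": "req-1", "response": {"status_code": 200, "body": {}}, "error": null}'
results = parse_batch_output(sample)
# results[0]["custom_id"] → "req-1"
```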

Convenience Functions:

from srx_lib_llm import create_batch_from_url, create_batch_from_file, check_batch_status, BatchPayload

# Quick batch from URL
payload = BatchPayload(prompt="Analyze: {text}")
mapping = await create_batch_from_url("https://example.com/data.csv", payload)

# Quick batch from file
mapping = await create_batch_from_file("./data.jsonl", payload)

# Quick status check
info = await check_batch_status(mapping.batch_id)

Advanced Configuration:

payload = BatchPayload(
    prompt="Process: {data}",
    model="gpt-4-turbo",  # Override env var
    endpoint=BatchEndpoint.CHAT_COMPLETIONS,
    system_message="You are an expert analyst.",
    temperature=0.5,
    max_tokens=1000,
    top_p=0.9,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    custom_id_prefix="analysis",
    extra_body_params={"response_format": {"type": "json_object"}}  # Additional params
)

Environment Variables

  • OPENAI_API_KEY (required)
  • OPENAI_MODEL (optional, default: gpt-4.1-nano)
  • TAVILY_API_KEY (optional, for the Tavily tool)

Release

Tag vX.Y.Z to publish to GitHub Packages via Actions.

License

Proprietary © SRX
