Project description

llm-interaction

Async Azure OpenAI client with typed tool calling and lazy output parsing.

Install

pip install llm-interaction

Quick Start

import asyncio
from pathlib import Path

# tool and ToolContext are used in the examples below
from llm_interaction import LLMInteraction, tool, ToolContext

llm = LLMInteraction(prompt_dir=Path("prompts"))

async def main() -> None:
    result = await llm.query(system="Be helpful", user="Hello")
    print(result.text)

asyncio.run(main())

Output Parsing

Every query() returns an LLMResponse. Parsing is lazy — call the method you need:

# Raw text
result.text

# JSON (extracts last ```json block, falls back to json_repair)
data = result.json()

# YAML
data = result.yaml()

# Scratchpad + JSON: splits reasoning text from structured data
scratchpad, data = result.scratchpad_json()
# scratchpad = "Let me analyze this step by step..."
# data = {"topics": ["ai", "ml"]}

# Scratchpad + YAML
scratchpad, data = result.scratchpad_yaml()

# Pydantic model (validates + auto-retries on failure)
from pydantic import BaseModel

class Analysis(BaseModel):
    topics: list[str]
    confidence: float

analysis = await result.parse(Analysis)
# On validation error, re-queries the LLM with the error message
# using previous_response_id for efficient context chaining
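
The extraction behind result.json() can be approximated like this. This is a sketch of the strategy described above, assuming the json-repair package supplies the fallback; it is not the library's actual code:

import json
import re

from json_repair import repair_json

def extract_json(text: str):
    # Prefer the last fenced ```json block; otherwise use the whole response.
    blocks = re.findall(r"```json\s*(.*?)```", text, flags=re.DOTALL)
    candidate = blocks[-1] if blocks else text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # json_repair tolerates common LLM artifacts such as trailing
        # commas, single quotes, and unquoted keys.
        return json.loads(repair_json(candidate))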

Tool Calling

@tool
def search(query: str, max_results: int = 10) -> list[dict]:
    """Search for documents.

    Args:
        query: The search query string
        max_results: Maximum number of results to return
    """
    return db.search(query, limit=max_results)  # "db": your own search backend

@tool(stop=True)
def submit(answer: str) -> str:
    """Submit the final answer."""
    return "done"

result = await llm.agent_loop(
    system="You are a research agent.",
    user="Find info about quantum computing.",
    tools=[search, submit],
)
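
@tool(stop=True) marks submit as a terminating tool: once the model calls it, the agent loop ends. Each tool's signature and docstring presumably drive the function schema the model sees; a sketch of what could be derived for search, in the standard OpenAI function-calling format (hypothetical, the exact schema llm-interaction emits may differ):

# Hypothetical schema derived from search's signature and docstring.
search_schema = {
    "type": "function",
    "name": "search",
    "description": "Search for documents.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query string",
            },
            "max_results": {
                "type": "integer",
                "description": "Maximum number of results to return",
                "default": 10,
            },
        },
        "required": ["query"],
    },
}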

Jinja Templates

Templates use the naming convention {name}_system.jinja and {name}_user.jinja:

# prompts/research_system.jinja
You are a {{ role }} assistant.

# prompts/research_user.jinja
Find information about {{ topic }}.

result = await llm.query_template(
    prompt_name="research",
    variables={"role": "research", "topic": "quantum computing"},
)
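
Under this convention, query_template is roughly equivalent to rendering both files yourself and passing the results to query(). A sketch using plain Jinja2 (not the library's internals):

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("prompts"))
system = env.get_template("research_system.jinja").render(role="research")
user = env.get_template("research_user.jinja").render(topic="quantum computing")

result = await llm.query(system=system, user=user)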

Context Injection

class WeatherAPI:
    def get(self, city: str) -> dict:
        return {"city": city, "temp_c": 22, "condition": "sunny"}

@tool
def get_weather(ctx: ToolContext[WeatherAPI], city: str) -> dict:
    """Get current weather for a city.

    Args:
        city: City name to look up
    """
    return ctx.get(city)

weather_api = WeatherAPI()

result = await llm.agent_loop(
    system="You are a helpful assistant with weather access.",
    user="What's the weather in Oslo?",
    tools=[get_weather],
    context=[weather_api],  # matched by type to ToolContext[WeatherAPI]
)

A single tool can use multiple contexts, each matched by type:

class WeatherAPI:
    def get(self, city: str) -> dict:
        return {"city": city, "temp_c": 22, "condition": "sunny"}

class UserPreferences:
    def __init__(self, unit: str = "celsius"):
        self.unit = unit

@tool
def get_weather(
    weather: ToolContext[WeatherAPI],
    prefs: ToolContext[UserPreferences],
    city: str,
) -> dict:
    """Get weather for a city in the user's preferred unit.

    Args:
        city: City name to look up
    """
    data = weather.get(city)
    if prefs.unit == "fahrenheit":
        data["temp_f"] = data["temp_c"] * 9 / 5 + 32
    return data

result = await llm.agent_loop(
    system="You are a weather assistant.",
    user="What's the weather in Oslo?",
    tools=[get_weather],
    context=[WeatherAPI(), UserPreferences(unit="fahrenheit")],
)
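
Matching is driven by the type parameter of each ToolContext annotation. A minimal sketch of how such by-type resolution can be implemented (hypothetical helper, not the library's actual code):

from typing import get_args, get_origin, get_type_hints

def resolve_contexts(func, contexts: list) -> dict:
    # Pair each ToolContext[T] parameter with the first supplied
    # context object that is an instance of T.
    resolved = {}
    for name, hint in get_type_hints(func).items():
        if get_origin(hint) is ToolContext:
            (wanted,) = get_args(hint)
            resolved[name] = next(c for c in contexts if isinstance(c, wanted))
    return resolved

# resolve_contexts(get_weather, [WeatherAPI(), UserPreferences()])
# -> {"weather": <WeatherAPI>, "prefs": <UserPreferences>}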

Environment Variables

LLM_INTERACTION_API_KEY=your-api-key
LLM_INTERACTION_ENDPOINT=https://your-resource.openai.azure.com
LLM_INTERACTION_MODEL=gpt-4o
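
The same values can also be set from code before constructing the client, assuming (as the names suggest) they are read from the environment at construction time:

import os

# For local experiments only; prefer shell exports or a .env file.
os.environ.setdefault("LLM_INTERACTION_API_KEY", "your-api-key")
os.environ.setdefault("LLM_INTERACTION_ENDPOINT", "https://your-resource.openai.azure.com")
os.environ.setdefault("LLM_INTERACTION_MODEL", "gpt-4o")

llm = LLMInteraction(prompt_dir=Path("prompts"))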

License

MIT

Download files

Download the file for your platform.

Source Distribution

llm_interaction-0.1.1.tar.gz (18.4 kB, Source)

Built Distribution

llm_interaction-0.1.1-py3-none-any.whl (13.8 kB, Python 3)

File details

Details for the file llm_interaction-0.1.1.tar.gz.

File metadata

  • Download URL: llm_interaction-0.1.1.tar.gz
  • Upload date:
  • Size: 18.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.11

File hashes

Hashes for llm_interaction-0.1.1.tar.gz

  • SHA256: e8ad3bd0de62a1ea1fc2fe6de40c92be783a8ca779ec44433c74fcab606dbfba
  • MD5: b4da09dac8e411bc914fed6a45e18b9b
  • BLAKE2b-256: e31ef7f0fcff35a6d9b5080a7440ec7e3caf20b75df6994be299ff83798d05b1


File details

Details for the file llm_interaction-0.1.1-py3-none-any.whl.

File hashes

Hashes for llm_interaction-0.1.1-py3-none-any.whl

  • SHA256: 68dde3c0ad4281e5d790a5c4f4336acfe959cf39cf41cebf7d8ae63091aab6df
  • MD5: b07d4b5db886d40d0d31e2d5f7ed3d80
  • BLAKE2b-256: 3ce9bf876c42d3de1e233d997f7d1fed917f1e81dbb8619379f6eb66e48d4666

