# llm-interaction

Async Azure OpenAI client with typed tool calling and lazy output parsing.
## Install

```bash
pip install llm-interaction
```
## Quick Start

```python
from pathlib import Path

from llm_interaction import LLMInteraction, tool, ToolContext

llm = LLMInteraction(prompt_dir=Path("prompts"))

result = await llm.query(system="Be helpful", user="Hello")
print(result.text)
```
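`query()` is a coroutine, so a standalone script needs an event loop to drive it. A minimal sketch of that entry point, with a stub coroutine standing in for `LLMInteraction.query()` (the real client needs Azure credentials):

```python
import asyncio


class StubResponse:
    """Stands in for LLMResponse in this sketch."""

    def __init__(self, text: str) -> None:
        self.text = text


async def stub_query(system: str, user: str) -> StubResponse:
    # A real LLMInteraction.query() would call Azure OpenAI here.
    return StubResponse(f"[{system}] echo: {user}")


async def main() -> str:
    result = await stub_query(system="Be helpful", user="Hello")
    return result.text


print(asyncio.run(main()))
```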
## Output Parsing

Every `query()` returns an `LLMResponse`. Parsing is lazy: call the method you need.

```python
# Raw text
result.text

# JSON (extracts the last ```json block, falls back to json_repair)
data = result.json()

# YAML
data = result.yaml()

# Scratchpad + JSON: splits reasoning text from structured data
scratchpad, data = result.scratchpad_json()
# scratchpad = "Let me analyze this step by step..."
# data = {"topics": ["ai", "ml"]}

# Scratchpad + YAML
scratchpad, data = result.scratchpad_yaml()

# Pydantic model (validates + auto-retries on failure)
from pydantic import BaseModel

class Analysis(BaseModel):
    topics: list[str]
    confidence: float

analysis = await result.parse(Analysis)
# On validation error, re-queries the LLM with the error message,
# using previous_response_id for efficient context chaining
```
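The "last ```json block" extraction described above can be sketched in plain Python. The regex and the plain-`json.loads` fallback here are assumptions about the approach, not the library's actual code (the README says malformed JSON goes through `json_repair` instead):

```python
import json
import re


def extract_json(text: str) -> dict:
    """Parse the last ```json fenced block in a model reply.

    Falls back to parsing the whole reply if no fenced block is found.
    """
    blocks = re.findall(r"```json\s*(.*?)```", text, flags=re.DOTALL)
    candidate = blocks[-1] if blocks else text
    return json.loads(candidate)


reply = 'Let me analyze this step by step...\n```json\n{"topics": ["ai", "ml"]}\n```'
print(extract_json(reply))  # {'topics': ['ai', 'ml']}
```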
## Tool Calling

```python
@tool
def search(query: str, max_results: int = 10) -> list[dict]:
    """Search for documents.

    Args:
        query: The search query string
        max_results: Maximum number of results to return
    """
    return db.search(query, limit=max_results)

@tool(stop=True)
def submit(answer: str) -> str:
    """Submit the final answer."""
    return "done"

result = await llm.agent_loop(
    system="You are a research agent.",
    user="Find info about quantum computing.",
    tools=[search, submit],
)
```
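For typed tool calling, a decorator like `@tool` presumably derives a JSON-Schema-style spec from the function signature and docstring. A rough sketch of that derivation under those assumptions (`tool_schema` and `TYPE_MAP` are hypothetical names, not the library's API):

```python
import inspect

# Map a handful of Python annotations to JSON Schema type names.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}


def tool_schema(fn) -> dict:
    """Build a JSON-Schema-style parameter spec from a function signature."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        # Parameters without defaults are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }


def search(query: str, max_results: int = 10) -> list:
    """Search for documents."""


schema = tool_schema(search)
# schema["name"] == "search"; only "query" is required
```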
## Jinja Templates

Templates follow the naming convention `{name}_system.jinja` and `{name}_user.jinja`.

`prompts/research_system.jinja`:

```jinja
You are a {{ role }} assistant.
```

`prompts/research_user.jinja`:

```jinja
Find information about {{ topic }}.
```

```python
result = await llm.query_template(
    prompt_name="research",
    variables={"role": "research", "topic": "quantum computing"},
)
```
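The naming convention above implies a simple path lookup. A sketch of that resolution, with a crude regex substitution standing in for real Jinja rendering (both helpers are illustrative, not the library's API):

```python
import re
from pathlib import Path


def resolve_templates(prompt_dir: Path, prompt_name: str) -> tuple[Path, Path]:
    """Apply the {name}_system.jinja / {name}_user.jinja convention."""
    return (
        prompt_dir / f"{prompt_name}_system.jinja",
        prompt_dir / f"{prompt_name}_user.jinja",
    )


def render(template: str, variables: dict) -> str:
    """Crude {{ var }} substitution; the real library uses Jinja."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}", lambda m: str(variables[m.group(1)]), template
    )


system_path, user_path = resolve_templates(Path("prompts"), "research")
rendered = render("Find information about {{ topic }}.", {"topic": "quantum computing"})
# rendered == "Find information about quantum computing."
```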
## Context Injection

```python
class WeatherAPI:
    def get(self, city: str) -> dict:
        return {"city": city, "temp_c": 22, "condition": "sunny"}

@tool
def get_weather(ctx: ToolContext[WeatherAPI], city: str) -> dict:
    """Get current weather for a city.

    Args:
        city: City name to look up
    """
    return ctx.get(city)

weather_api = WeatherAPI()

result = await llm.agent_loop(
    system="You are a helpful assistant with weather access.",
    user="What's the weather in Oslo?",
    tools=[get_weather],
    context=[weather_api],  # matched by type to ToolContext[WeatherAPI]
)
```
A single tool can use multiple contexts, each matched by type:

```python
class WeatherAPI:
    def get(self, city: str) -> dict:
        return {"city": city, "temp_c": 22, "condition": "sunny"}

class UserPreferences:
    def __init__(self, unit: str = "celsius"):
        self.unit = unit

@tool
def get_weather(
    weather: ToolContext[WeatherAPI],
    prefs: ToolContext[UserPreferences],
    city: str,
) -> dict:
    """Get weather for a city in the user's preferred unit.

    Args:
        city: City name to look up
    """
    data = weather.get(city)
    if prefs.unit == "fahrenheit":
        data["temp_f"] = data["temp_c"] * 9 / 5 + 32
    return data

result = await llm.agent_loop(
    system="You are a weather assistant.",
    user="What's the weather in Oslo?",
    tools=[get_weather],
    context=[WeatherAPI(), UserPreferences(unit="fahrenheit")],
)
```
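Matching context objects to parameters by type can be done with standard `typing` introspection. A self-contained sketch of one plausible mechanism (the `ToolContext` class and `bind_contexts` helper here are stand-ins, not the library's implementation):

```python
from typing import Generic, TypeVar, get_args, get_origin, get_type_hints

T = TypeVar("T")


class ToolContext(Generic[T]):
    """Stand-in for the library's ToolContext marker type."""


def bind_contexts(fn, contexts: list) -> dict:
    """Match each ToolContext[X] parameter to the context instance of type X."""
    bound = {}
    for name, annotation in get_type_hints(fn).items():
        if get_origin(annotation) is ToolContext:
            (wanted,) = get_args(annotation)  # the X in ToolContext[X]
            bound[name] = next(c for c in contexts if isinstance(c, wanted))
    return bound


class WeatherAPI:
    pass


class UserPreferences:
    pass


def get_weather(
    weather: ToolContext[WeatherAPI],
    prefs: ToolContext[UserPreferences],
    city: str,
) -> str:
    return city


bound = bind_contexts(get_weather, [WeatherAPI(), UserPreferences()])
# bound["weather"] is the WeatherAPI instance; bound["prefs"] the UserPreferences one
```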
## Environment Variables

```bash
LLM_INTERACTION_API_KEY=your-api-key
LLM_INTERACTION_ENDPOINT=https://your-resource.openai.azure.com
LLM_INTERACTION_MODEL=gpt-4o
```
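The client presumably reads these at construction time via `os.environ`. A minimal sketch, assuming the model falls back to a default when unset (that fallback is an assumption, not documented behavior):

```python
import os


def load_config() -> dict:
    """Read client settings from the environment (names from the README)."""
    return {
        "api_key": os.environ["LLM_INTERACTION_API_KEY"],
        "endpoint": os.environ["LLM_INTERACTION_ENDPOINT"],
        "model": os.environ.get("LLM_INTERACTION_MODEL", "gpt-4o"),
    }


os.environ["LLM_INTERACTION_API_KEY"] = "test-key"
os.environ["LLM_INTERACTION_ENDPOINT"] = "https://example.openai.azure.com"
config = load_config()
```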
## License

MIT