llmutil

A set of utility functions for handling LLM structured output.

This library provides tools for generating structured output and performing function calling with the OpenAI API.

What is Structured Output?

Structured Output is the recommended method for getting formatted responses. It guarantees that outputs conform to your defined schema, making it more reliable than earlier approaches such as JSON mode, function calling, or tool use.

How to Use

Structured Output

To use Structured Output, you need to define a schema using a simple dictionary format:

from llmutil import new_response

output = new_response(
    [
        {
            "role": "user",
            "content": "normalize this address: 1 hacker way, menlo park, california",
        }
    ],
    model="gpt-4.1-mini",
    schema={
        "street": "string",
        "city": "string",
        "state": "string",
    },
)
# {'street': '1 Hacker Way', 'city': 'Menlo Park', 'state': 'CA'}
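The simple dict format maps naturally onto JSON Schema, which is what the OpenAI Structured Output feature consumes under the hood. llmutil's actual translation is internal; the `to_json_schema` helper below is a hypothetical sketch of such a conversion, not part of llmutil's API:

```python
# Hypothetical helper (illustration only, not part of llmutil):
# convert the simple {field: type} dict into the JSON Schema shape
# that OpenAI Structured Output expects in strict mode.
def to_json_schema(schema: dict) -> dict:
    return {
        "type": "object",
        "properties": {name: {"type": t} for name, t in schema.items()},
        "required": list(schema),       # strict mode requires every key
        "additionalProperties": False,  # and forbids extra keys
    }

address = to_json_schema({"street": "string", "city": "string", "state": "string"})
# address["properties"]["city"] == {"type": "string"}
```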

Function Calling

For function calling, implement the Tooling protocol to define available functions:

from llmutil import Result, Tooling, new_response

def add(a, b):
    return a + b

class UseAdd(Tooling):
    def on_function_call(self, name, args):
        if name == "add":
            return Result(add(args["a"], args["b"]))

    def get_tools(self):
        return {
            "add": {
                "a": "number",
                "b": "number",
            }
        }

messages = [
    {
        "role": "system",
        "content": "you cannot do math. you must use the add() function to add numbers.",
    },
    {
        "role": "user",
        "content": "alice has 10 apples, bob has 20 apples, how many apples do they have in total?",
    },
]

output = new_response(messages, model="gpt-4.1-mini", tooling=UseAdd())
# Alice and Bob have 30 apples in total.
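Internally, new_response routes each function call the model requests to on_function_call and sends wrapped results back to the model. The loop below is a self-contained sketch of that round trip, not llmutil's actual implementation: Result is a minimal stand-in for the real class, and the dispatch is driven by a hard-coded call list rather than a live model.

```python
class Result:
    """Minimal stand-in for llmutil.Result -- wraps a value to send back."""
    def __init__(self, value):
        self.value = value

def add(a, b):
    return a + b

class UseAdd:
    def on_function_call(self, name, args):
        if name == "add":
            return Result(add(args["a"], args["b"]))

    def get_tools(self):
        return {"add": {"a": "number", "b": "number"}}

def run_tool_calls(tooling, calls):
    """Sketch of the dispatch loop new_response might run: resolve each
    function call the model asked for and collect the unwrapped values."""
    results = []
    for name, args in calls:
        out = tooling.on_function_call(name, args)
        if isinstance(out, Result):
            results.append(out.value)  # would be fed back to the model
        else:
            return out                 # any non-Result value short-circuits
    return results

print(run_tool_calls(UseAdd(), [("add", {"a": 10, "b": 20})]))  # [30]
```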

API Reference

new_response(messages, *, model, tooling=None, schema=None, memory=None, timeout=30)

Main function for generating responses from the OpenAI API.

  • messages: List of message dictionaries in OpenAI format
  • model: OpenAI model name (e.g., "gpt-4.1-mini")
  • tooling: Optional Tooling implementation for function calling
  • schema: Optional dictionary defining the output schema
  • memory: Optional vector store ID for file search
  • timeout: Request timeout in seconds (default: 30)

Tooling Protocol

Protocol for implementing function calling:

  • on_function_call(self, name: str, args: dict): Handle a function call; return a Result to continue the conversation, or return any other value to end it immediately and hand that value back to the caller
  • get_tools(self): Return dictionary mapping function names to parameter schemas

Result Class

Wrapper for function call results that should continue the conversation:

  • Result(result): Wrap a result to continue the conversation flow
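One practical use of the immediate-return path is bailing out on bad input: wrap normal results in Result so the model continues, but return a plain value to hand control straight back to the caller. A self-contained sketch (Result and the SafeDivide tool here are illustrative stand-ins, not part of llmutil):

```python
class Result:
    # Minimal stand-in for llmutil.Result (illustration only).
    def __init__(self, value):
        self.value = value

class SafeDivide:
    def on_function_call(self, name, args):
        if name == "divide":
            if args["b"] == 0:
                # Plain value: new_response would stop and return this directly.
                return "error: division by zero"
            # Wrapped value: new_response would feed 5.0 back to the model.
            return Result(args["a"] / args["b"])

    def get_tools(self):
        return {"divide": {"a": "number", "b": "number"}}

tool = SafeDivide()
assert tool.on_function_call("divide", {"a": 10, "b": 2}).value == 5.0
assert tool.on_function_call("divide", {"a": 1, "b": 0}) == "error: division by zero"
```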
