
Install

uv add py-ai-toolkit

WHAT

A set of tools for easily interacting with LLMs.

WHY

Building AI-driven software relies on a number of recurring utilities, such as prompt building and calling LLMs over HTTP. In addition, writing agents and workflows with conventional code structures can prove particularly challenging.

HOW

This simple library offers a set of predefined functions for:

  • Easy prompting - you need only provide a path or a template string
  • Calling LLMs - instructor takes care of structured responses
  • Modifying response models - we use Pydantic

Additionally, we provide grafo out of the box for convenient workflow building.

Configuration

PyAIToolkit reads configuration from environment variables by default:

Variable              Description
LLM_MODEL             Model identifier (e.g. gpt-4o)
LLM_API_KEY           API key
LLM_BASE_URL          Base URL for the API
EMBEDDING_MODEL       Embedding model identifier
LLM_REASONING_EFFORT  Reasoning effort (e.g. low, medium, high)
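As a rough sketch (plain `os.environ`, not a documented py-ai-toolkit API), the same configuration can be supplied programmatically, provided the variables are set before the toolkit is constructed:

```python
import os

# Hypothetical illustration: since the toolkit reads these variables by
# default, setting them before instantiation configures it indirectly.
os.environ["LLM_MODEL"] = "gpt-4o"
os.environ["LLM_API_KEY"] = "sk-..."
os.environ["LLM_BASE_URL"] = "https://api.openai.com/v1"

print(os.environ["LLM_MODEL"])  # "gpt-4o"
```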

You can also pass an LLMConfig directly:

from py_ai_toolkit import PyAIToolkit, LLMConfig

toolkit = PyAIToolkit(main_model_config=LLMConfig(model="gpt-4o", api_key="..."))

About Grafo

Grafo (see Recommended Docs below) is a library for building executable DAGs where each node contains a coroutine. Since the DAG abstraction fits AI-driven development particularly well, we provide the BaseWorkflow class with:

  • task — for LLM calling (structured or plain text)
  • create_task_tree — builds a task + validation subtree
  • build_task_node — wraps a task tree in a single Node for use in larger graphs

Examples

Simple text:

from py_ai_toolkit import PyAIToolkit

toolkit = PyAIToolkit()
template = "./prompt.md"
response = await toolkit.chat(template)
print(response.content)

Structured response:

from py_ai_toolkit import PyAIToolkit
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
template = "./prompt.md"  # PROMPT: {{ message }}
response = await toolkit.asend(response_model=Purchase, template=template, message="I want to buy 5 apples")
print(response.content.product)   # "apple"
print(response.content.quantity)  # 5
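Conceptually (instructor's exact mechanics aside), a structured response reduces to validating the model's JSON output into the Pydantic class. A self-contained sketch of that validation step, with no LLM involved:

```python
from pydantic import BaseModel, ValidationError

class Purchase(BaseModel):
    product: str
    quantity: int

# What a structured LLM response reduces to: JSON validated into the model.
parsed = Purchase.model_validate({"product": "apple", "quantity": "5"})
print(parsed.product)   # "apple"
print(parsed.quantity)  # 5, Pydantic coerces the string "5" to int

# Malformed output is rejected rather than silently accepted.
try:
    Purchase.model_validate({"product": "apple"})
except ValidationError:
    print("missing field rejected")
```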

Structured response with model type injection:

from py_ai_toolkit import PyAIToolkit
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
available_fruits = ["apple", "banana", "orange"]
FruitModel = toolkit.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])
response = await toolkit.asend(response_model=FruitModel, template="./prompt.md", message="I want to buy 5 apples")
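`inject_types` narrows a field's type at runtime. One way to sketch the same effect with plain Pydantic (an illustration, not the library's implementation) is `create_model`:

```python
from typing import Literal
from pydantic import BaseModel, ValidationError, create_model

class Purchase(BaseModel):
    product: str
    quantity: int

available_fruits = ("apple", "banana", "orange")

# Rebuild the model with `product` narrowed to a Literal of allowed values.
FruitModel = create_model(
    "FruitModel",
    __base__=Purchase,
    product=(Literal[available_fruits], ...),
)

ok = FruitModel(product="apple", quantity=5)
print(ok.product)  # "apple"

# Values outside the Literal now fail validation.
try:
    FruitModel(product="pear", quantity=1)
except ValidationError:
    print("pear rejected")
```

Narrowing the schema this way lets the LLM's structured output be constrained to known values instead of free-form strings.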

Using run_task with validation:

from py_ai_toolkit import PyAIToolkit, LLMConfig
from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit(main_model_config=LLMConfig())

result = await toolkit.run_task(
    template="""
        You will extract a purchase from the following message:
        {{ message }}
    """.strip(),
    response_model=Purchase,
    kwargs=dict(message="I want to buy 5 apples."),
    config=SingleShotValidationConfig(
        issues=["The identified purchase matches the user's request."],
    ),
)

print(result.product)   # "apple"
print(result.quantity)  # 5

Custom workflow with validation:

from py_ai_toolkit import PyAIToolkit, BaseWorkflow, Node, TreeExecutor
from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
available_fruits = ["apple", "banana", "orange"]
FruitModel = toolkit.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])

class PurchaseWorkflow(BaseWorkflow):
    async def run(self, message: str) -> Purchase:
        executor = await self.create_task_tree(
            template="./purchase.md",
            response_model=FruitModel,
            kwargs=dict(message=message),
            config=SingleShotValidationConfig(
                issues=["The identified purchase matches the user's request."],
            ),
        )
        results = await executor.run()
        return results[0].output

workflow = PurchaseWorkflow(ai_toolkit=toolkit, error_class=ValueError)
result = await workflow.run("I want to buy 5 apples")

Validation Modes

The run_task method supports three validation modes that control how the LLM output is validated:

SingleShotValidationConfig

  • Count: 1 validation attempt
  • Required Ahead: 1 (needs 1 more success than failure)
  • Max Retries: 3
  • Use Case: Simple validation for straightforward tasks where a single validation check is sufficient

from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig

config = SingleShotValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

ThresholdVotingValidationConfig

  • Count: 3 validation attempts (default)
  • Required Ahead: 1 (needs 1 more success than failure)
  • Use Case: Moderate confidence validation where multiple checks provide better reliability

from py_ai_toolkit.core.domain.schemas import ThresholdVotingValidationConfig

config = ThresholdVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

KAheadVotingValidationConfig

  • Count: 5 validation attempts (default)
  • Required Ahead: 3 (needs 3 more successes than failures)
  • Use Case: High-stakes validation where you need strong consensus across multiple validation checks

from py_ai_toolkit.core.domain.schemas import KAheadVotingValidationConfig

config = KAheadVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

All validation configs accept an issues parameter — a list of validation criteria checked against the task output. Each issue is evaluated independently, and the validation passes only if all issues pass according to the configured mode.
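The three modes differ only in how many validation votes are collected and how far ahead successes must be. The consensus rule itself can be sketched as follows (an illustration of the documented counts, not the library's code):

```python
def votes_pass(votes: list[bool], required_ahead: int) -> bool:
    """True when successes lead failures by at least `required_ahead`."""
    successes = sum(votes)
    failures = len(votes) - successes
    return successes - failures >= required_ahead

# SingleShot: 1 vote, must be ahead by 1.
print(votes_pass([True], required_ahead=1))               # True

# ThresholdVoting: 3 votes, ahead by 1, i.e. a simple majority.
print(votes_pass([True, True, False], required_ahead=1))  # True

# KAheadVoting: 5 votes, ahead by 3, i.e. at least 4 of 5 must pass.
print(votes_pass([True, True, True, False, False], 3))    # False
print(votes_pass([True, True, True, True, False], 3))     # True
```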

Recommended Docs
