py-ai-toolkit

A set of tools for easily interacting with LLMs.

Install

uv add py-ai-toolkit

WHAT

A set of tools for easily interacting with LLMs.

WHY

Building AI-driven software relies on a number of recurring utilities, such as prompt building and calling LLMs over HTTP. Additionally, writing agents and workflows can prove particularly challenging with conventional code structures.

HOW

This simple library offers a set of predefined functions for:

  • Easy prompting - you need only provide a path or a template
  • Calling LLMs - instructor takes care of that for us
  • Modifying response models - we use Pydantic (duh)

Additionally, we provide grafo out of the box for convenient workflow building.
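
The prompt templates in the examples below use Jinja-style `{{ ... }}` placeholders. As a rough illustration of what template substitution does (a minimal standalone sketch, not the library's actual implementation, which presumably uses a real template engine):

```python
import re

def render(template: str, **kwargs) -> str:
    # Substitute {{ name }} placeholders with the matching keyword arguments.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(kwargs[m.group(1)]),
        template,
    )

print(render("PROMPT: {{ message }}", message="I want to buy 5 apples"))
# PROMPT: I want to buy 5 apples
```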

About Grafo

Grafo (see Recommended Docs below) is a library for building executable DAGs in which each node wraps a coroutine. Since the DAG abstraction fits AI-driven development particularly well, we provide the BaseWorkflow class with the following methods:

  • task for LLM calling
  • redirect to help you manage redirections in your grafo workflows
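
To give a feel for the executable-DAG idea (a minimal asyncio sketch of the concept only; grafo's actual Node and TreeExecutor interfaces differ, see its docs):

```python
import asyncio

class Node:
    """A node wraps a coroutine factory and runs after its parent finishes."""

    def __init__(self, name, make_coro):
        self.name = name
        self.make_coro = make_coro
        self.children = []

    def connect(self, child):
        self.children.append(child)

async def execute(node, results):
    # Run this node, then fan out to its children concurrently.
    results[node.name] = await node.make_coro()
    await asyncio.gather(*(execute(child, results) for child in node.children))

async def main():
    extract = Node("extract", lambda: asyncio.sleep(0, result="purchase parsed"))
    validate = Node("validate", lambda: asyncio.sleep(0, result=True))
    extract.connect(validate)
    results = {}
    await execute(extract, results)
    return results

print(asyncio.run(main()))
# {'extract': 'purchase parsed', 'validate': True}
```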

Examples

Simple text:

from py_ai_toolkit import AIT

ait = AIT("gpt-5")
template = "./prompt.md"
response = ait.chat(template)
print(response.completion)
print(response.content)

Structured response:

from py_ai_toolkit import AIT
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

ait = AIT("gpt-5")
template = "./prompt.md" # PROMPT: {{ message }}
message = "I want to buy 5 apples"
response = ait.asend(response_model=Purchase, template=template, message=message)

Structured response with model type injection:

from py_ai_toolkit import AIT
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

ait = AIT("gpt-5")
template = "./prompt.md" # PROMPT: {{ message }}
message = "I want to buy 5 apples"
available_fruits = ["apple", "banana", "orange"]
FruitModel = ait.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])
response = ait.asend(response_model=FruitModel, template=template, message=message)
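
Type injection presumably amounts to rebuilding the model with a narrowed field type. A standalone sketch of the same effect using Pydantic's create_model (an illustration, not inject_types' actual implementation):

```python
from typing import Literal
from pydantic import BaseModel, ValidationError, create_model

class Purchase(BaseModel):
    product: str
    quantity: int

available_fruits = ["apple", "banana", "orange"]

# Subscripting Literal with a tuple expands to Literal["apple", "banana", "orange"]
FruitModel = create_model(
    "FruitModel",
    __base__=Purchase,
    product=(Literal[tuple(available_fruits)], ...),
)

print(FruitModel(product="apple", quantity=5))  # accepted
try:
    FruitModel(product="grape", quantity=1)
except ValidationError:
    print("grape rejected")  # not in the allowed set
```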

Using run_task with validation:

from py_ai_toolkit import PyAIToolkit
from py_ai_toolkit.core.domain.interfaces import (
    LLMConfig,
    SingleShotValidationConfig,
)
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

ai_toolkit = PyAIToolkit(main_model_config=LLMConfig())

result = await ai_toolkit.run_task(
    template="""
        You will extract a purchase from the following message:
        {{ message }}
    """.strip(),
    response_model=Purchase,
    kwargs=dict(message="I want to buy 5 apples."),
    config=SingleShotValidationConfig(
        issues=["The identified purchase matches the user's request."],
    ),
)

print(result.product)  # "apple"
print(result.quantity)  # 5

Simple workflow:

from py_ai_toolkit import AIT, BaseWorkflow, BaseValidation, Node, TreeExecutor
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

ait = AIT("gpt-5")
prompts_path = "./"
message = "I want to buy 5 apples"
available_fruits = ["apple", "banana", "orange"]
FruitModel = ait.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])

class PurchaseWorkflow(BaseWorkflow):
    def __init__(self, *args, **kwargs):
        # Pass configuration through to BaseWorkflow
        super().__init__(*args, **kwargs)

    async def run(self, message) -> Purchase:
        purchase_node = Node[FruitModel](
            uuid="fruit purchase node",
            coroutine=self.task,
            kwargs=dict(
                template=f"{prompts_path}/purchase.md",
                response_model=FruitModel,
                message=message,
            )
        )
        validation_node = self.create_validation_node(
            input=message,
            output=purchase_node.output,
            issues=["The identified purchase matches the user's request."],
            source_node=purchase_node,
        )

        await purchase_node.connect(validation_node)
        executor = TreeExecutor(uuid="Purchase Workflow", roots=[purchase_node])
        await executor.run()

        if not purchase_node.output or not validation_node.output:
            raise ValueError("Purchase validation failed.")

        if not validation_node.output.valid:
            raise ValueError("Purchase failed validation.")

        return purchase_node.output

Validation Modes

The run_task method supports three validation modes that control how the LLM output is validated:

SingleShotValidationConfig

  • Count: 1 validation attempt
  • Required Ahead: 1 (needs 1 more success than failure)
  • Max Retries: 3
  • Use Case: Simple validation for straightforward tasks where a single validation check is sufficient

from py_ai_toolkit.core.domain.interfaces import SingleShotValidationConfig

config = SingleShotValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

ThresholdVotingValidationConfig

  • Count: 3 validation attempts (default)
  • Required Ahead: 1 (needs 1 more success than failure)
  • Use Case: Moderate confidence validation where multiple checks provide better reliability

from py_ai_toolkit.core.domain.interfaces import ThresholdVotingValidationConfig

config = ThresholdVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

KAheadVotingValidationConfig

  • Count: 5 validation attempts (default)
  • Required Ahead: 3 (needs 3 more successes than failures)
  • Use Case: High-stakes validation where you need strong consensus across multiple validation checks

from py_ai_toolkit.core.domain.interfaces import KAheadVotingValidationConfig

config = KAheadVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

All validation configs accept an issues parameter, which is a list of validation criteria that will be checked against the task output. Each issue is evaluated independently, and the validation passes only if all issues pass according to the configured mode.
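
The "required ahead" mechanic can be illustrated with a small helper (a sketch of the voting arithmetic as described above, assuming successes and failures are simply counted; this is not the library's code):

```python
def passes(votes: list[bool], required_ahead: int) -> bool:
    # A run of validations passes when successes lead failures by the margin.
    successes = sum(votes)
    failures = len(votes) - successes
    return successes - failures >= required_ahead

# SingleShot: 1 vote, required_ahead=1
print(passes([True], 1))                            # True
# KAheadVoting: 5 votes, required_ahead=3 -> needs at least 4 of 5
print(passes([True, True, True, True, False], 3))   # True  (4 - 1 = 3)
print(passes([True, True, True, False, False], 3))  # False (3 - 2 = 1)
```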

Recommended Docs
