
Install

uv add py-ai-toolkit

WHAT

A set of tools for easily interacting with LLMs.

WHY

Building AI-driven software relies on a number of utilities, such as prompt building and calling LLMs via HTTP requests. Additionally, agents and workflows can be particularly challenging to write with conventional code structures.

HOW

This simple library offers a set of predefined functions for:

  • Easy prompting - you need only provide a path or a template string
  • Calling LLMs - instructor takes care of structured responses
  • Modifying response models - we use Pydantic

Additionally, we provide grafo out of the box for convenient workflow building.

Configuration

PyAIToolkit reads configuration from environment variables by default:

Variable              Description
--------              -----------
LLM_MODEL             Model identifier (e.g. gpt-4o)
LLM_API_KEY           API key
LLM_BASE_URL          Base URL for the API
EMBEDDING_MODEL       Embedding model identifier
LLM_REASONING_EFFORT  Reasoning effort (e.g. low, medium, high)
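For example, the defaults can be set in the shell before starting your application. All values below are placeholders; substitute your own model names and credentials:

```shell
# Placeholder values -- replace with your own model and credentials.
export LLM_MODEL="gpt-4o"
export LLM_API_KEY="sk-..."
export LLM_BASE_URL="https://api.openai.com/v1"
export EMBEDDING_MODEL="text-embedding-3-small"
export LLM_REASONING_EFFORT="medium"
```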

You can also pass an LLMConfig directly:

from py_ai_toolkit import PyAIToolkit, LLMConfig

toolkit = PyAIToolkit(main_model_config=LLMConfig(model="gpt-4o", api_key="..."))

About Grafo

Grafo (see Recommended Docs below) is a library for building executable DAGs where each node contains a coroutine. Since the DAG abstraction fits AI-driven development particularly well, we provide the BaseWorkflow class with:

  • task — for LLM calling (structured or plain text)
  • create_task_tree — builds a task + validation subtree
  • build_task_node — wraps a task tree in a single Node for use in larger graphs

Examples

Simple text:

from py_ai_toolkit import PyAIToolkit

toolkit = PyAIToolkit()
template = "./prompt.md"
response = await toolkit.chat(template)
print(response.content)

Structured response:

from py_ai_toolkit import PyAIToolkit
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
template = "./prompt.md"  # PROMPT: {{ message }}
response = await toolkit.asend(response_model=Purchase, template=template, message="I want to buy 5 apples")
print(response.content.product)   # "apple"
print(response.content.quantity)  # 5
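The keyword arguments passed alongside the template are rendered into its placeholders. As a rough illustration only — this naive substitution assumes Jinja-style `{{ name }}` markers, as the `PROMPT: {{ message }}` comment suggests, and is not the toolkit's actual template engine:

```python
import re

def render(template: str, **kwargs) -> str:
    # Replace each {{ name }} marker with the matching keyword argument.
    # A deliberately naive stand-in for a real template engine.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(kwargs[m.group(1)]),
        template,
    )

prompt = render("PROMPT: {{ message }}", message="I want to buy 5 apples")
```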

Structured response with model type injection:

from py_ai_toolkit import PyAIToolkit
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
available_fruits = ["apple", "banana", "orange"]
FruitModel = toolkit.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])
response = await toolkit.asend(response_model=FruitModel, template="./prompt.md", message="I want to buy 5 apples")
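Here inject_types swaps the annotation of product so that only the listed fruits validate. The effect of such a Literal constraint can be sketched with the standard library alone (a simplified illustration, not the library's internals):

```python
from typing import Literal, get_args

available_fruits = ["apple", "banana", "orange"]
FruitLiteral = Literal[tuple(available_fruits)]

def check_product(value: str) -> str:
    # Mirrors what a Literal-constrained Pydantic field enforces:
    # any value outside the allowed set is rejected.
    if value not in get_args(FruitLiteral):
        raise ValueError(f"unknown product: {value!r}")
    return value
```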

Using run_task with validation:

from py_ai_toolkit import PyAIToolkit, LLMConfig
from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit(main_model_config=LLMConfig())

result = await toolkit.run_task(
    template="""
        You will extract a purchase from the following message:
        {{ message }}
    """.strip(),
    response_model=Purchase,
    kwargs=dict(message="I want to buy 5 apples."),
    config=SingleShotValidationConfig(
        issues=["The identified purchase matches the user's request."],
    ),
)

print(result.product)   # "apple"
print(result.quantity)  # 5

Custom workflow with validation:

from py_ai_toolkit import PyAIToolkit, BaseWorkflow, Node, TreeExecutor
from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig
from pydantic import BaseModel
from typing import Literal

class Purchase(BaseModel):
    product: str
    quantity: int

toolkit = PyAIToolkit()
available_fruits = ["apple", "banana", "orange"]
FruitModel = toolkit.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])

class PurchaseWorkflow(BaseWorkflow):
    async def run(self, message: str) -> Purchase:
        executor = await self.create_task_tree(
            template="./purchase.md",
            response_model=FruitModel,
            kwargs=dict(message=message),
            config=SingleShotValidationConfig(
                issues=["The identified purchase matches the user's request."],
            ),
        )
        results = await executor.run()
        return results[0].output

workflow = PurchaseWorkflow(ai_toolkit=toolkit, error_class=ValueError)
result = await workflow.run("I want to buy 5 apples")

Validation Modes

The run_task method supports three validation modes that control how the LLM output is checked:

SingleShotValidationConfig

  • Count: 1 validation attempt
  • Required Ahead: 1 (needs 1 more success than failure)
  • Max Retries: 3
  • Use Case: Simple validation for straightforward tasks where a single validation check is sufficient

from py_ai_toolkit.core.domain.schemas import SingleShotValidationConfig

config = SingleShotValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

ThresholdVotingValidationConfig

  • Count: 3 validation attempts (default)
  • Required Ahead: 1 (needs 1 more success than failure)
  • Use Case: Moderate-confidence validation where multiple checks provide better reliability

from py_ai_toolkit.core.domain.schemas import ThresholdVotingValidationConfig

config = ThresholdVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

KAheadVotingValidationConfig

  • Count: 5 validation attempts (default)
  • Required Ahead: 3 (needs 3 more successes than failures)
  • Use Case: High-stakes validation where you need strong consensus across multiple validation checks

from py_ai_toolkit.core.domain.schemas import KAheadVotingValidationConfig

config = KAheadVotingValidationConfig(
    issues=["The identified purchase matches the user's request."],
)

All validation configs accept an issues parameter — a list of validation criteria checked against the task output. Each issue is evaluated independently, and the validation passes only if all issues pass according to the configured mode.
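The count / required-ahead arithmetic above can be sketched as follows (a simplified illustration under the stated defaults, not the library's implementation):

```python
def passes(votes: list[bool], required_ahead: int) -> bool:
    # A run of validation votes passes once successes lead
    # failures by at least `required_ahead`.
    successes = sum(votes)
    failures = len(votes) - successes
    return successes - failures >= required_ahead

# SingleShotValidationConfig:      1 vote,  required_ahead=1
# ThresholdVotingValidationConfig: 3 votes, required_ahead=1
# KAheadVotingValidationConfig:    5 votes, required_ahead=3
```

So under KAheadVotingValidationConfig, four successes out of five (4 − 1 = 3 ahead) pass, while three out of five (3 − 2 = 1 ahead) do not.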

Recommended Docs
