
ez-ptc

Easy Programmatic Tool Calling — a lightweight, zero-dependency, framework-agnostic library for multi-tool execution with any LLM.

Python 3.11+ · MIT license

The problem

Traditional tool calling requires one round-trip per tool call. If the LLM needs to call 3 tools and branch on results, that's 3+ back-and-forth exchanges.

The solution

ez-ptc exposes a single meta-tool that accepts Python code. The LLM writes code that calls multiple tools, uses variables, loops, and conditionals — and ez-ptc executes it in a sandboxed environment. Multiple tool calls, branching logic, and result processing happen in one round-trip.
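Conceptually, the whole exchange collapses into one tool call whose argument is source code. A minimal sketch of what that call might look like on the wire (the meta-tool name run_python and the exact payload shape are assumptions for illustration, not ez-ptc's actual schema):

meta_tool_call = {
    "name": "run_python",  # assumed name; ez-ptc defines the actual meta-tool
    "arguments": {"code": "print(get_weather('Paris, FR'))"},
}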

Installation

# Using uv (recommended)
uv add ez-ptc

# Using pip
pip install ez-ptc

Zero runtime dependencies. Bring your own LLM client.

Quick start

1. Define tools

from typing import TypedDict
from ez_ptc import Toolkit, ez_tool

class WeatherResult(TypedDict):
    location: str
    temp: int
    unit: str
    condition: str

@ez_tool
def get_weather(location: str, unit: str = "celsius") -> WeatherResult:
    """Get current weather for a location.

    Args:
        location: City and state, e.g. "San Francisco, CA"
        unit: Temperature unit - "celsius" or "fahrenheit"
    """
    # Your actual API call here
    return {"location": location, "temp": 22, "unit": unit, "condition": "sunny"}

@ez_tool
def search_products(query: str, limit: int = 5) -> list[dict]:
    """Search the product catalog.

    Args:
        query: Search query string
        limit: Maximum number of results
    """
    return [{"name": "Umbrella", "price": 24.99}]

toolkit = Toolkit([get_weather, search_products])
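Decorated tools stay plain Python callables (assuming @ez_tool preserves the original call signature, as is typical for such decorators), so you can sanity-check them locally before involving a model:

# Quick local checks, no LLM involved
print(get_weather("Austin, TX"))
# {'location': 'Austin, TX', 'temp': 22, 'unit': 'celsius', 'condition': 'sunny'}
print(search_products("umbrella", limit=1))
# [{'name': 'Umbrella', 'price': 24.99}]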

2. Choose your mode

Prompt mode — framework-free, inject into any system prompt:

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": toolkit.prompt()},
        {"role": "user", "content": "What's the weather in NYC and SF?"},
    ],
)

code = toolkit.extract_code(response.choices[0].message.content)
result = toolkit.execute(code)
print(result.output)
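To let the model phrase a final answer, feed the execution output back in a second call. A sketch continuing the example above (standard chat-completions message shapes; only result.output comes from ez-ptc):

followup = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": toolkit.prompt()},
        {"role": "user", "content": "What's the weather in NYC and SF?"},
        {"role": "assistant", "content": response.choices[0].message.content},
        {"role": "user", "content": f"Execution output:\n{result.output}"},
    ],
)
print(followup.choices[0].message.content)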

Tool mode — native integration with any framework:

from openai import OpenAI
import json

client = OpenAI()
execute_fn = toolkit.as_tool()
tool_schema = toolkit.tool_schema(format="openai")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in NYC and SF?"},
]

for turn in range(10):  # cap the agent loop at 10 turns
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=messages,
        tools=[tool_schema],
    )
    choice = response.choices[0]
    if choice.message.tool_calls:
        messages.append(choice.message)  # echo the assistant turn back into history
        for tc in choice.message.tool_calls:
            args = json.loads(tc.function.arguments)
            result = execute_fn(**args)
            # tool message content must be a string for the chat API
            messages.append({"role": "tool", "tool_call_id": tc.id, "content": str(result)})
    else:
        print(choice.message.content)
        break

What the LLM writes

Instead of separate tool calls, the LLM writes a single code block:

import asyncio

async def main():
    sf, ny = await asyncio.gather(
        asyncio.to_thread(get_weather, "San Francisco, CA"),
        asyncio.to_thread(get_weather, "New York, NY"),
    )
    print(f"SF: {sf['temp']}°C, {sf['condition']}")
    print(f"NY: {ny['temp']}°C, {ny['condition']}")

asyncio.run(main())

Multiple tool calls, parallel execution, variable passing — one round-trip.
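Branching works the same way: the model can inspect one tool's result before deciding whether to call another, still within a single exchange. For instance, the LLM might emit (illustrative code using the tools defined above):

nyc = get_weather("New York, NY")
if nyc["condition"] == "rainy":
    # only hit the catalog when it is actually raining
    for product in search_products("umbrella", limit=3):
        print(f"{product['name']}: ${product['price']}")
else:
    print(f"No umbrella needed, it's {nyc['condition']} in NYC")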

Framework support

ez-ptc works with any LLM provider or framework:

Framework                     Mode            Example
Raw API (OpenAI, Anthropic)   Prompt or Tool  prompt mode, openai, anthropic
LangChain                     Tool            example
Pydantic AI                   Tool            example
LiteLLM                       Tool            example
Google GenAI                  Tool            example

Key features

  • Zero dependencies — pure Python, bring your own LLM client
  • Two modes — prompt mode (framework-free) or tool mode (native integration)
  • Sandboxed execution — restricted builtins, no file I/O, no networking, configurable timeout
  • Tool chaining — assist_tool_chaining=True documents return types so the LLM chains outputs correctly (see the sketch after this list)
  • Async support — asyncio is pre-imported, so LLMs can use asyncio.gather for parallel execution
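The sandbox timeout and tool chaining are opt-in. A minimal configuration sketch (where each knob lives, constructor vs. execute(), is an assumption; only the name assist_tool_chaining and the existence of a configurable timeout come from the feature list above):

toolkit = Toolkit(
    [get_weather, search_products],
    assist_tool_chaining=True,  # include return-type docs so outputs chain correctly
)
result = toolkit.execute(code, timeout=10)  # assumed keyword: execution timeout in seconds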

Documentation

Full documentation is available in the docs/ directory of the repository.

License

MIT
