progtc
by capsa
Programmatic Tool Calling — Let LLM-generated code call your tools, even from inside a sandbox.
The Problem
You want an AI agent to write and execute Python code. Easy enough—spin up an E2B sandbox and let it run. But what if that code needs to call your tools?
The code runs inside a sandbox. Your tools live outside. There's no bridge.
The Solution
progtc creates that bridge. It runs a lightweight server inside your sandbox that exposes your tools to the generated code. When the code calls a tool, the request streams back to your client, you execute it locally, and return the result—all transparently.
Installation

```shell
pip install progtc
```

Or with uv:

```shell
uv add progtc
```
Quick Start
1. Start the Server (inside your sandbox)
```shell
progtc serve --host 0.0.0.0 --port 8000 --api-key your-secret-key
```
2. Execute Code from Your Client
```python
from progtc import AsyncProgtcClient

client = AsyncProgtcClient(
    base_url="https://your-sandbox-url:8000",
    api_key="your-secret-key",
)

# Define your tools as async functions
async def get_weather(city: str, country: str) -> str:
    # Your actual implementation
    return f"Weather in {city}, {country}: Sunny, 22°C"

async def search_database(query: str) -> list[dict]:
    # Your actual implementation
    return [{"id": 1, "name": "Result"}]

# Execute LLM-generated code that uses your tools
code = """
from tools import get_weather

weather = await get_weather("London", "UK")
print(f"The weather is: {weather}")
"""

result = await client.execute_code(
    code=code,
    tool_call_handlers={
        "get_weather": get_weather,
        "search_database": search_database,
    },
)

print(result.stdout)  # "The weather is: Weather in London, UK: Sunny, 22°C"
```
How It Works
- Your client sends the code plus a list of available tool names to the progtc server
- The server executes the code in an isolated process, injecting a `tools` module
- When the code calls a tool, the server streams the call back to your client via SSE
- Your client executes the tool locally and sends the result back
- The server resumes code execution with the result
- Stdout/stderr are captured and streamed back when execution completes
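The client-side half of this round trip boils down to a dictionary dispatch over your registered handlers. A minimal sketch, assuming a hypothetical JSON event shape (`id`, `tool`, `args`) for one streamed tool call; the real progtc wire format may differ:

```python
import asyncio
import json

# Hypothetical handler table, mirroring tool_call_handlers from the Quick Start.
async def get_weather(city: str, country: str) -> str:
    return f"Weather in {city}, {country}: Sunny, 22°C"

HANDLERS = {"get_weather": get_weather}

async def handle_tool_call(event_json: str) -> str:
    # Decode one streamed event, run the matching local handler,
    # and serialize the result to send back to the server.
    event = json.loads(event_json)
    handler = HANDLERS[event["tool"]]
    result = await handler(*event["args"])
    return json.dumps({"id": event["id"], "result": result})

reply = asyncio.run(
    handle_tool_call('{"id": 1, "tool": "get_weather", "args": ["London", "UK"]}')
)
print(reply)
```

Because handlers are plain async functions, anything you can await locally (database queries, HTTP calls, secrets access) can be exposed to sandboxed code without ever entering the sandbox.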
Code Requirements
The LLM-generated code must:
- Import tools from the `tools` module: `from tools import my_tool`
- Await all tool calls (they're async)
- Use `print()` for output; stdout/stderr are captured and returned
```python
from tools import get_weather, search_database
import asyncio

# Call tools like regular async functions
weather, results = await asyncio.gather(
    get_weather("Tokyo", "Japan"),
    search_database("hotels"),
)

print(f"Weather: {weather}")
print(f"Results: {results}")
```
Note: The code runs in a top-level async context, so you can use `await` directly without defining an async function.
CLI Options
```shell
progtc serve [OPTIONS]
```

| Option | Default | Description |
|---|---|---|
| `--host` | `127.0.0.1` | Host to bind to |
| `--port` | `8000` | Port to bind to |
| `--api-key` | (env: `PROGTC_API_KEY`) | API key for authentication |
| `--tool-call-timeout` | `10.0` | Timeout for individual tool calls (seconds) |
| `--code-execution-timeout` | `30.0` | Total timeout for code execution (seconds) |
Error Handling
The client returns a discriminated union—either success or one of several error types:
```python
from progtc.types import MessageType

result = await client.execute_code(code, tool_call_handlers)

match result.type:
    case MessageType.SUCCESS:
        print(f"Stdout: {result.stdout}")
        print(f"Stderr: {result.stderr}")
    case MessageType.ERROR:
        print(f"Error: {result.message}")
        print(f"Code: {result.code}")  # compilation, runtime, timeout, etc.
```
Error codes:
- `code_compilation_error`: code failed to compile/exec
- `code_runtime_error`: exception raised during execution
- `code_timeout_error`: execution exceeded timeout
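A `code_timeout_error` means execution exceeded the configured bound. One standard way such a bound can be enforced in async Python is `asyncio.wait_for`; a sketch in that spirit (illustrative only, not progtc's actual implementation):

```python
import asyncio

async def slow_tool() -> str:
    await asyncio.sleep(10)  # a handler that takes far too long
    return "done"

async def call_with_timeout(coro, timeout: float):
    # Cancel the call if it exceeds the bound, analogous to --tool-call-timeout.
    try:
        return await asyncio.wait_for(coro, timeout)
    except asyncio.TimeoutError:
        return None

print(asyncio.run(call_with_timeout(slow_tool(), 0.01)))  # prints: None
```

`asyncio.wait_for` cancels the awaited task on expiry, so a hung tool call can't hold the whole execution hostage.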
Example: E2B + pydantic-ai
See examples/e2b-example/ for a complete example using progtc with E2B sandboxes and pydantic-ai agents.
The example demonstrates an AI agent that can execute Python code in a secure sandbox while calling tools defined in your application.
License
MIT
Building AI agents? We're hiring: capsa.ai/careers