PromptCaller
Centralize .prompt files, produce structured outputs, and enable agents to author prompts via SKILLS.
PromptCaller centralizes .prompt files in one folder, turns prompt responses into structured outputs, and includes SKILLS so agents can create and maintain prompt files consistently.
Features
- Load prompts from a .prompt file containing a YAML configuration and a message template.
- Invoke prompts using LangChain and the OpenAI API, with support for structured output.
Installation
To install the package, simply run:
pip install prompt-caller
You will also need a .env file containing your OpenAI API key:
OPENAI_API_KEY=your_openai_api_key_here
CLI Skill Installation
PromptCaller ships with a CLI command to install a PromptCaller skill pack into .agents/skills.
prompt-caller install
Alternative module invocation:
python -m prompt_caller install
By default, this installs to .agents/skills/prompt-caller and overwrites existing files.
Usage
- Define a prompt file:
Create a .prompt file in the prompts directory, e.g., prompts/sample.prompt:
---
model: gpt-5.2
reasoning_effort: medium
output:
  result: "Final result of the expression"
  explanation: "Explanation of the calculation"
---
<system>
You are a helpful assistant.
</system>
<user>
How much is {{expression}}?
</user>
This .prompt file contains:
- A YAML-like header for configuring the model and parameters.
- A template body using Jinja2 to inject the context (like {{ expression }}).
- Messages structured in a JSX-like format (<system>, <user>).
- Load and call a prompt:
from prompt_caller import PromptCaller
ai = PromptCaller()
response = ai.call("sample", {"expression": "3+8/9"})
print(response)
In this example:
- The expression value 3+8/9 is injected into the user message.
- The model responds with both the result of the expression and an explanation, as specified in the output section of the prompt.
Advanced Prompt Example (sample-5.2-complete.prompt)
Use this when you want strongly typed, multi-field structured output with reusable types:
---
model: gpt-5.2
reasoning_effort: high
output:
  result: "number | Final result of the expression"
  explanation: "string | Explanation of the calculation"
  steps: "list[Step] | Ordered calculation steps"
  confidence: "enum[low|medium|high] | Confidence level for the computed answer"
types:
  Step:
    expression: "string | Expression evaluated in this step"
    value: "number | Numeric result of this step"
---
<system>
You are a helpful assistant and you have access to tools.
Use tools when needed.
Return all requested structured fields.
</system>
<user>
How much is {{expression}}?
</user>
Example call:
from prompt_caller import PromptCaller
ai = PromptCaller()
response = ai.call("sample-5.2-complete", {"expression": "(3 + 8) / 9"})
print(response)
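For intuition, the typed output section above maps naturally onto ordinary Python type declarations. The sketch below is purely illustrative (PromptCaller builds its output models dynamically; these hand-written class names and the example values are assumptions, not library API):

```python
from dataclasses import dataclass
from typing import List, Literal

# Mirrors the reusable "Step" type declared under `types:` in the prompt header.
@dataclass
class Step:
    expression: str  # "string | Expression evaluated in this step"
    value: float     # "number | Numeric result of this step"

# Mirrors the top-level `output:` section of the prompt header.
@dataclass
class Output:
    result: float                                 # number
    explanation: str                              # string
    steps: List[Step]                             # list[Step]
    confidence: Literal["low", "medium", "high"]  # enum[low|medium|high]

# A plausible structured response for "(3 + 8) / 9" (values for illustration).
out = Output(
    result=11 / 9,
    explanation="(3 + 8) / 9 = 11 / 9",
    steps=[Step(expression="3 + 8", value=11.0), Step(expression="11 / 9", value=11 / 9)],
    confidence="high",
)
print(out.result, out.confidence)
```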
- Using the agent feature:
The agent method allows you to enhance the prompt's functionality by integrating external tools. Here's an example where we evaluate a mathematical expression using Python's eval in a restricted execution environment:
from prompt_caller import PromptCaller
ai = PromptCaller()
def evaluate_expression(expression: str):
    """
    Evaluate a math expression using eval.
    """
    safe_globals = {"__builtins__": None}
    return eval(expression, safe_globals, {})

response = ai.agent(
    "sample-agent", {"expression": "3+8/9"}, tools=[evaluate_expression]
)
print(response)
In this example:
- The agent method processes the prompt while integrating external tools.
- The evaluate_expression function evaluates the mathematical expression in a restricted environment.
- The response includes the processed result based on the prompt and tool execution.
How It Works
- _loadPrompt: Loads the prompt file, splits the YAML header from the body, and parses them.
- _renderTemplate: Uses the Jinja2 template engine to render the body with the provided context.
- _parseJSXBody: Parses the message body written in JSX-like tags to extract system and user messages.
- call: Invokes the OpenAI API with the parsed configuration and messages, and handles structured output via dynamic Pydantic models.
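The loading and parsing steps above can be sketched with the standard library. This is a simplified illustration of the mechanics, not PromptCaller's actual implementation (the helper names and the regex-based rendering are assumptions; the library uses Jinja2 for templating):

```python
import re

SAMPLE = """---
model: gpt-5.2
reasoning_effort: medium
---
<system>
You are a helpful assistant.
</system>
<user>
How much is {{expression}}?
</user>
"""

def split_prompt(text):
    """Split the YAML header from the message body (what _loadPrompt does)."""
    _, header, body = text.split("---", 2)
    return header.strip(), body.strip()

def render(body, context):
    """Substitute {{name}} placeholders (the library delegates this to Jinja2)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(context[m.group(1)]), body)

def parse_messages(body):
    """Extract (role, content) pairs from the JSX-like tags (what _parseJSXBody does)."""
    return [(role, content.strip())
            for role, content in re.findall(r"<(\w+)>(.*?)</\1>", body, re.DOTALL)]

header, body = split_prompt(SAMPLE)
messages = parse_messages(render(body, {"expression": "3+8/9"}))
print(messages)
```

The resulting role/content pairs are what call would hand to the OpenAI API alongside the parsed header configuration.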
Build and Upload
To build the distribution and upload it to a package repository like PyPI, follow these steps:
- Build the distribution:
  Run the following command to create both source (sdist) and wheel (bdist_wheel) distributions:
  python setup.py sdist bdist_wheel
  This will generate the distribution files in the dist/ directory.
- Upload to PyPI using Twine:
  Use twine to securely upload the distribution to PyPI:
  twine upload dist/*
  Ensure you have configured your PyPI credentials before running this command. You can find more information on configuring credentials in the Twine documentation.
Tests
pytest --cov=prompt_caller ; coverage report --sort=miss
Output Schema DSL
The output field supports both legacy and typed schema definitions.
Legacy format (defaults to string):
output:
  answer: "Final answer to return"
Compact DSL format (recommended):
output:
  title: "string | Final title"
  confidence: "enum[low|medium|high] | Confidence level"
  steps: "list[Step] | Ordered calculation steps"
  note?: "string | Optional extra note"
Supported type expressions:
- string
- number
- integer
- boolean
- list[T]
- enum[a|b|c]
- named type references declared under top-level types
Named reusable types:
types:
  Step:
    expression: "string | Expression evaluated in this step"
    value: "number | Numeric result of this step"
Rules:
- Optional fields are declared with a ? suffix (for example note?).
- call() uses the prompt's output when present.
- agent() uses the prompt's output only when output= is not explicitly passed.
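The compact "type | description" entries are straightforward to parse. Here is a minimal sketch under the rules above (the function name and the exact parsing logic inside PromptCaller are assumptions):

```python
def parse_field(name, spec):
    """Parse one DSL entry like 'enum[low|medium|high] | Confidence level'."""
    # A trailing "?" on the field name marks the field as optional.
    optional = name.endswith("?")
    if optional:
        name = name[:-1]
    # Entries without a " | " separator are legacy descriptions, typed as string.
    if " | " in spec:
        type_expr, description = (part.strip() for part in spec.split(" | ", 1))
    else:
        type_expr, description = "string", spec.strip()
    return {"name": name, "type": type_expr,
            "description": description, "optional": optional}

print(parse_field("note?", "string | Optional extra note"))
print(parse_field("answer", "Final answer to return"))
```

Splitting on " | " (with surrounding spaces) keeps enum expressions such as enum[low|medium|high] intact, since their internal pipes carry no spaces.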
License
This project is licensed under the Apache License 2.0. You may use, modify, and distribute this software as long as you provide proper attribution and include the full text of the license in any distributed copies or derivative works.