Centralize .prompt files, produce structured outputs, and enable agents to author prompts via SKILLS.

PromptCaller

PromptCaller centralizes .prompt files in one folder, turns prompt responses into structured outputs, and includes SKILLS so agents can create and maintain prompt files consistently.

Features

  • Load prompts from a .prompt file containing a YAML configuration and a message template.
  • Invoke prompts through LangChain and the OpenAI API, with support for structured output.

Installation

To install the package, simply run:

pip install prompt-caller

You will also need a .env file containing your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key_here

CLI Skill Installation

PromptCaller ships with a CLI command to install a PromptCaller skill pack into .agents/skills.

prompt-caller install

Alternative module invocation:

python -m prompt_caller install

By default, this installs to .agents/skills/prompt-caller and overwrites existing files.

Usage

  1. Define a prompt file:

Create a .prompt file in the prompts directory, e.g., prompts/sample.prompt:

---
model: gpt-5.2
reasoning_effort: medium
output:
  result: "Final result of the expression"
  explanation: "Explanation of the calculation"
---
<system>
You are a helpful assistant.
</system>

<user>
How much is {{expression}}?
</user>

This .prompt file contains:

  • A YAML-like header for configuring the model and parameters.
  • A template body using Jinja2 to inject the context (like {{ expression }}).
  • Messages structured in a JSX-like format (<system>, <user>).
  2. Load and call a prompt:
from prompt_caller import PromptCaller

ai = PromptCaller()

response = ai.call("sample", {"expression": "3+8/9"})

print(response)

In this example:

  • The expression value 3+8/9 is injected into the user message.
  • The model will respond with both the result of the expression and an explanation, as specified in the output section of the prompt.
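
Assuming the structured response exposes the declared output keys as attributes (the exact return type is not shown here, but structured output is built from dynamic Pydantic models, see How It Works), the individual fields can be read like this:

# Hypothetical attribute access: names mirror the `output` keys in sample.prompt.
print(response.result)       # final result of the expression
print(response.explanation)  # explanation of the calculation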

Advanced Prompt Example (sample-5.2-complete.prompt)

Use this when you want strongly typed, multi-field structured output with reusable types:

---
model: gpt-5.2
reasoning_effort: high
output:
  result: "number | Final result of the expression"
  explanation: "string | Explanation of the calculation"
  steps: "list[Step] | Ordered calculation steps"
  confidence: "enum[low|medium|high] | Confidence level for the computed answer."

types:
  Step:
    expression: "string | Expression evaluated in this step"
    value: "number | Numeric result of this step"
---
<system>
  You are a helpful assistant and you have access to tools.
  Use tools when needed.
  Return all requested structured fields.
</system>

<user>
  How much is {{expression}}?
</user>

Example call:

from prompt_caller import PromptCaller

ai = PromptCaller()

response = ai.call("sample-5.2-complete", {"expression": "(3 + 8) / 9"})
print(response)
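
If the structured response is again a dynamic Pydantic model, the typed fields could be consumed as below; the attribute names mirror the output keys and are an assumption, not documented behaviour:

# Assumed attribute access: names mirror the `output` and `types` declarations.
print(response.result, response.confidence)
for step in response.steps:  # each entry follows the reusable Step type
    print(f"{step.expression} = {step.value}")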
  3. Use the agent feature:

The agent method allows you to enhance the prompt's functionality by integrating external tools. Here's an example where we evaluate a mathematical expression using Python's eval in a restricted execution environment:

from prompt_caller import PromptCaller

ai = PromptCaller()

def evaluate_expression(expression: str):
    """
    Evaluate a math expression using eval.
    """
    safe_globals = {"__builtins__": None}
    return eval(expression, safe_globals, {})

response = ai.agent(
    "sample-agent", {"expression": "3+8/9"}, tools=[evaluate_expression]
)

print(response)

In this example:

  • The agent method is used to process the prompt while integrating external tools.
  • The evaluate_expression function evaluates the mathematical expression in a restricted environment (builtins are disabled).
  • The response includes the processed result based on the prompt and tool execution.
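
The call above loads a prompt named sample-agent that is not reproduced in this README. A plausible definition, following the same file format as the earlier examples, might look like this (the file shipped with the project may differ):

---
model: gpt-5.2
reasoning_effort: medium
output:
  result: "number | Final result of the expression"
  explanation: "string | How the result was computed"
---
<system>
You are a helpful assistant with access to tools. Use them when needed.
</system>

<user>
How much is {{expression}}?
</user>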

How It Works

  1. _loadPrompt: Loads the prompt file, splits the YAML header from the body, and parses them.
  2. _renderTemplate: Uses the Jinja2 template engine to render the body with the provided context.
  3. _parseJSXBody: Parses the message body written in JSX-like tags to extract system and user messages.
  4. call: Invokes the OpenAI API with the parsed configuration and messages, and handles structured output via dynamic Pydantic models.
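
As an illustration of these steps (not the library's actual source; the helper below is hypothetical), a prompt file can be processed roughly like this:

import re
import yaml
from jinja2 import Template

def load_and_render(prompt_text: str, context: dict):
    """Sketch of the pipeline: split the YAML header from the body,
    render the body with Jinja2, then extract the JSX-like messages."""
    _, header, body = prompt_text.split("---", 2)    # front matter vs. body
    config = yaml.safe_load(header)                  # model, output, types, ...
    rendered = Template(body).render(**context)      # inject {{expression}} etc.
    messages = [
        (role, content.strip())
        for role, content in re.findall(r"<(system|user)>(.*?)</\1>", rendered, re.S)
    ]
    return config, messages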

Build and Upload

To build the distribution and upload it to a package repository like PyPI, follow these steps:

  1. Build the distribution:

    Run the following command to create both source (sdist) and wheel (bdist_wheel) distributions:

    python setup.py sdist bdist_wheel
    

    This will generate the distribution files in the dist/ directory.

  2. Upload to PyPI using Twine:

    Use twine to securely upload the distribution to PyPI:

    twine upload dist/*
    

    Ensure you have configured your PyPI credentials before running this command. You can find more information on configuring credentials in the Twine documentation.

Tests

pytest --cov=prompt_caller ; coverage report --sort=miss

Output Schema DSL

The output field supports both legacy and typed schema definitions.

Legacy format (defaults to string):

output:
  answer: "Final answer to return"

Compact DSL format (recommended):

output:
  title: "string | Final title"
  confidence: "enum[low|medium|high] | Confidence level"
  steps: "list[Step] | Ordered calculation steps"
  note?: "string | Optional extra note"

Supported type expressions:

  • string
  • number
  • integer
  • boolean
  • list[T]
  • enum[a|b|c]
  • named type references declared under top-level types

Named reusable types:

types:
  Step:
    expression: "string | Expression evaluated in this step"
    value: "number | Numeric result of this step"

Rules:

  • Optional fields are declared with ? suffix (for example note?).
  • call() uses prompt output when present.
  • agent() uses prompt output only when output= is not explicitly passed.
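
As a rough illustration of these rules (not the package's actual parser), a single DSL entry could be mapped to a Pydantic field definition like this; fields produced this way could then be passed to pydantic.create_model to build the dynamic model mentioned under How It Works:

from typing import Optional
from pydantic import Field

# Illustrative only: scalar types; enum[...], list[...] and named types
# declared under `types` would need extra handling.
_SCALARS = {"string": str, "number": float, "integer": int, "boolean": bool}

def parse_dsl_entry(name: str, spec: str):
    """E.g. parse_dsl_entry("note?", "string | Optional extra note")."""
    type_expr, _, description = (part.strip() for part in spec.rpartition("|"))
    py_type = _SCALARS.get(type_expr, str)   # legacy entries fall back to string
    if name.endswith("?"):                   # `?` suffix marks an optional field
        return name.rstrip("?"), (Optional[py_type], Field(None, description=description))
    return name, (py_type, Field(..., description=description))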

License

This project is licensed under the Apache License 2.0. You may use, modify, and distribute this software as long as you provide proper attribution and include the full text of the license in any distributed copies or derivative works.
