
Project description

PromptCaller

PromptCaller is a Python package for calling prompts in a specific format, using LangChain and the OpenAI API. It enables users to load prompts from a template, render them with contextual data, and make structured requests to the OpenAI API.

Features

  • Load prompts from a .prompt file containing a YAML configuration and a message template.
  • Invoke prompts using LangChain and OpenAI API, with support for structured output.

Installation

To install the package, simply run:

pip install prompt-caller

You will also need a .env file containing your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key_here
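
The package presumably picks this key up from the environment (python-dotenv is the usual mechanism for loading .env files, though the source does not spell this out). As an illustration, here is a minimal stdlib sketch of what that loading step amounts to:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=value lines from a .env file into os.environ.

    Existing environment variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; keep only KEY=value pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

In practice, prefer python-dotenv itself (`load_dotenv()`), which also handles quoting and interpolation.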

Usage

  1. Define a prompt file:

Create a .prompt file in the prompts directory, e.g., prompts/sample.prompt:

---
model: gpt-4o-mini
temperature: 0.7
max_tokens: 512
output:
  result: "Final result of the expression"
  explanation: "Explanation of the calculation"
---
<system>
You are a helpful assistant.
</system>

<user>
How much is {{expression}}?
</user>

This .prompt file contains:

  • A YAML header for configuring the model and parameters.
  • A template body using Jinja2 to inject the context (like {{ expression }}).
  • Messages structured in a JSX-like format (<system>, <user>).
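
The Jinja2 rendering step can be illustrated with a simplified stand-in (real Jinja2 supports loops, filters, and much more than plain placeholder substitution):

```python
import re

def render(template: str, context: dict) -> str:
    """Substitute {{ name }} placeholders with values from context.

    A simplified stand-in for the Jinja2 rendering the package performs.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context[m.group(1)]),
        template,
    )

print(render("How much is {{expression}}?", {"expression": "3+8/9"}))
# → How much is 3+8/9?
```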
  2. Load and call a prompt:

from prompt_caller import PromptCaller

ai = PromptCaller()

response = ai.call("sample", {"expression": "3+8/9"})

print(response)

In this example:

  • The expression value 3+8/9 is injected into the user message.
  • The model will respond with both the result of the expression and an explanation, as specified in the output section of the prompt.
  3. Using the agent feature:

The agent method allows you to enhance the prompt's functionality by integrating external tools. Here's an example where we evaluate a mathematical expression with Python's eval in a restricted execution environment:

from prompt_caller import PromptCaller

ai = PromptCaller()

def evaluate_expression(expression: str):
    """
    Evaluate a math expression using eval.
    """
    # Removing builtins restricts eval, but does not make it a full sandbox.
    safe_globals = {"__builtins__": None}
    return eval(expression, safe_globals, {})

response = ai.agent(
    "sample-agent", {"expression": "3+8/9"}, tools=[evaluate_expression]
)

print(response)

In this example:

  • The agent method is used to process the prompt while integrating external tools.
  • The evaluate_expression function evaluates the mathematical expression in a restricted environment (note that eval is difficult to sandbox fully, so avoid passing it untrusted input).
  • The response includes the processed result based on the prompt and tool execution.
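
Stripping __builtins__ narrows what eval can reach, but eval is notoriously hard to lock down completely. If you only need arithmetic, a stricter alternative (a hypothetical helper, not part of the package) is an AST walker that whitelists numeric operators:

```python
import ast
import operator

# Operators permitted in arithmetic-only evaluation.
_BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}
_UNARY_OPS = {ast.USub: operator.neg, ast.UAdd: operator.pos}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression by walking its AST,
    rejecting any node that is not a number or a whitelisted operator."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _UNARY_OPS:
            return _UNARY_OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expression, mode="eval"))
```

Function calls, attribute access, and names all fall through to the ValueError, so expressions like `__import__('os')` are rejected outright.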

How It Works

  1. _loadPrompt: Loads the prompt file, splits the YAML header from the body, and parses them.
  2. _renderTemplate: Uses the Jinja2 template engine to render the body with the provided context.
  3. _parseJSXBody: Parses the message body written in JSX-like tags to extract system and user messages.
  4. call: Invokes the OpenAI API with the parsed configuration and messages, and handles structured output via dynamic Pydantic models.
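
Step 3 can be sketched with a regular expression (an illustrative reconstruction; the package's actual parser may differ in detail):

```python
import re

def parse_jsx_body(body: str) -> list:
    """Extract <system>/<user>/<assistant> blocks from the rendered body
    into role/content message dicts, as in the _parseJSXBody step."""
    pattern = re.compile(r"<(system|user|assistant)>\s*(.*?)\s*</\1>", re.DOTALL)
    return [
        {"role": role, "content": content}
        for role, content in pattern.findall(body)
    ]

body = "<system>\nYou are a helpful assistant.\n</system>\n\n<user>\nHow much is 3+8/9?\n</user>"
print(parse_jsx_body(body))
```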

Build and Upload

To build the distribution and upload it to a package repository like PyPI, follow these steps:

  1. Build the distribution:

    Run the following command to create both source (sdist) and wheel (bdist_wheel) distributions:

    python setup.py sdist bdist_wheel

    Note that invoking setup.py directly is deprecated; python -m build (from the build package) produces the same sdist and wheel.
    

    This will generate the distribution files in the dist/ directory.

  2. Upload to PyPI using Twine:

    Use twine to securely upload the distribution to PyPI:

    twine upload dist/*
    

    Ensure you have configured your PyPI credentials before running this command. You can find more information on configuring credentials in the Twine documentation.

License

This project is licensed under the Apache License 2.0. You may use, modify, and distribute this software as long as you provide proper attribution and include the full text of the license in any distributed copies or derivative works.

Download files

Download the file for your platform.

Source Distribution

prompt_caller-0.2.4.tar.gz (10.4 kB)


Built Distribution


prompt_caller-0.2.4-py3-none-any.whl (10.9 kB)


File details

Details for the file prompt_caller-0.2.4.tar.gz.

File metadata

  • Download URL: prompt_caller-0.2.4.tar.gz
  • Upload date:
  • Size: 10.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for prompt_caller-0.2.4.tar.gz:

  • SHA256: ba1a0ee6134527b50aca760be73aa3a4be069ed98e754d8b942273b569d7ea42
  • MD5: 712187a4f325c5b92b91a015b072a70b
  • BLAKE2b-256: 70670285d24bcef6d07dee044744b7dad75cea46a82830f947c2a67c234bd4b5


File details

Details for the file prompt_caller-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: prompt_caller-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 10.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for prompt_caller-0.2.4-py3-none-any.whl:

  • SHA256: bba4cb11c1a7d28627d139d4cdd8fcbb78d5bc87b764d9aea09533925e6638e3
  • MD5: 1fae74f855cca0c8a2922705c3666ac4
  • BLAKE2b-256: ec3e464cfb6aaa60c8444efe9f156fa465793dd4e0737cc6c9d76c7a24f71405

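To check a downloaded file against the published digests above, compute its hash locally; for example:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the published digest, e.g.:
# sha256_of("prompt_caller-0.2.4.tar.gz") == "ba1a0ee6134527b50aca760be73aa3a4be069ed98e754d8b942273b569d7ea42"
```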
