
PromptChain

A mini Python library for creating and executing chains of prompts using OpenAI's API, with streaming support and output template formatting.

Features

  • Sequential prompt chain execution
  • Streaming responses
  • Template-based output formatting
  • System prompt support
  • Placeholder replacement between prompts
  • Multiple output formats (JSON, Markdown, CSV, Text)
  • Async/await support

Installation

For Users

pip install tasks_prompts_chain

Or, when working from a clone of the repository:

pip install -r requirements/requirements.txt

For Developers

pip install -r requirements/requirements.txt
pip install -r requirements/requirements-dev.txt

Quick Start

import asyncio

from tasks_prompts_chain import TasksPromptsChain

async def main():
    # Initialize the chain
    chain = TasksPromptsChain(
        model="gpt-3.5-turbo",
        api_key="your-api-key",
        final_result_placeholder="design_result"
    )

    # Define your prompts
    prompts = [
        {
            "prompt": "Create a design concept for a luxury chocolate bar",
            "output_format": "TEXT",
            "output_placeholder": "design_concept"
        },
        {
            "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
            "output_format": "JSON",
            "output_placeholder": "color_palette"
        }
    ]

    # Stream the responses
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="", flush=True)

    # Get specific results
    design = chain.get_result("design_concept")
    colors = chain.get_result("color_palette")

asyncio.run(main())
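The `{{design_concept}}` substitution in the second prompt can be pictured in isolation. The sketch below is illustrative only, not the library's internal implementation; `fill_placeholders` and the sample result text are hypothetical:

```python
import re

# Results produced by earlier prompts in the chain (sample value for illustration).
results = {"design_concept": "a gold-foiled 70% dark chocolate bar"}

def fill_placeholders(prompt: str, results: dict) -> str:
    """Replace each {{name}} token with the stored result for that name."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: results[m.group(1)], prompt)

next_prompt = fill_placeholders(
    "Based on this concept: {{design_concept}}, suggest a color palette",
    results,
)
print(next_prompt)
# Based on this concept: a gold-foiled 70% dark chocolate bar, suggest a color palette
```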

Advanced Usage

Using Templates

# Set output template before execution
chain.template_output("""
<result>
    <design>
    ### Design Concept:
    {{design_concept}}
    </design>
    
    <colors>
    ### Color Palette:
    {{color_palette}}
    </colors>
</result>
""")

Using System Prompts

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    system_prompt="You are a professional design expert specialized in luxury products",
    system_apply_to_all_prompts=True
)

Custom API Endpoint

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    base_url="https://your-custom-endpoint.com/v1"
)

API Reference

TasksPromptsChain Class

Constructor Parameters

  • model (str): The model identifier (e.g., 'gpt-3.5-turbo')
  • api_key (str): Your OpenAI API key
  • final_result_placeholder (str): Name for the final result placeholder
  • system_prompt (Optional[str]): System prompt for context
  • system_apply_to_all_prompts (Optional[bool]): If True, the system prompt is applied to every prompt in the chain
  • base_url (Optional[str]): Custom API endpoint URL

Methods

  • execute_chain(prompts: List[Dict], temperature: float = 0.7) -> AsyncGenerator[str, None]

    • Executes the prompt chain and streams responses as they are generated
  • template_output(template: str) -> None

    • Sets the output template format; call it before execute_chain
  • get_result(placeholder: str) -> Optional[str]

    • Retrieves a specific result by its placeholder name

Prompt Format

Each prompt in the chain can be defined as a dictionary:

{
    "prompt": str,           # The actual prompt text
    "output_format": str,    # "JSON", "MARKDOWN", "CSV", or "TEXT"
    "output_placeholder": str # Identifier for accessing this result
}
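Before running a chain, the prompt dictionaries can be sanity-checked against this shape. The helper below is hypothetical (not part of the library) and only mirrors the format documented above:

```python
# Output formats documented for the library.
VALID_FORMATS = {"JSON", "MARKDOWN", "CSV", "TEXT"}

def validate_prompt(p: dict) -> None:
    """Raise ValueError if a prompt dict is missing a key or uses
    an unsupported output format (hypothetical pre-flight check)."""
    for key in ("prompt", "output_format", "output_placeholder"):
        if key not in p:
            raise ValueError(f"missing key: {key}")
    if p["output_format"] not in VALID_FORMATS:
        raise ValueError(f"unsupported output_format: {p['output_format']}")

# A well-formed prompt passes silently.
validate_prompt({
    "prompt": "Suggest a product name",
    "output_format": "TEXT",
    "output_placeholder": "name",
})
```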

Error Handling

The library includes comprehensive error handling:

  • Template validation
  • API error handling
  • Placeholder validation

Errors are raised with descriptive messages indicating the specific issue and prompt number where the error occurred.
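The kind of placeholder validation described above can be sketched standalone. This hypothetical check (not the library's actual code) fails fast when a prompt references a placeholder no earlier prompt has produced, reporting the prompt number:

```python
import re

def check_placeholder_order(prompts: list) -> None:
    """Raise a descriptive error if a prompt references a placeholder
    that no earlier prompt in the chain defines (illustrative sketch)."""
    defined = set()
    for i, p in enumerate(prompts, start=1):
        for name in re.findall(r"\{\{(\w+)\}\}", p["prompt"]):
            if name not in defined:
                raise ValueError(
                    f"Prompt {i}: placeholder '{name}' is not defined "
                    "by any earlier prompt"
                )
        defined.add(p["output_placeholder"])
```

Running such a check before `execute_chain` surfaces chain-wiring mistakes before any API calls are made.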

Best Practices

  1. Always set templates before executing the chain
  2. Use meaningful placeholder names
  3. Handle streaming responses appropriately
  4. Consider temperature settings based on your use case
  5. Use system prompts for consistent context

License

MIT License
