
Project description

TasksPromptsChain

A Mini Python library for creating and executing chains of prompts using OpenAI's API with streaming support and output template formatting.

Features

  • Sequential prompt chain execution
  • Streaming responses
  • Template-based output formatting
  • System prompt support
  • Placeholder replacement between prompts
  • Multiple output formats (JSON, Markdown, CSV, Text)
  • Async/await support

Dependencies

Install the typing-extensions and openai Python packages:

pip install typing-extensions
pip install openai

Installation from source code

For users installing from the GitHub source repository:

pip install -r requirements/requirements.txt

For developers installing from the GitHub source repository:

pip install -r requirements/requirements.txt
pip install -r requirements/requirements-dev.txt

Quick Start

import asyncio

from tasks_prompts_chain import TasksPromptsChain

async def main():
    # Initialize the chain
    chain = TasksPromptsChain(
        model="gpt-3.5-turbo",
        api_key="your-api-key",
        final_result_placeholder="design_result"
    )

    # Define your prompts
    prompts = [
        {
            "prompt": "Create a design concept for a luxury chocolate bar",
            "output_format": "TEXT",
            "output_placeholder": "design_concept"
        },
        {
            "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
            "output_format": "JSON",
            "output_placeholder": "color_palette"
        }
    ]

    # Stream the responses
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="", flush=True)

    # Get specific results
    design = chain.get_result("design_concept")
    colors = chain.get_result("color_palette")

asyncio.run(main())

Advanced Usage

Using Templates

# Set output template before execution
chain.template_output("""
<result>
    <design>
    ### Design Concept:
    {{design_concept}}
    </design>
    
    <colors>
    ### Color Palette:
    {{color_palette}}
    </colors>
</result>
""")
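Conceptually, rendering the template amounts to substituting each `{{name}}` placeholder with the result stored under that name. The sketch below is an illustrative stand-in, not the library's actual implementation:

```python
import re

def render_template(template: str, results: dict) -> str:
    """Replace each {{name}} placeholder with its stored result.

    Conceptual sketch only; the library's internal rendering may differ.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        # Leave unknown placeholders intact rather than failing.
        return results.get(key, match.group(0))

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

print(render_template(
    "Concept: {{design_concept}}, Palette: {{color_palette}}",
    {"design_concept": "art deco", "color_palette": "gold/ivory"},
))
# → Concept: art deco, Palette: gold/ivory
```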

Using System Prompts

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    system_prompt="You are a professional design expert specializing in luxury products",
    system_apply_to_all_prompts=True
)

Custom API Endpoint

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    base_url="https://your-custom-endpoint.com/v1"
)

API Reference

TasksPromptsChain Class

Constructor Parameters

  • model (str): The model identifier (e.g., 'gpt-3.5-turbo')
  • api_key (str): Your OpenAI API key
  • final_result_placeholder (str): Name for the final result placeholder
  • system_prompt (Optional[str]): System prompt for context
  • system_apply_to_all_prompts (Optional[bool]): Whether to apply the system prompt to every prompt in the chain
  • base_url (Optional[str]): Custom API endpoint URL

Methods

  • execute_chain(prompts: List[Dict], temperature: float = 0.7) -> AsyncGenerator[str, None]

    • Executes the prompt chain and streams responses
  • template_output(template: str) -> None

    • Sets the output template format
  • get_result(placeholder: str) -> Optional[str]

    • Retrieves a specific result by placeholder
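The get_result behavior can be pictured as a lookup in a mapping keyed by placeholder name, returning None for unknown keys (hence the Optional[str] return type). The class below is a hypothetical stand-in for illustration, not the library's internal implementation:

```python
from typing import Dict, Optional

class ResultStore:
    """Sketch of get_result semantics: results keyed by placeholder name."""

    def __init__(self) -> None:
        self._results: Dict[str, str] = {}

    def set_result(self, placeholder: str, value: str) -> None:
        self._results[placeholder] = value

    def get_result(self, placeholder: str) -> Optional[str]:
        # Unknown placeholders yield None instead of raising.
        return self._results.get(placeholder)

store = ResultStore()
store.set_result("design_concept", "minimalist gold foil")
print(store.get_result("design_concept"))  # minimalist gold foil
print(store.get_result("missing"))         # None
```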

Prompt Format

Each prompt in the chain can be defined as a dictionary:

{
    "prompt": str,           # The actual prompt text
    "output_format": str,    # "JSON", "MARKDOWN", "CSV", or "TEXT"
    "output_placeholder": str # Identifier for accessing this result
}
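A dictionary in this shape can be checked up front before running the chain. The validator below is an illustrative helper (its name and error wording are assumptions, not part of the library's API):

```python
ALLOWED_FORMATS = {"JSON", "MARKDOWN", "CSV", "TEXT"}
REQUIRED_KEYS = ("prompt", "output_format", "output_placeholder")

def validate_prompt(entry: dict, index: int) -> None:
    """Raise ValueError naming the prompt number if an entry is malformed.

    Illustrative helper, not part of the library's API.
    """
    for key in REQUIRED_KEYS:
        if key not in entry:
            raise ValueError(f"Prompt #{index}: missing required key '{key}'")
    if entry["output_format"] not in ALLOWED_FORMATS:
        raise ValueError(
            f"Prompt #{index}: output_format must be one of {sorted(ALLOWED_FORMATS)}"
        )

# A well-formed entry passes silently:
validate_prompt(
    {"prompt": "Suggest a name", "output_format": "TEXT", "output_placeholder": "name"},
    index=1,
)
```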

Error Handling

The library includes comprehensive error handling:

  • Template validation
  • API error handling
  • Placeholder validation

Errors are raised with descriptive messages indicating the specific issue and prompt number where the error occurred.
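Because execute_chain streams asynchronously, a caller typically consumes it inside try/except so that a mid-chain failure can be handled cleanly. The sketch below substitutes a stub async generator (fake_chain) for the real chain to show the pattern; the stub and its error message are illustrative assumptions:

```python
import asyncio
from typing import AsyncGenerator, Dict, List

async def fake_chain(prompts: List[Dict]) -> AsyncGenerator[str, None]:
    """Stub standing in for chain.execute_chain(); not the real library."""
    if not prompts:
        raise ValueError("Prompt #0: chain received an empty prompt list")
    for p in prompts:
        yield f"[{p['output_placeholder']}] done\n"

async def run(prompts: List[Dict]) -> str:
    collected = []
    try:
        async for chunk in fake_chain(prompts):
            collected.append(chunk)
    except ValueError as err:
        # Errors carry the prompt number, so log it and decide whether to retry.
        return f"chain failed: {err}"
    return "".join(collected)

print(asyncio.run(run([{"output_placeholder": "design_concept"}])))
# → [design_concept] done
```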

Best Practices

  1. Always set templates before executing the chain
  2. Use meaningful placeholder names
  3. Handle streaming responses appropriately
  4. Consider temperature settings based on your use case
  5. Use system prompts for consistent context

License

MIT License

Download files

Download the file for your platform.

Source Distribution

tasks_prompts_chain-0.0.2.tar.gz (10.2 kB)

Uploaded Source

Built Distribution


tasks_prompts_chain-0.0.2-py3-none-any.whl (10.0 kB)

Uploaded Python 3

File details

Details for the file tasks_prompts_chain-0.0.2.tar.gz.

File metadata

  • Download URL: tasks_prompts_chain-0.0.2.tar.gz
  • Upload date:
  • Size: 10.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Hashes for tasks_prompts_chain-0.0.2.tar.gz:

  • SHA256: 11260c7b5c0e36ab4b7c93c2dacc1b1f64a717f3f714392ed3ac8bced3c4a0bb
  • MD5: 8ecfc27f2c510f1090f1232674ab6552
  • BLAKE2b-256: e81a3f0ecd04aac86f81ab2594e9ad7d51288c375e786749304f190f52faf593


Provenance

The following attestation bundles were made for tasks_prompts_chain-0.0.2.tar.gz:

Publisher: publish.yml on smirfolio/tasks_prompts_chain

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file tasks_prompts_chain-0.0.2-py3-none-any.whl.

File metadata

File hashes

Hashes for tasks_prompts_chain-0.0.2-py3-none-any.whl:

  • SHA256: 3bda7a7f9bb9c1ac978421baa84a7591113a61279d7b32b15fd4af607d975147
  • MD5: 5ba2be47d8892c24bb9f1c5a36598539
  • BLAKE2b-256: 6edf15fc423f5bfb013e8d884120df28cb63986564e561e65d0d7c90798800e8


Provenance

The following attestation bundles were made for tasks_prompts_chain-0.0.2-py3-none-any.whl:

Publisher: publish.yml on smirfolio/tasks_prompts_chain

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
