
TasksPromptsChain

A mini Python library for creating and executing chains of prompts using OpenAI's API, with streaming support and output-template formatting.

Features

  • Sequential prompt chain execution
  • Streaming responses
  • Template-based output formatting
  • System prompt support
  • Placeholder replacement between prompts
  • Multiple output formats (JSON, Markdown, CSV, Text)
  • Async/await support
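
The placeholder-replacement feature above can be sketched in plain Python. This is an illustrative stand-alone function, not the library's internal implementation; it assumes results from earlier prompts are stored in a dict and referenced as {{name}} in later prompts:

```python
import re

def fill_placeholders(prompt: str, results: dict) -> str:
    """Replace each {{name}} in the prompt with the stored result for that name."""
    def lookup(match: re.Match) -> str:
        key = match.group(1)
        if key not in results:
            raise KeyError(f"No result stored for placeholder '{key}'")
        return results[key]
    return re.sub(r"\{\{(\w+)\}\}", lookup, prompt)

results = {"design_concept": "a minimalist gold-foil wrapper"}
print(fill_placeholders(
    "Based on this concept: {{design_concept}}, suggest a color palette",
    results,
))
```

Each prompt's output is saved under its placeholder name, so any later prompt in the chain can reference it.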

Dependencies

Install the typing-extensions and openai Python packages:

pip install typing-extensions
pip install openai

To install the library:

pip install tasks_prompts_chain

Installation from source code

For users installing from the GitHub source repo:

pip install -r requirements/requirements.txt

For developers installing from the GitHub source repo:

pip install -r requirements/requirements.txt
pip install -r requirements/requirements-dev.txt

Quick Start

import asyncio

from tasks_prompts_chain import TasksPromptsChain

async def main():
    # Initialize the chain
    chain = TasksPromptsChain(
        model="gpt-3.5-turbo",
        api_key="your-api-key",
        final_result_placeholder="design_result"
    )

    # Define your prompts
    prompts = [
        {
            "prompt": "Create a design concept for a luxury chocolate bar",
            "output_format": "TEXT",
            "output_placeholder": "design_concept"
        },
        {
            "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
            "output_format": "JSON",
            "output_placeholder": "color_palette"
        }
    ]

    # Stream the responses
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="", flush=True)

    # Get specific results
    design = chain.get_result("design_concept")
    colors = chain.get_result("color_palette")

if __name__ == "__main__":
    asyncio.run(main())

Advanced Usage

Using Templates

# Set output template before execution
chain.template_output("""
<result>
    <design>
    ### Design Concept:
    {{design_concept}}
    </design>
    
    <colors>
    ### Color Palette:
    {{color_palette}}
    </colors>
</result>
""")

Using System Prompts

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    system_prompt="You are a professional design expert specialized in luxury products",
    system_apply_to_all_prompts=True
)

Custom API Endpoint

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    base_url="https://your-custom-endpoint.com/v1"
)

API Reference

TasksPromptsChain Class

Constructor Parameters

  • model (str): The model identifier (e.g., 'gpt-3.5-turbo')
  • api_key (str): Your OpenAI API key
  • final_result_placeholder (str): Name for the final result placeholder
  • system_prompt (Optional[str]): System prompt for context
  • system_apply_to_all_prompts (Optional[bool]): Apply system prompt to all prompts
  • base_url (Optional[str]): Custom API endpoint URL

Methods

  • execute_chain(prompts: List[Dict], temperature: float = 0.7) -> AsyncGenerator[str, None]
    Executes the prompt chain and streams responses
  • template_output(template: str) -> None
    Sets the output template format
  • get_result(placeholder: str) -> Optional[str]
    Retrieves a specific result by placeholder

Prompt Format

Each prompt in the chain can be defined as a dictionary:

{
    "prompt": str,           # The actual prompt text
    "output_format": str,    # "JSON", "MARKDOWN", "CSV", or "TEXT"
    "output_placeholder": str # Identifier for accessing this result
}
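
A minimal validation sketch for this dictionary shape. The function and error messages here are illustrative assumptions, not the library's own checks:

```python
ALLOWED_FORMATS = {"JSON", "MARKDOWN", "CSV", "TEXT"}

def validate_prompt(entry: dict, index: int) -> None:
    """Raise ValueError naming the prompt number if a required key is missing or invalid."""
    for key in ("prompt", "output_format", "output_placeholder"):
        if key not in entry:
            raise ValueError(f"Prompt #{index}: missing required key '{key}'")
    if entry["output_format"] not in ALLOWED_FORMATS:
        raise ValueError(
            f"Prompt #{index}: unsupported output_format '{entry['output_format']}'"
        )

# A well-formed prompt passes silently.
validate_prompt(
    {"prompt": "Hi", "output_format": "TEXT", "output_placeholder": "greeting"},
    1,
)
```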

Error Handling

The library includes comprehensive error handling:

  • Template validation
  • API error handling
  • Placeholder validation

Errors are raised with descriptive messages indicating the specific issue and prompt number where the error occurred.
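
One way to handle such errors when consuming the stream, shown here with a stand-in async generator since the real chain needs an API key; the stub and its error message are illustrative, not the library's actual behavior:

```python
import asyncio

async def failing_chain():
    """Stand-in for chain.execute_chain(): streams chunks, then fails mid-stream."""
    yield "partial "
    yield "output "
    raise RuntimeError("API error at prompt #2: placeholder 'color_palette' not found")

async def main():
    collected = []
    try:
        async for chunk in failing_chain():
            collected.append(chunk)
    except RuntimeError as exc:
        # Chunks received before the failure are still available here.
        print(f"Chain failed after {len(collected)} chunks: {exc}")

asyncio.run(main())
```

Because responses stream chunk by chunk, catching the exception around the async for loop lets you keep any partial output produced before the failure.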

Best Practices

  1. Always set templates before executing the chain
  2. Use meaningful placeholder names
  3. Handle streaming responses appropriately
  4. Consider temperature settings based on your use case
  5. Use system prompts for consistent context

License

MIT License
