
TasksPromptsChain

A mini Python library for creating and executing chains of prompts using OpenAI's API, with streaming support and output template formatting.

Features

  • Sequential prompt chain execution
  • Streaming responses
  • Template-based output formatting
  • System prompt support
  • Placeholder replacement between prompts
  • Multiple output formats (JSON, Markdown, CSV, Text)
  • Async/await support
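Placeholder replacement between prompts can be pictured as plain `{{name}}` substitution: each prompt's output is stored under its placeholder, and later prompts reference it. The sketch below illustrates the idea with a hypothetical `fill_placeholders` helper; it is not the library's internal implementation:

```python
import re

def fill_placeholders(prompt: str, results: dict) -> str:
    """Replace each {{name}} token with the stored result for that name."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: results[m.group(1)], prompt)

results = {"design_concept": "a minimalist gold-foil wrapper"}
print(fill_placeholders(
    "Based on this concept: {{design_concept}}, suggest a color palette",
    results,
))
# prints: Based on this concept: a minimalist gold-foil wrapper, suggest a color palette
```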

Dependencies

Install the typing-extensions and openai Python packages:

pip install typing-extensions
pip install openai

To install the library:

pip install tasks_prompts_chain

Installation from source code

For users installing from the GitHub source repository:

pip install -r requirements/requirements.txt

For developers working from the GitHub source repository:

pip install -r requirements/requirements.txt
pip install -r requirements/requirements-dev.txt

Quick Start

import asyncio

from tasks_prompts_chain import TasksPromptsChain

async def main():
    # Initialize the chain
    chain = TasksPromptsChain(
        model="gpt-3.5-turbo",
        api_key="your-api-key",
        final_result_placeholder="design_result"
    )

    # Define your prompts
    prompts = [
        {
            "prompt": "Create a design concept for a luxury chocolate bar",
            "output_format": "TEXT",
            "output_placeholder": "design_concept"
        },
        {
            "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
            "output_format": "JSON",
            "output_placeholder": "color_palette"
        }
    ]

    # Stream the responses
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="", flush=True)

    # Get specific results
    design = chain.get_result("design_concept")
    colors = chain.get_result("color_palette")

if __name__ == "__main__":
    asyncio.run(main())

Advanced Usage

Using System Prompts

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    system_prompt="You are a professional design expert specialized in luxury products",
    system_apply_to_all_prompts=True
)
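When system_apply_to_all_prompts is True, the system prompt is presumably prepended to every request's message list rather than only the first. The build_messages helper below is our own illustration of that behavior, not part of the library:

```python
from typing import Optional

def build_messages(user_prompt: str,
                   system_prompt: Optional[str] = None,
                   apply_system: bool = False) -> list:
    """Assemble one request's message list, optionally led by the system prompt."""
    messages = []
    if system_prompt and apply_system:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    "Suggest a color palette",
    system_prompt="You are a professional design expert specialized in luxury products",
    apply_system=True,
)
print([m["role"] for m in msgs])  # prints: ['system', 'user']
```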

Custom API Endpoint

chain = TasksPromptsChain(
    model="gpt-3.5-turbo",
    api_key="your-api-key",
    final_result_placeholder="result",
    base_url="https://your-custom-endpoint.com/v1"
)

Using Templates

You must call this setter method before executing the chain (chain.execute_chain(prompts)):

# Set output template before execution
chain.template_output("""
<result>
    <design>
    ### Design Concept:
    {{design_concept}}
    </design>
    
    <colors>
    ### Color Palette:
    {{color_palette}}
    </colors>
</result>
""")

Then retrieve the final result rendered in the template:

# Print the final result in the formatted template
print(chain.get_final_result_within_template())

API Reference

TasksPromptsChain Class

Constructor Parameters

  • model (str): The model identifier (e.g., 'gpt-3.5-turbo')
  • api_key (str): Your OpenAI API key
  • final_result_placeholder (str): Name for the final result placeholder
  • system_prompt (Optional[str]): System prompt for context
  • system_apply_to_all_prompts (Optional[bool]): Apply system prompt to all prompts
  • base_url (Optional[str]): Custom API endpoint URL

Methods

  • execute_chain(prompts: List[Dict], temperature: float = 0.7) -> AsyncGenerator[str, None]

    • Executes the prompt chain and streams responses
  • template_output(template: str) -> None

    • Sets the output template format
  • get_final_result_within_template() -> Optional[str]

    • Retrieves the final result rendered with the template defined in template_output()
  • get_result(placeholder: str) -> Optional[str]

    • Retrieves a specific result by placeholder
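Since execute_chain returns an async generator, it must be consumed with async for inside a coroutine. The pattern looks like this, with a toy generator standing in for the real chain (no API calls involved):

```python
import asyncio

async def toy_chain(prompts):
    """Stand-in for execute_chain: yields text chunks instead of calling an API."""
    for p in prompts:
        yield f"[{p['output_placeholder']}] done. "

async def main() -> str:
    chunks = []
    async for chunk in toy_chain([{"output_placeholder": "design_concept"}]):
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(main()))  # prints: [design_concept] done. 
```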

Prompt Format

Each prompt in the chain can be defined as a dictionary:

{
    "prompt": str,           # The actual prompt text
    "output_format": str,    # "JSON", "MARKDOWN", "CSV", or "TEXT"
    "output_placeholder": str # Identifier for accessing this result
}
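For editor and type-checker support, this shape can be written as a TypedDict; the PromptTask name below is our own, not something the library exports:

```python
from typing import TypedDict

class PromptTask(TypedDict):
    prompt: str              # the actual prompt text
    output_format: str       # "JSON", "MARKDOWN", "CSV", or "TEXT"
    output_placeholder: str  # identifier for accessing this result

task: PromptTask = {
    "prompt": "Create a design concept for a luxury chocolate bar",
    "output_format": "TEXT",
    "output_placeholder": "design_concept",
}
```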

Error Handling

The library includes comprehensive error handling:

  • Template validation
  • API error handling
  • Placeholder validation

Errors are raised with descriptive messages indicating the specific issue and prompt number where the error occurred.
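If you want to fail fast in your own code, a small pre-flight validator can mirror this behavior before the chain runs. The function below is our sketch, not the library's validator:

```python
VALID_FORMATS = {"JSON", "MARKDOWN", "CSV", "TEXT"}
REQUIRED_KEYS = {"prompt", "output_format", "output_placeholder"}

def validate_prompts(prompts: list) -> None:
    """Raise ValueError naming the offending prompt number, echoing the
    descriptive-message style the library documents."""
    for i, p in enumerate(prompts):
        missing = REQUIRED_KEYS - p.keys()
        if missing:
            raise ValueError(f"Prompt {i}: missing keys {sorted(missing)}")
        if p["output_format"] not in VALID_FORMATS:
            raise ValueError(f"Prompt {i}: invalid output_format {p['output_format']!r}")

# A well-formed prompt passes silently
validate_prompts([{"prompt": "Create a design concept",
                   "output_format": "TEXT",
                   "output_placeholder": "design_concept"}])
```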

Best Practices

  1. Always set templates before executing the chain
  2. Use meaningful placeholder names
  3. Handle streaming responses appropriately
  4. Consider temperature settings based on your use case
  5. Use system prompts for consistent context

License

MIT License

