# TasksPromptsChain

A mini Python library for creating and executing chains of prompts across multiple LLM providers, with streaming support and output template formatting.
## Features
- Sequential prompt chain execution
- Streaming responses
- Template-based output formatting
- System prompt support
- Placeholder replacement between prompts
- Multiple output formats (JSON, Markdown, CSV, Text)
- Async/await support
- Support for multiple LLM providers (OpenAI, Anthropic, Cerebras, etc.)
- Multi-model support - use different models for different prompts in the chain
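The placeholder-replacement feature can be illustrated with a small, library-independent sketch. The `fill_placeholders` helper below is hypothetical and only demonstrates the concept; it is not the package's internal implementation:

```python
import re

def fill_placeholders(prompt: str, results: dict) -> str:
    """Replace every {{name}} in the prompt with the stored result for that name."""
    def replace(match: re.Match) -> str:
        key = match.group(1)
        if key not in results:
            raise KeyError(f"No result stored for placeholder '{key}'")
        return results[key]
    return re.sub(r"\{\{(\w+)\}\}", replace, prompt)

results = {"design_concept": "a matte-black wrapper with gold foil"}
print(fill_placeholders(
    "Based on this concept: {{design_concept}}, suggest a color palette",
    results,
))
# → Based on this concept: a matte-black wrapper with gold foil, suggest a color palette
```

Each prompt's output is stored under its `output_placeholder` name, so later prompts in the chain can reference it this way.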
## Dependencies

Install `typing-extensions` plus the SDK for each LLM provider you plan to use.

For OpenAI:

```shell
pip install typing-extensions
pip install openai
```

For Anthropic:

```shell
pip install typing-extensions
pip install anthropic
```

For Cerebras:

```shell
pip install typing-extensions
pip install cerebras
```

To install the library itself:

```shell
pip install tasks_prompts_chain
```
## Installation from source

For users installing from the GitHub repo:

```shell
pip install -r requirements/requirements.txt
```

For developers working from the GitHub repo:

```shell
pip install -r requirements/requirements.txt
pip install -r requirements/requirements-dev.txt
```
## Quick Start

```python
import asyncio

from tasks_prompts_chain import TasksPromptsChain
from openai import AsyncOpenAI
from anthropic import AsyncAnthropic
from cerebras import AsyncCerebras

async def main():
    # Initialize the chain with multiple LLM configurations
    llm_configs = [
        {
            "llm_id": "gpt",              # Unique identifier for this LLM
            "llm_class": AsyncOpenAI,     # LLM SDK class
            "model_options": {
                "model": "gpt-4o",
                "api_key": "your-openai-api-key",
                "temperature": 0.1,
                "max_tokens": 4120,
            },
        },
        {
            "llm_id": "claude",           # Unique identifier for this LLM
            "llm_class": AsyncAnthropic,  # LLM SDK class
            "model_options": {
                "model": "claude-3-sonnet-20240229",
                "api_key": "your-anthropic-api-key",
                "temperature": 0.1,
                "max_tokens": 8192,
            },
        },
        {
            "llm_id": "llama",            # Unique identifier for this LLM
            "llm_class": AsyncCerebras,   # LLM SDK class
            "model_options": {
                "model": "llama-3.3-70b",
                "api_key": "your-cerebras-api-key",
                "base_url": "https://api.cerebras.ai/v1",
                "temperature": 0.1,
                "max_tokens": 4120,
            },
        },
    ]

    chain = TasksPromptsChain(
        llm_configs,
        final_result_placeholder="design_result",
    )

    # Define your prompts - specify which LLM to use for each prompt
    prompts = [
        {
            "prompt": "Create a design concept for a luxury chocolate bar",
            "output_format": "TEXT",
            "output_placeholder": "design_concept",
            "llm_id": "gpt",     # Use the GPT model for this prompt
        },
        {
            "prompt": "Based on this concept: {{design_concept}}, suggest a color palette",
            "output_format": "JSON",
            "output_placeholder": "color_palette",
            "llm_id": "claude",  # Use the Claude model for this prompt
        },
        {
            "prompt": "Based on the design and colors: {{design_concept}} and {{color_palette}}, suggest packaging materials",
            "output_format": "MARKDOWN",
            "output_placeholder": "packaging",
            "llm_id": "llama",   # Use the Cerebras model for this prompt
        },
    ]

    # Stream the responses
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="", flush=True)

    # Get specific results
    design = chain.get_result("design_concept")
    colors = chain.get_result("color_palette")
    packaging = chain.get_result("packaging")

asyncio.run(main())
```
## Advanced Usage

### Using System Prompts

```python
chain = TasksPromptsChain(
    llm_configs=[
        {
            "llm_id": "default_model",
            "llm_class": AsyncOpenAI,
            "model_options": {
                "model": "gpt-4o",
                "api_key": "your-openai-api-key",
                "temperature": 0.1,
                "max_tokens": 4120,
            },
        }
    ],
    final_result_placeholder="result",
    system_prompt="You are a professional design expert specialized in luxury products",
    system_apply_to_all_prompts=True,
)
```
### Using Cerebras Models

```python
from cerebras import AsyncCerebras

llm_configs = [
    {
        "llm_id": "cerebras",
        "llm_class": AsyncCerebras,
        "model_options": {
            "model": "llama-3.3-70b",
            "api_key": "your-cerebras-api-key",
            "base_url": "https://api.cerebras.ai/v1",
            "temperature": 0.1,
            "max_tokens": 4120,
        },
    }
]

chain = TasksPromptsChain(
    llm_configs,
    final_result_placeholder="result",
)
```
### Custom API Endpoint

```python
llm_configs = [
    {
        "llm_id": "custom_endpoint",
        "llm_class": AsyncOpenAI,
        "model_options": {
            "model": "your-custom-model",
            "api_key": "your-api-key",
            "base_url": "https://your-custom-endpoint.com/v1",
            "temperature": 0.1,
            "max_tokens": 4120,
        },
    }
]

chain = TasksPromptsChain(
    llm_configs,
    final_result_placeholder="result",
)
```
### Using Templates

Call `template_output()` before executing the chain (`chain.execute_chain(prompts)`):

```python
# Set the output template before execution
chain.template_output("""
<result>
    <design>
    ### Design Concept:
    {{design_concept}}
    </design>

    <colors>
    ### Color Palette:
    {{color_palette}}
    </colors>
</result>
""")
```

Then retrieve the final result rendered in the template:

```python
# Print the final result in the formatted template
print(chain.get_final_result_within_template())
```
## API Reference

### TasksPromptsChain Class

#### Constructor Parameters

- `llm_configs` (List[Dict]): List of LLM configurations, each containing:
  - `llm_id` (str): Unique identifier for this LLM configuration
  - `llm_class`: The LLM SDK class to use (e.g., `AsyncOpenAI`, `AsyncAnthropic`, `AsyncCerebras`)
  - `model_options` (Dict): Configuration for the LLM:
    - `model` (str): The model identifier
    - `api_key` (str): Your API key for the LLM provider
    - `temperature` (float): Temperature setting for response generation
    - `max_tokens` (int): Maximum tokens in generated responses
    - `base_url` (Optional[str]): Custom API endpoint URL
- `system_prompt` (Optional[str]): System prompt providing shared context
- `final_result_placeholder` (str): Name for the final result placeholder
- `system_apply_to_all_prompts` (Optional[bool]): Apply the system prompt to every prompt in the chain
#### Methods

- `execute_chain(prompts: List[Dict], streamout: bool = True) -> AsyncGenerator[str, None]`: Executes the prompt chain and streams responses
- `template_output(template: str) -> None`: Sets the output template format
- `get_final_result_within_template() -> Optional[str]`: Retrieves the final result rendered in the template set via `template_output()`
- `get_result(placeholder: str) -> Optional[str]`: Retrieves a specific result by placeholder
### Prompt Format

Each prompt in the chain is defined as a dictionary:

```python
{
    "prompt": str,              # The actual prompt text
    "output_format": str,       # "JSON", "MARKDOWN", "CSV", or "TEXT"
    "output_placeholder": str,  # Identifier for accessing this result
    "llm_id": str,              # Optional: ID of the LLM to use for this prompt
}
```
## Supported LLM Providers

TasksPromptsChain currently supports the following LLM providers:

- **OpenAI** - via `AsyncOpenAI` from the `openai` package
- **Anthropic** - via `AsyncAnthropic` from the `anthropic` package
- **Cerebras** - via `AsyncCerebras` from the `cerebras` package
Each provider has different capabilities and models. The library adapts the API calls to work with each provider's specific requirements.
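The idea of adapting per-provider call shapes behind one dispatch point can be sketched with stub clients. Everything below is illustrative only; the real library's internals and the real SDKs' method signatures differ:

```python
import asyncio

class StubOpenAI:
    """Stand-in for an OpenAI-style async client."""
    async def complete(self, model: str, prompt: str) -> str:
        return f"[openai:{model}] {prompt}"

class StubAnthropic:
    """Stand-in for an Anthropic-style async client."""
    async def complete(self, model: str, prompt: str) -> str:
        return f"[anthropic:{model}] {prompt}"

async def run_prompt(clients: dict, llm_id: str, model: str, prompt: str) -> str:
    """Look up the client registered under llm_id and dispatch the prompt to it."""
    if llm_id not in clients:
        raise ValueError(f"Unknown llm_id: {llm_id!r}")
    return await clients[llm_id].complete(model, prompt)

clients = {"gpt": StubOpenAI(), "claude": StubAnthropic()}
print(asyncio.run(run_prompt(clients, "gpt", "gpt-4o", "Hello")))
# → [openai:gpt-4o] Hello
```

This registry-by-`llm_id` pattern is what lets each prompt in a chain name the model it should run on.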
## Error Handling
The library includes comprehensive error handling:
- Template validation
- API error handling
- Placeholder validation
- LLM validation (checks if specified LLM ID exists)
Errors are raised with descriptive messages indicating the specific issue and prompt number where the error occurred.
## Best Practices

- Always set templates before executing the chain
- Use meaningful placeholder names
- Handle streaming responses appropriately
- Choose appropriate models for different types of tasks
- Use system prompts for consistent context
- Select the best provider for specific tasks:
  - OpenAI is a solid general-purpose choice
  - Anthropic (Claude) excels at longer contexts and complex reasoning
  - Cerebras is geared toward fast, high-throughput inference
## How You Can Get Involved
✅ Try out tasks_prompts_chain: Give our software a try in your own setup and let us know how it goes - your experience helps us improve!
✅ Find a bug: Found something that doesn't work quite right? We'd appreciate your help in documenting it so we can fix it together.
✅ Fixing Bugs: Even small code contributions make a big difference! Pick an issue that interests you and share your solution with us.
✅ Share your thoughts: Have an idea that would make this project more useful? We're excited to hear your thoughts and explore new possibilities together!
Your contributions, big or small, truly matter to us. We're grateful for any help you can provide and look forward to welcoming you to our community!
## Developer Contribution Workflow

1. **Fork the Repository**: Create your own copy of the project by clicking the "Fork" button on the GitHub repository.
2. **Clone Your Fork**:

   ```shell
   git clone git@github.com:<your-username>/tasks_prompts_chain.git
   cd tasks_prompts_chain/
   ```

3. **Set Up the Development Environment**:

   ```shell
   # Create and activate a virtual environment
   python3 -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate

   # Install development dependencies
   pip install -r requirements/requirements-dev.txt
   ```

4. **Stay Updated**:

   ```shell
   # Add the upstream repository
   git remote add upstream https://github.com/original-owner/tasks_prompts_chain.git

   # Fetch and merge the latest changes from upstream
   git fetch upstream
   git merge upstream/main
   ```
### Making Changes

1. **Create a Feature Branch**:

   ```shell
   git checkout -b feature/your-feature-name
   # or
   git checkout -b bugfix/issue-you-are-fixing
   ```

2. **Implement Your Changes**:
   - Write tests for your changes when applicable
   - Ensure existing tests pass with `pytest`
   - Follow the project's code style guidelines

3. **Commit Your Changes**:

   ```shell
   git add .
   git commit -m "Your descriptive commit message"
   ```

4. **Push to Your Fork**:

   ```shell
   git push origin feature/your-feature-name
   ```

5. **Create a Pull Request**

6. **Code Review Process**:
   - Maintainers will review your PR
   - Address any requested changes
   - Once approved, your contribution will be merged!
## Release Notes

### 0.1.0 - Breaking Changes

- **Complete API redesign**: The constructor now requires a list of LLM configurations instead of a single LLM class
- **Multi-model support**: Use different models for different prompts in the chain
- **Constructor changes**: Replaced `AsyncLLmAi` and `model_options` with `llm_configs`
- **New provider support**: Added official support for Cerebras models
- **Removed dependencies**: No longer directly depends on the OpenAI SDK
- **Prompt configuration**: Added an `llm_id` field to prompt dictionaries to specify which LLM to use

Users upgrading from version 0.0.x will need to modify their code to use the new API structure.
## License

MIT License