A lightweight and extensible Python library for managing, versioning, and composing reusable prompt templates from YAML or text files.
Prompt Manager
A robust Python utility for loading, versioning, and managing prompt templates from YAML files or flat file directories.
Prompt Manager is designed to solve the problem of managing a large, complex, and evolving library of prompts. It allows you to compose prompts from reusable parts, manage versions (e.g., for A/B testing or model-specific tuning), and safely inject variables at runtime.
🚀 Core Features
- Centralized Prompt Management: Load all prompts from a single structured YAML file.
- Fallback Directory Loading: Optionally load simple prompts from a directory of `.txt` files.
- Prompt Versioning: Native support for versions (e.g., `v1`, `v2`) with a `_default` key.
- Recursive Composition: Build complex prompts from smaller, reusable components using `{include ...}` directives.
- Safe Variable Substitution: Format prompts with `{variable}` placeholders using a strict or non-strict API.
- Partial Formatting: Fill variables incrementally for chain-of-thought or multi-step prompt sequences.
- Configuration Validation: Validate prompt configurations at load time to catch errors early.
💾 Installation
```
pip install prompt_template_manager
```
⚡ Quick Start
1. Create your `prompts.yaml` file

```yaml
# prompts.yaml
system_persona:
  _default: v2
  _meta:
    description: "The standard AI persona."
  v1: "You are a helpful AI."
  v2: "You are a helpful, concise, and polite AI assistant."

summarize_task:
  _default: v1
  v1: |
    {include system_persona}
    Your task is to summarize the following document into {num_sentences} sentences.
    Document: {text}
    Your summary:
```
2. Use `PromptManager` in your Python code

```python
from prompt_template_manager import PromptManager

try:
    manager = PromptManager('prompts.yaml')
    prompt = manager.get('summarize_task')
    my_data = {
        "num_sentences": 3,
        "text": "The quick brown fox jumps over the lazy dog."
    }
    formatted_prompt = prompt.format(my_data)
    print(formatted_prompt)
except FileNotFoundError:
    print("Error: prompts.yaml not found.")
except (ValueError, KeyError) as e:
    print(f"Error loading or getting prompt: {e}")
```
Output:

```
You are a helpful, concise, and polite AI assistant.
Your task is to summarize the following document into 3 sentences.
Document: The quick brown fox jumps over the lazy dog.
Your summary:
```
🧩 YAML Configuration Schema
The PromptManager supports two main structures for defining prompts.
1. Versioned Prompts (Recommended)
A “versioned prompt” is a dictionary containing:
- `_default` (Required): The default version key (e.g., `v1`).
- `v{number}` (Required): One or more version keys containing the prompt text.
- `_meta` (Optional): Arbitrary metadata.
Example:
```yaml
system_persona_default:
  _default: v2
  _meta:
    description: "The standard AI persona."
    author: "Admin"
  v1: "You are a helpful AI."
  v2: "You are a helpful, concise, and polite AI assistant. You always answer the user's question directly."
```
2. Simple Prompts (Non-versioned)
For simple, static prompts:
```yaml
simple_greeting: "Hello, {name}. This is a simple, non-versioned prompt."
```
⚠️ Not recommended for complex systems — lacks versioning, metadata, and include support.
3. Include Directives – `{include prompt_name}`
Compose prompts from other prompts:
```yaml
system_persona:
  _default: v1
  v1: "You are a helpful AI."

output_format_json:
  _default: v1
  v1: "Your response MUST be a single, valid JSON object."

generate_user_profile:
  _default: v1
  _meta:
    description: "Generates a JSON profile from a user's bio."
  v1: |
    {include system_persona}
    {include output_format_json}
    Analyze the following user bio and generate a profile.
    Bio: {bio_text}
```
Circular references (e.g., prompt A includes B, and B includes A) raise an `ImportError`.
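The include mechanism can be pictured as a recursive text expansion with cycle detection. The sketch below is illustrative only (it is not the library's internal code) and assumes a flat `name -> text` mapping; it mirrors the documented behaviour of raising `ImportError` on a circular reference:

```python
import re

def resolve_includes(name, prompts, _stack=()):
    """Recursively expand {include ...} directives in prompts[name].

    Illustrative sketch, not the library's implementation. Raises
    ImportError on a circular include chain, matching the documented
    behaviour.
    """
    if name in _stack:
        chain = " -> ".join(_stack + (name,))
        raise ImportError(f"Circular include: {chain}")
    return re.sub(
        r"\{include ([\w-]+)\}",
        lambda m: resolve_includes(m.group(1), prompts, _stack + (name,)),
        prompts[name],
    )

prompts = {
    "system_persona": "You are a helpful AI.",
    "task": "{include system_persona}\nSummarize: {text}",
}
print(resolve_includes("task", prompts))
# → You are a helpful AI.
#   Summarize: {text}
```

Note that ordinary `{text}` placeholders pass through untouched; only `{include ...}` directives are expanded at resolution time.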
4. Variable Placeholders – `{variable_name}`
- Any text enclosed in `{}` is treated as a variable.
- Variables are filled using `.format()` or `.partial()`.
- Variable names are validated on load.
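To make the strict/non-strict distinction concrete, here is a minimal sketch of how such substitution can behave, assuming strict mode raises `KeyError` on a missing variable and non-strict mode leaves the placeholder in place (the library's exact semantics may differ):

```python
import string

def format_prompt(template, data, strict=True):
    """Fill {variable} placeholders in a template string.

    Strict mode raises KeyError on a missing variable; non-strict mode
    leaves unresolved placeholders intact. Illustrative sketch only.
    """
    if strict:
        return template.format(**data)  # KeyError if a variable is missing
    parts = []
    for literal, field, _spec, _conv in string.Formatter().parse(template):
        parts.append(literal)
        if field is not None:
            # Keep the placeholder verbatim when no value was supplied.
            parts.append(str(data[field]) if field in data else "{%s}" % field)
    return "".join(parts)

tpl = "Summarize {text} in {num_sentences} sentences."
print(format_prompt(tpl, {"text": "the doc", "num_sentences": 3}))
# → Summarize the doc in 3 sentences.
print(format_prompt(tpl, {"text": "the doc"}, strict=False))
# → Summarize the doc in {num_sentences} sentences.
```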
📁 Loading from a Directory
You can also load prompts from a directory:

```
my_txt_prompts/
├── greet.txt
└── farewell.txt
```

```python
manager = PromptManager('my_txt_prompts/')
prompt = manager.get('greet')
```
Notes:
- Only `.txt` files are loaded.
- Filenames become prompt names.
- Each prompt defaults to version `v1`.
- No support for metadata, includes, or multiple versions.
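The fallback loading rule amounts to mapping each `.txt` file's stem to its contents. A minimal sketch of that behaviour (not the library's actual loader, which also wraps each prompt as a `v1` version):

```python
from pathlib import Path
import tempfile

def load_txt_prompts(directory):
    """Map each .txt file's stem to its text content.

    Sketch of the documented fallback behaviour; illustrative only.
    """
    return {p.stem: p.read_text() for p in Path(directory).glob("*.txt")}

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "greet.txt").write_text("Hello, {name}!")
    (Path(d) / "notes.md").write_text("ignored")  # non-.txt files are skipped
    prompts = load_txt_prompts(d)

print(sorted(prompts))  # → ['greet']
```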
🧠 API Reference
Class: `PromptManager`

`__init__(self, source_path: str | Path)`
Loads and validates prompts from a YAML file or a directory of `.txt` files.
Raises: `FileNotFoundError`, `ValueError`, `yaml.YAMLError`

`get(self, name: str, version: int | None = None) -> Prompt`
Retrieves a fully resolved `Prompt` object.
Raises: `KeyError`, `ValueError`, `ImportError`

Class: `Prompt`

`format(self, data: dict, strict: bool = True) -> str`
Substitutes all variables.

`partial(self, data: dict) -> Prompt`
Partially fills variables, returning a new `Prompt`.

`get_raw_content(self) -> str`
Returns the resolved, unformatted prompt text.

`get_variables(self) -> dict`
Lists the remaining variable placeholders.

`get_meta(self) -> dict`
Returns the `_meta` dictionary (if any).
⚙️ Advanced Usage Examples
Example 1: Version A/B Testing
```python
manager = PromptManager('prompts.yaml')
prompt_v2 = manager.get('system_persona')             # default version (v2)
prompt_v1 = manager.get('system_persona', version=1)  # explicit version
print(f"Default: {prompt_v2.get_raw_content()}")
print(f"V1: {prompt_v1.get_raw_content()}")
```

Output:

```
Default: You are a helpful, concise, and polite AI assistant.
V1: You are a helpful AI.
```
Example 2: Partial Formatting (Chain-of-Thought)
```python
manager = PromptManager('prompts.yaml')
prompt = manager.get('summarize_task')
print(f"Original variables: {prompt.get_variables().keys()}")

partial_prompt = prompt.partial({
    "text": "This is a long document about the history of computing."
})
print(f"Partial variables: {partial_prompt.get_variables().keys()}")

print("--- Partial Content ---")
print(partial_prompt.get_raw_content())

final_prompt_str = partial_prompt.format({
    "num_sentences": 2
})
print("\n--- Final Content ---")
print(final_prompt_str)
```
✅ Validation Rules
Prompt Names
- Must be valid Python identifiers (e.g., `my_prompt`, `_internal_prompt`).

Variable Names
- Must start with a letter.
- May include letters, numbers, `-`, and `_`.
- Cannot end with `-` or `_`.
Include Names
- Follow the same rules as variable names.
Invalid configurations raise a `ValueError` during load.
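The rules above translate directly into simple checks. The following is an illustrative encoding of the stated rules, not the library's actual validator:

```python
import re

# Variable names: start with a letter; letters, digits, '-' and '_'
# allowed; must not end with '-' or '_'.
VAR_NAME = re.compile(r"^[A-Za-z](?:[A-Za-z0-9_-]*[A-Za-z0-9])?$")

def validate_variable_name(name):
    if not VAR_NAME.match(name):
        raise ValueError(f"Invalid variable name: {name!r}")

def validate_prompt_name(name):
    # Prompt names must be valid Python identifiers.
    if not name.isidentifier():
        raise ValueError(f"Invalid prompt name: {name!r}")

validate_variable_name("num-sentences")   # passes
validate_prompt_name("_internal_prompt")  # passes
```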
🧰 License
MIT License — Prompt Manager Contributors