# promptfw — Prompt Template Framework

5-layer Jinja2 template engine for LLM applications.
## Installation

```shell
pip install promptfw

# With accurate token counting:
pip install promptfw[tiktoken]

# All optional dependencies:
pip install promptfw[all]
```
## Quick Start

```python
from promptfw import PromptStack, PromptTemplate, TemplateLayer

stack = PromptStack()
stack.register(PromptTemplate(
    id="story.task.write",
    layer=TemplateLayer.TASK,
    template="Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    variables=["genre", "topic", "words"],
))

rendered = stack.render("story.task.write", {
    "genre": "fantasy",
    "topic": "a dragon who learns to code",
    "words": 500,
})

# rendered.system → system prompt (SYSTEM + FORMAT layers)
# rendered.user   → user prompt (CONTEXT* + TASK layers)
```
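Templates are rendered with Jinja2. As an illustration of what the substitution step does, here is a minimal stdlib-only stand-in (`render_minimal` is hypothetical, not part of promptfw, and handles only simple `{{ var }}` interpolation, none of Jinja2's filters or control flow):

```python
import re

def render_minimal(template: str, variables: dict) -> str:
    """Replace {{ name }} placeholders with values from `variables`."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = render_minimal(
    "Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    {"genre": "fantasy", "topic": "a dragon who learns to code", "words": 500},
)
# → "Write a fantasy story about a dragon who learns to code in 500 words."
```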
## 5-Layer Stack

```
SYSTEM          → Role & base behaviour (stable, cacheable)
FORMAT          → Format-specific rules (stable, cacheable)
CONTEXT         → Generic runtime context (dynamic)
CONTEXT_PROJECT → Project-level context (dynamic, v0.5.0)
CONTEXT_CHAPTER → Chapter-level context (dynamic, v0.5.0)
CONTEXT_SCENE   → Scene-level context (dynamic, v0.5.0)
TASK            → Concrete task (dynamic)
FEW_SHOT        → Examples (appended last)
```
```python
rendered = stack.render_stack(
    ["system.base", "format.roman", "context.project", "context.scene", "task.write_scene"],
    context={
        "role": "professional author",
        "project": "The Iron Throne",
        "current_scene": "The forest at night",
        "characters": "Alice, Bob",
    },
)
```
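A sketch of how the rendered layers could be split into the two prompts, following the layer roles above (the `split_stack` helper and its layer tuples are illustrative assumptions, not promptfw API):

```python
SYSTEM_LAYERS = ("SYSTEM", "FORMAT")  # stable, cacheable
USER_LAYERS = ("CONTEXT", "CONTEXT_PROJECT",
               "CONTEXT_CHAPTER", "CONTEXT_SCENE", "TASK")

def split_stack(rendered_parts: list) -> tuple:
    """Join (layer, text) pairs into (system_prompt, user_prompt)."""
    system = "\n\n".join(t for layer, t in rendered_parts if layer in SYSTEM_LAYERS)
    user = "\n\n".join(t for layer, t in rendered_parts if layer in USER_LAYERS)
    return system, user

system, user = split_stack([
    ("SYSTEM", "You are a professional author."),
    ("FORMAT", "Write in novel prose."),
    ("CONTEXT_SCENE", "Scene: the forest at night."),
    ("TASK", "Write the scene."),
])
```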
## Load Templates from YAML

```yaml
# templates/story/task/write_scene.yaml
id: story.task.write_scene
layer: task
template: |
  Write scene {{ scene_id }}: {{ scene_description }}
  Characters: {{ characters }}
```

```python
stack = PromptStack.from_directory("templates/")
rendered = stack.render("story.task.write_scene", context)
```
## LiteLLM / OpenAI Direct Output

```python
# render_to_messages() returns [{"role": ..., "content": ...}, ...] directly
messages = stack.render_to_messages(
    ["system.base", "few_shot.examples", "task.write"],
    context={...},
)

# Pass directly to litellm.completion() or openai.chat.completions.create()
```
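For reference, the message list produced here is the standard chat-completions shape; assembled by hand it amounts to (hypothetical helper, not promptfw API):

```python
def to_messages(system_prompt: str, user_prompt: str) -> list:
    """Build a chat-completions message list from two prompt strings."""
    messages = []
    if system_prompt:  # omit the system message when there is no system layer
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

to_messages("You are an author.", "Write the scene.")
# → [{"role": "system", "content": "You are an author."},
#    {"role": "user", "content": "Write the scene."}]
```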
## Format-Aware Filtering

```python
# for_format() returns a new stack with only matching templates.
# Templates with format_type=None are always included (format-agnostic).
stack = get_planning_stack()
messages = stack.for_format("academic").render_to_messages(
    ["planning.system", "planning.task.premise"],
    context={...},
)
```
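The filtering rule can be sketched in a few lines (the `Tmpl` class and `for_format` function here are illustrative, not the library's own):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tmpl:
    id: str
    format_type: Optional[str] = None  # None = format-agnostic

def for_format(templates: list, fmt: str) -> list:
    """Keep templates matching `fmt`, plus format-agnostic ones."""
    return [t for t in templates if t.format_type in (None, fmt)]

kept = for_format(
    [Tmpl("planning.system"), Tmpl("style.academic", "academic"), Tmpl("style.roman", "roman")],
    "academic",
)
# kept contains planning.system and style.academic, but not style.roman
```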
## Fallback Chains

```python
# render_with_fallback() tries patterns in order; the first match wins.
result = stack.render_with_fallback(
    [
        "writing.task.write_chapter.roman",
        "writing.task.write_chapter",
        "writing.task.default",
    ],
    context={...},
)

# get_or_fallback() at the registry level:
template = registry.get_or_fallback([
    "chapter_writer_v2",
    "chapter_writer_v1",
    "chapter_writer_default",
])
```
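First-match-wins resolution is straightforward to sketch against a plain dict (a hypothetical stand-in, not the real registry class):

```python
def get_or_fallback(registry: dict, candidates: list):
    """Return the first template whose id exists in `registry`."""
    for template_id in candidates:
        if template_id in registry:
            return registry[template_id]
    raise KeyError(f"no template found among: {candidates}")

registry = {"writing.task.default": "Write something."}
tmpl = get_or_fallback(registry, [
    "writing.task.write_chapter.roman",  # missing → skipped
    "writing.task.write_chapter",        # missing → skipped
    "writing.task.default",              # present → returned
])
# → "Write something."
```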
## Wildcard Lookup & Version Pinning

```python
# Wildcard lookup
template = stack.registry.get("roman.*.scene_generation")

# Version-pinned rendering
rendered = stack.render_stack(
    ["system.base@1.0.0", "format.roman", "task.write@2.1.0"],
    context,
)
```
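One plausible way to implement the wildcard and `@version` matching, using the stdlib `fnmatch` module (this is an assumption about the matching rules, not promptfw's actual implementation):

```python
from fnmatch import fnmatch

def lookup(registry: dict, pattern: str):
    """Resolve an id that may contain `*` wildcards and/or an @version pin."""
    template_id, _, version = pattern.partition("@")
    for key in registry:  # insertion order decides ties
        key_id, _, key_version = key.partition("@")
        if fnmatch(key_id, template_id) and (not version or key_version == version):
            return registry[key]
    return None

registry = {
    "roman.format.scene_generation@1.0.0": "v1 rules",
    "roman.format.scene_generation@2.0.0": "v2 rules",
}
lookup(registry, "roman.*.scene_generation@2.0.0")  # → "v2 rules"
```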
## Markdown Response Parsing

```python
from promptfw import extract_field, extract_json, extract_json_list, extract_json_strict

# LLMs often respond with Markdown key:value text instead of JSON
text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"

extract_field(text, "Premise")              # → "A blacksmith discovers magic."
extract_field(text, "Themes")               # → "Identity, Power"
extract_field(text, "Missing", default="")  # → ""

# JSON extraction from fenced blocks or raw text
data = extract_json(llm_response)         # dict | None
items = extract_json_list(llm_response)   # list
data = extract_json_strict(llm_response)  # dict, or raises LLMResponseError
```
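The behaviour of `extract_field` on the bold-label form shown above can be approximated with a single regular expression (a sketch only; the real function may accept more label styles):

```python
import re

def extract_field_sketch(text: str, field: str, default=None):
    """Pull the value after a bold '**Field:**' label, up to end of line."""
    match = re.search(rf"\*\*{re.escape(field)}:\*\*\s*(.+)", text)
    return match.group(1).strip() if match else default

text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
extract_field_sketch(text, "Premise")  # → "A blacksmith discovers magic."
```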
## Token Estimation

```python
# tokens_estimate is auto-calculated at construction time when tiktoken is installed
from promptfw import PromptTemplate, TemplateLayer

tmpl = PromptTemplate(
    id="task.write",
    layer=TemplateLayer.TASK,
    template="Write a short story about {{ topic }}.",
)
print(tmpl.tokens_estimate)  # e.g. 9 (via tiktoken cl100k_base)
```
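When tiktoken is not installed, libraries of this kind typically fall back to a character-based heuristic. Whether promptfw does exactly this is an assumption, but the common "about 4 characters per token" rule of thumb for English text looks like:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

rough_token_estimate("Write a short story about {{ topic }}.")
# → 9, close to the tiktoken count in the example above
```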
## Django Integration

```python
from promptfw import DjangoTemplateRegistry

# Load templates from a Django ORM queryset
registry = DjangoTemplateRegistry.from_queryset(
    PromptTemplateModel.objects.filter(is_active=True)
)
stack = PromptStack(registry=registry)
```
## Context Sub-Layers (v0.5.0)

For hierarchical document structures (book → chapter → scene):

```python
from promptfw import TemplateLayer, USER_LAYERS

# Available sub-layers (rendered in this order within the user prompt):
# CONTEXT → CONTEXT_PROJECT → CONTEXT_CHAPTER → CONTEXT_SCENE → TASK
stack.render_stack(
    ["sys", "context.project", "context.chapter", "context.scene", "task"],
    context={...},
)
# render_stack auto-sorts ids into canonical layer order, regardless of list order
```
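The auto-sorting can be pictured as a stable sort by layer rank (the `canonical_order` helper and rank table below are illustrative assumptions mirroring the layer list above):

```python
LAYER_ORDER = ["SYSTEM", "FORMAT", "CONTEXT", "CONTEXT_PROJECT",
               "CONTEXT_CHAPTER", "CONTEXT_SCENE", "TASK", "FEW_SHOT"]
RANK = {layer: i for i, layer in enumerate(LAYER_ORDER)}

def canonical_order(parts: list) -> list:
    """Stable-sort (layer, text) pairs into the canonical layer order."""
    return sorted(parts, key=lambda part: RANK[part[0]])

ordered = canonical_order([
    ("TASK", "Write the scene."),
    ("CONTEXT_SCENE", "The forest at night."),
    ("SYSTEM", "You are an author."),
])
# → SYSTEM first, then CONTEXT_SCENE, then TASK
```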
## Hot Reload (dev mode)

```shell
pip install promptfw[hotreload]
```

```python
stack = PromptStack.from_directory("templates/")
stack.enable_hot_reload()  # watches the template directory for YAML changes
```
## Optional Dependencies

| Extra | Package | Feature |
|---|---|---|
| `tiktoken` | tiktoken≥0.6 | Accurate token counting |
| `hotreload` | watchdog≥4.0 | File-system hot reload |
| `django` | django≥4.2 | ORM queryset adapter |
| `all` | all of the above | Everything |