# iil-promptfw — Prompt Template Framework

A 5-layer Jinja2 template engine for LLM applications.
## Installation

```bash
pip install iil-promptfw

# With accurate token counting:
pip install "iil-promptfw[tiktoken]"

# All optional dependencies:
pip install "iil-promptfw[all]"
```
## Quick Start

```python
from promptfw import PromptStack, PromptTemplate, TemplateLayer

stack = PromptStack()
stack.register(PromptTemplate(
    id="story.task.write",
    layer=TemplateLayer.TASK,
    template="Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    variables=["genre", "topic", "words"],
))

rendered = stack.render("story.task.write", {
    "genre": "fantasy",
    "topic": "a dragon who learns to code",
    "words": 500,
})

# rendered.system → system prompt (SYSTEM + FORMAT layers)
# rendered.user   → user prompt (CONTEXT* + TASK layers)
```
## 5-Layer Stack

```
SYSTEM          → Role & base behaviour    (stable, cacheable)
FORMAT          → Format-specific rules    (stable, cacheable)
CONTEXT         → Generic runtime context  (dynamic)
CONTEXT_PROJECT → Project-level context    (dynamic, v0.5.0)
CONTEXT_CHAPTER → Chapter-level context    (dynamic, v0.5.0)
CONTEXT_SCENE   → Scene-level context      (dynamic, v0.5.0)
TASK            → Concrete task            (dynamic)
FEW_SHOT        → Examples                 (appended last)
```

The three `CONTEXT_*` entries are sub-layers of CONTEXT (see Context Sub-Layers below), so the stack has five logical layers.
```python
rendered = stack.render_stack(
    ["system.base", "format.roman", "context.project", "context.scene", "task.write_scene"],
    context={
        "role": "professional author",
        "project": "The Iron Throne",
        "current_scene": "The forest at night",
        "characters": "Alice, Bob",
    },
)
```
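The stable/dynamic split above can be pictured with a small sketch. This is illustrative only, not promptfw's internals; the `assemble` helper and `STABLE_LAYERS` set are assumptions used to show how stable layers end up in the system prompt and dynamic layers in the user prompt:

```python
# Illustrative sketch (not promptfw internals): stable layers feed the
# system prompt, dynamic layers feed the user prompt.
STABLE_LAYERS = {"SYSTEM", "FORMAT"}  # cacheable, rarely change

def assemble(rendered_layers):
    """rendered_layers: list of (layer_name, rendered_text) in stack order."""
    system_parts = [t for layer, t in rendered_layers if layer in STABLE_LAYERS]
    user_parts = [t for layer, t in rendered_layers if layer not in STABLE_LAYERS]
    return "\n\n".join(system_parts), "\n\n".join(user_parts)

system, user = assemble([
    ("SYSTEM", "You are a professional author."),
    ("FORMAT", "Write in novel format."),
    ("CONTEXT_PROJECT", "Project: The Iron Throne"),
    ("TASK", "Write the forest scene."),
])
```

Keeping the stable layers byte-identical across calls is what makes them prompt-cache friendly.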
## Load Templates from YAML

```yaml
# templates/story/task/write_scene.yaml
id: story.task.write_scene
layer: task
template: |
  Write scene {{ scene_id }}: {{ scene_description }}
  Characters: {{ characters }}
```

```python
stack = PromptStack.from_directory("templates/")
rendered = stack.render("story.task.write_scene", context)
```
## LiteLLM / OpenAI Direct Output

```python
# render_to_messages() returns [{"role": ..., "content": ...}, ...] directly
messages = stack.render_to_messages(
    ["system.base", "few_shot.examples", "task.write"],
    context={...},
)
# Pass directly to litellm.completion() or openai.chat.completions.create()
```
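The message shape itself is standard chat-completions format. A minimal sketch of producing it from the two rendered prompt halves (the `to_messages` helper is an assumption for illustration; only the output shape comes from the docs):

```python
# Illustrative: build a chat-completions messages list from the
# system/user split that the stack renders.
def to_messages(system_prompt, user_prompt):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = to_messages("You are a helpful author.", "Write a short scene.")
# msgs can be passed as the messages= argument of a chat-completions call
```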
## Format-Aware Filtering

```python
# for_format() returns a new stack with only matching templates.
# Templates with format_type=None are always included (format-agnostic).
stack = get_planning_stack()
messages = stack.for_format("academic").render_to_messages(
    ["planning.system", "planning.task.premise"],
    context={...},
)
```
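The filtering rule can be sketched in a few lines. This is an assumed restatement of the documented semantics, not promptfw source; the dict shape is illustrative:

```python
# Illustrative: format_type=None templates always pass the filter,
# everything else must match the requested format exactly.
def for_format(templates, fmt):
    return [t for t in templates if t["format_type"] in (None, fmt)]

templates = [
    {"id": "planning.system", "format_type": None},        # format-agnostic
    {"id": "planning.task.premise", "format_type": "academic"},
    {"id": "planning.task.outline", "format_type": "roman"},
]
academic = for_format(templates, "academic")
# keeps "planning.system" and "planning.task.premise"
```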
## Fallback Chains

```python
# render_with_fallback() tries patterns in order; first match wins.
result = stack.render_with_fallback(
    [
        "writing.task.write_chapter.roman",
        "writing.task.write_chapter",
        "writing.task.default",
    ],
    context={...},
)

# get_or_fallback() on the registry level:
template = registry.get_or_fallback([
    "chapter_writer_v2",
    "chapter_writer_v1",
    "chapter_writer_default",
])
```
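First-match-wins resolution is simple enough to sketch directly. The helper below is illustrative, not the library's implementation, and uses a plain dict as a stand-in registry:

```python
# Minimal sketch of first-match-wins fallback resolution.
def get_or_fallback(registry, candidates):
    for template_id in candidates:
        if template_id in registry:
            return registry[template_id]
    raise KeyError(f"No template found among {candidates!r}")

registry = {"chapter_writer_v1": "...v1...", "chapter_writer_default": "...default..."}
tmpl = get_or_fallback(
    registry,
    ["chapter_writer_v2", "chapter_writer_v1", "chapter_writer_default"],
)
# v2 is absent, so the chain falls through to v1
```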
## Wildcard Lookup & Version Pinning

```python
# Wildcard
template = stack.registry.get("roman.*.scene_generation")

# Version-pinned
rendered = stack.render_stack(
    ["system.base@1.0.0", "format.roman", "task.write@2.1.0"],
    context,
)
```
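One way to picture both lookup styles, assuming glob-like wildcard matching and `id@version` pinning (the `resolve` function and its exact semantics are assumptions for illustration, not promptfw's resolver):

```python
import fnmatch

# Illustrative resolver: "@version" pins an exact template version,
# otherwise "*" segments match glob-style.
def resolve(ids, pattern):
    if "@" in pattern:  # version pin: exact id@version lookup
        base, version = pattern.split("@", 1)
        return [i for i in ids if i == f"{base}@{version}"]
    return [i for i in ids if fnmatch.fnmatch(i, pattern)]

ids = ["roman.planning.scene_generation", "roman.writing.scene_generation",
       "system.base@1.0.0", "system.base@1.1.0"]
wildcard_hits = resolve(ids, "roman.*.scene_generation")  # both roman ids
pinned_hits = resolve(ids, "system.base@1.0.0")           # exactly one version
```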
## Markdown Response Parsing (v0.5.0+)

Extract named fields from LLM Markdown responses. Handles `**Field:**`,
`Field:`, and `### Field` patterns — including the common LLM style where
the colon appears inside the bold markers (`**Field:** value`).

```python
from promptfw import (
    extract_field,
    extract_json,
    extract_json_list,
    extract_json_strict,
)

text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
extract_field(text, "Premise")              # → "A blacksmith discovers magic."
extract_field(text, "Themes")               # → "Identity, Power"
extract_field(text, "Missing", default="")  # → ""

# Plain colon style
text2 = "Title: The Lost City\nAuthor: Jane Doe"
extract_field(text2, "Title")   # → "The Lost City"
extract_field(text2, "Author")  # → "Jane Doe"

# JSON extraction from fenced blocks or raw text
data = extract_json(llm_response)         # dict | None
items = extract_json_list(llm_response)   # list
data = extract_json_strict(llm_response)  # dict or raises LLMResponseError
```

v0.5.5: Fixed a continuation-text slicing bug where the header line itself could leak into multi-field responses. All `**Field:**` and `Field:` patterns now extract correctly regardless of position in the text.
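A rough sketch of how the bold and plain-colon styles can be matched with a single regex. This is illustrative only, not promptfw's actual implementation, and the `extract_field_sketch` name is made up for the example:

```python
import re

# Illustrative: match "**Field:** value" (colon inside or outside the
# bold markers) or a line-initial "Field: value".
def extract_field_sketch(text, name, default=None):
    pattern = rf"(?:\*\*{re.escape(name)}:?\*\*:?|^{re.escape(name)}:)\s*(.+)"
    match = re.search(pattern, text, flags=re.MULTILINE)
    return match.group(1).strip() if match else default

sample = "**Premise:** A blacksmith discovers magic.\nTitle: The Lost City"
extract_field_sketch(sample, "Premise")            # "A blacksmith discovers magic."
extract_field_sketch(sample, "Title")              # "The Lost City"
extract_field_sketch(sample, "Missing", default="")  # ""
```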
## Token Estimation

```python
# tokens_estimate is auto-calculated at construction time when tiktoken is installed
from promptfw import PromptTemplate, TemplateLayer

tmpl = PromptTemplate(
    id="task.write",
    layer=TemplateLayer.TASK,
    template="Write a short story about {{ topic }}.",
)
print(tmpl.tokens_estimate)  # e.g. 9 (via tiktoken cl100k_base)
```
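When exact counting is unavailable, a chars/4 heuristic is a common rough approximation for English text. The sketch below is illustrative; what promptfw actually does without tiktoken is not documented here:

```python
# Illustrative fallback: ~4 characters per token is a rule of thumb
# for English prose; real counts come from tiktoken when installed.
def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

estimate = rough_token_estimate("Write a short story about {{ topic }}.")
```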
## Django Integration

```python
from promptfw import DjangoTemplateRegistry

# Load from a Django ORM queryset
registry = DjangoTemplateRegistry.from_queryset(
    PromptTemplateModel.objects.filter(is_active=True)
)
stack = PromptStack(registry=registry)
```
## Context Sub-Layers (v0.5.0)

For hierarchical document structures (book → chapter → scene):

```python
from promptfw import TemplateLayer, USER_LAYERS

# Available sub-layers (rendered in this order within the user prompt):
# CONTEXT → CONTEXT_PROJECT → CONTEXT_CHAPTER → CONTEXT_SCENE → TASK
stack.render_stack(
    ["sys", "context.project", "context.chapter", "context.scene", "task"],
    context={...},
)
# render_stack auto-sorts to canonical order regardless of list order
```
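The auto-sort behaviour follows naturally if layers are an ordered enum: sorting by layer restores canonical order no matter how the ids were listed. A sketch with an assumed `Layer` enum (illustrative, not promptfw's internals):

```python
from enum import IntEnum

# Illustrative layer enum mirroring the documented canonical order.
class Layer(IntEnum):
    SYSTEM = 0
    FORMAT = 1
    CONTEXT = 2
    CONTEXT_PROJECT = 3
    CONTEXT_CHAPTER = 4
    CONTEXT_SCENE = 5
    TASK = 6
    FEW_SHOT = 7

entries = [("task", Layer.TASK), ("context.scene", Layer.CONTEXT_SCENE),
           ("sys", Layer.SYSTEM), ("context.project", Layer.CONTEXT_PROJECT)]
# Sorting by layer value restores canonical rendering order.
ordered = [name for name, layer in sorted(entries, key=lambda e: e[1])]
```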
## Hot Reload (dev mode)

```bash
pip install "iil-promptfw[hotreload]"
```

```python
stack = PromptStack.from_directory("templates/")
stack.enable_hot_reload()  # watches for YAML changes
```
## Optional Dependencies

| Extra | Package | Feature |
|---|---|---|
| `tiktoken` | `tiktoken>=0.6` | Accurate token counting |
| `hotreload` | `watchdog>=4.0` | File-system hot reload |
| `django` | `django>=4.2` | ORM queryset adapter |
| `all` | all of the above | Everything |
## Changelog

See `CHANGELOG.md`.