
promptfw — Prompt Template Framework

5-layer Jinja2 template engine for LLM applications.


Installation

pip install promptfw
# With accurate token counting:
pip install promptfw[tiktoken]
# All optional dependencies:
pip install promptfw[all]

Quick Start

from promptfw import PromptStack, PromptTemplate, TemplateLayer

stack = PromptStack()
stack.register(PromptTemplate(
    id="story.task.write",
    layer=TemplateLayer.TASK,
    template="Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    variables=["genre", "topic", "words"],
))

rendered = stack.render("story.task.write", {
    "genre": "fantasy",
    "topic": "a dragon who learns to code",
    "words": 500,
})
# rendered.system  →  system prompt (SYSTEM + FORMAT layers)
# rendered.user    →  user prompt   (CONTEXT* + TASK layers)
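
The two halves map directly onto a chat-style message list, e.g.:

messages = [
    {"role": "system", "content": rendered.system},
    {"role": "user", "content": rendered.user},
]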

5-Layer Stack

Five core layers, plus three CONTEXT sub-layers added in v0.5.0:

SYSTEM            → Role & base behaviour     (stable, cacheable)
FORMAT            → Format-specific rules     (stable, cacheable)
CONTEXT           → Generic runtime context   (dynamic)
  CONTEXT_PROJECT → Project-level context     (dynamic, v0.5.0)
  CONTEXT_CHAPTER → Chapter-level context     (dynamic, v0.5.0)
  CONTEXT_SCENE   → Scene-level context       (dynamic, v0.5.0)
TASK              → Concrete task             (dynamic)
FEW_SHOT          → Examples                  (appended last)

Render an explicit stack of template ids in one call:

rendered = stack.render_stack(
    ["system.base", "format.roman", "context.project", "context.scene", "task.write_scene"],
    context={
        "role": "professional author",
        "project": "The Iron Throne",
        "current_scene": "The forest at night",
        "characters": "Alice, Bob",
    }
)

Load Templates from YAML

# templates/story/task/write_scene.yaml
id: story.task.write_scene
layer: task
template: |
  Write scene {{ scene_id }}: {{ scene_description }}
  Characters: {{ characters }}

stack = PromptStack.from_directory("templates/")
context = {"scene_id": 1, "scene_description": "Alice meets the dragon", "characters": "Alice, Bob"}
rendered = stack.render("story.task.write_scene", context)

LiteLLM / OpenAI Direct Output

# render_to_messages() returns [{"role": ..., "content": ...}, ...] directly
messages = stack.render_to_messages(
    ["system.base", "few_shot.examples", "task.write"],
    context={...},
)
# Pass directly to litellm.completion() or openai.chat.completions.create()
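
For example, with LiteLLM (a sketch; the model name is illustrative):

import litellm

response = litellm.completion(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)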

Format-Aware Filtering

# for_format() returns a new stack with only matching templates.
# Templates with format_type=None are always included (format-agnostic).
stack = get_planning_stack()  # any helper of yours that returns a PromptStack
messages = stack.for_format("academic").render_to_messages(
    ["planning.system", "planning.task.premise"],
    context={...},
)
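
A format-specific template can then be registered with a matching format; a sketch, assuming format_type is a PromptTemplate field (inferred from the comment above):

stack.register(PromptTemplate(
    id="planning.system",
    layer=TemplateLayer.SYSTEM,
    template="You are a meticulous academic research planner.",
    format_type="academic",
))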

Fallback Chains

# render_with_fallback() tries patterns in order; first match wins.
result = stack.render_with_fallback(
    [
        "writing.task.write_chapter.roman",
        "writing.task.write_chapter",
        "writing.task.default",
    ],
    context={...},
)

# get_or_fallback() on the registry level:
template = stack.registry.get_or_fallback([
    "chapter_writer_v2",
    "chapter_writer_v1",
    "chapter_writer_default",
])

Wildcard Lookup & Version Pinning

# Wildcard
template = stack.registry.get("roman.*.scene_generation")

# Version-pinned
rendered = stack.render_stack(
    ["system.base@1.0.0", "format.roman", "task.write@2.1.0"],
    context,
)

Markdown Response Parsing

from promptfw import extract_field, extract_json, extract_json_list, extract_json_strict

# LLMs often respond with Markdown key:value text instead of JSON
text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
extract_field(text, "Premise")              # → "A blacksmith discovers magic."
extract_field(text, "Themes")               # → "Identity, Power"
extract_field(text, "Missing", default="")  # → ""

# JSON extraction from fenced blocks or raw text
data = extract_json(llm_response)           # dict | None
items = extract_json_list(llm_response)     # list
data = extract_json_strict(llm_response)    # dict or raises LLMResponseError

Token Estimation

# tokens_estimate is auto-calculated at construction time when tiktoken is installed
from promptfw import PromptTemplate, TemplateLayer

tmpl = PromptTemplate(
    id="task.write",
    layer=TemplateLayer.TASK,
    template="Write a short story about {{ topic }}.",
)
print(tmpl.tokens_estimate)  # e.g. 9 (via tiktoken cl100k_base)

Django Integration

from promptfw import DjangoTemplateRegistry

# Load from Django ORM queryset
registry = DjangoTemplateRegistry.from_queryset(
    PromptTemplateModel.objects.filter(is_active=True)
)
stack = PromptStack(registry=registry)
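
PromptTemplateModel here is your own Django model; a minimal sketch of what it might look like (field names are assumptions, not the adapter's contract):

from django.db import models

class PromptTemplateModel(models.Model):
    template_id = models.CharField(max_length=255, unique=True)
    layer = models.CharField(max_length=32)
    template = models.TextField()
    is_active = models.BooleanField(default=True)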

Context Sub-Layers (v0.5.0)

For hierarchical document structures (project → chapter → scene, e.g. a book):

from promptfw import TemplateLayer, USER_LAYERS

# Available sub-layers (rendered in this order within user prompt):
# CONTEXT → CONTEXT_PROJECT → CONTEXT_CHAPTER → CONTEXT_SCENE → TASK

stack.render_stack(
    ["sys", "context.project", "context.chapter", "context.scene", "task"],
    context={...},
)
# render_stack auto-sorts to canonical order regardless of list order
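
USER_LAYERS exposes that user-side ordering programmatically (a sketch; the exact representation is an assumption based on the order documented above):

print(USER_LAYERS)
# → CONTEXT, CONTEXT_PROJECT, CONTEXT_CHAPTER, CONTEXT_SCENE, TASK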

Hot Reload (dev mode)

pip install promptfw[hotreload]

stack = PromptStack.from_directory("templates/")
stack.enable_hot_reload()  # watches for YAML changes

Optional Dependencies

Extra      Package           Feature
tiktoken   tiktoken≥0.6      Accurate token counting
hotreload  watchdog≥4.0      File-system hot reload
django     django≥4.2        ORM queryset adapter
all        all of the above  Everything
