
promptfw — Prompt Template Framework

5-layer Jinja2 template engine for LLM applications.

License: MIT

Installation

pip install promptfw
# With accurate token counting:
pip install promptfw[tiktoken]
# All optional dependencies:
pip install promptfw[all]

Quick Start

from promptfw import PromptStack, PromptTemplate, TemplateLayer

stack = PromptStack()
stack.register(PromptTemplate(
    id="story.task.write",
    layer=TemplateLayer.TASK,
    template="Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    variables=["genre", "topic", "words"],
))

rendered = stack.render("story.task.write", {
    "genre": "fantasy",
    "topic": "a dragon who learns to code",
    "words": 500,
})
# rendered.system  →  system prompt (SYSTEM + FORMAT layers)
# rendered.user    →  user prompt   (CONTEXT* + TASK layers)

5-Layer Stack

SYSTEM           → Role & base behaviour    (stable, cacheable)
FORMAT           → Format-specific rules    (stable, cacheable)
CONTEXT          → Generic runtime context  (dynamic)
CONTEXT_PROJECT  → Project-level context    (dynamic, v0.5.0)
CONTEXT_CHAPTER  → Chapter-level context    (dynamic, v0.5.0)
CONTEXT_SCENE    → Scene-level context      (dynamic, v0.5.0)
TASK             → Concrete task            (dynamic)
FEW_SHOT         → Examples                 (appended last)

rendered = stack.render_stack(
    ["system.base", "format.roman", "context.project", "context.scene", "task.write_scene"],
    context={
        "role": "professional author",
        "project": "The Iron Throne",
        "current_scene": "The forest at night",
        "characters": "Alice, Bob",
    }
)

Load Templates from YAML

# templates/story/task/write_scene.yaml
id: story.task.write_scene
layer: task
template: |
  Write scene {{ scene_id }}: {{ scene_description }}
  Characters: {{ characters }}

stack = PromptStack.from_directory("templates/")
rendered = stack.render("story.task.write_scene", context)

LiteLLM / OpenAI Direct Output

# render_to_messages() returns [{"role": ..., "content": ...}, ...] directly
messages = stack.render_to_messages(
    ["system.base", "few_shot.examples", "task.write"],
    context={...},
)
# Pass directly to litellm.completion() or openai.chat.completions.create()
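The message-list shape described above can be sketched with plain dicts. Note that `build_messages` below is a hypothetical helper for illustration, not a promptfw API:

```python
# Illustrative sketch of the [{"role": ..., "content": ...}] shape that
# render_to_messages() is described as returning; not promptfw internals.
def build_messages(system: str, user: str) -> list[dict]:
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages("You are a professional author.",
                          "Write a short story about a dragon.")
```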

Format-Aware Filtering

# for_format() returns a new stack with only matching templates.
# Templates with format_type=None are always included (format-agnostic).
stack = get_planning_stack()
messages = stack.for_format("academic").render_to_messages(
    ["planning.system", "planning.task.premise"],
    context={...},
)
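The filtering rule described above (match the requested format, always keep format-agnostic templates) can be sketched in plain Python. This is an illustration of the rule, not promptfw's implementation:

```python
# Sketch: keep templates whose format_type matches the requested format,
# plus format-agnostic ones (format_type=None). Illustrative only.
def filter_by_format(templates: list[dict], fmt: str) -> list[dict]:
    return [t for t in templates if t.get("format_type") in (None, fmt)]

templates = [
    {"id": "planning.system", "format_type": None},
    {"id": "format.academic", "format_type": "academic"},
    {"id": "format.roman", "format_type": "roman"},
]
filter_by_format(templates, "academic")
# keeps planning.system (agnostic) and format.academic
```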

Fallback Chains

# render_with_fallback() tries patterns in order; first match wins.
result = stack.render_with_fallback(
    [
        "writing.task.write_chapter.roman",
        "writing.task.write_chapter",
        "writing.task.default",
    ],
    context={...},
)

# get_or_fallback() on the registry level:
template = registry.get_or_fallback([
    "chapter_writer_v2",
    "chapter_writer_v1",
    "chapter_writer_default",
])
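First-match-wins resolution amounts to a simple loop over candidate ids. A minimal sketch, assuming a dict-like registry (illustrative, not promptfw's code):

```python
# Sketch of first-match-wins fallback resolution over a dict-like registry.
def get_or_fallback_sketch(registry: dict, ids: list[str]):
    for template_id in ids:
        if template_id in registry:
            return registry[template_id]
    raise KeyError(f"No template found for any of: {ids}")

registry = {"chapter_writer_v1": "v1 template",
            "chapter_writer_default": "default template"}
get_or_fallback_sketch(registry, ["chapter_writer_v2", "chapter_writer_v1"])
# → "v1 template" (v2 is missing, so the next candidate wins)
```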

Wildcard Lookup & Version Pinning

# Wildcard
template = stack.registry.get("roman.*.scene_generation")

# Version-pinned
rendered = stack.render_stack(
    ["system.base@1.0.0", "format.roman", "task.write@2.1.0"],
    context,
)

Markdown Response Parsing

from promptfw import extract_field, extract_json, extract_json_list, extract_json_strict

# LLMs often respond with Markdown key:value text instead of JSON
text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
extract_field(text, "Premise")              # → "A blacksmith discovers magic."
extract_field(text, "Themes")               # → "Identity, Power"
extract_field(text, "Missing", default="")  # → ""

# JSON extraction from fenced blocks or raw text
data = extract_json(llm_response)           # dict | None
items = extract_json_list(llm_response)     # list
data = extract_json_strict(llm_response)    # dict or raises LLMResponseError

Token Estimation

# tokens_estimate is auto-calculated at construction time when tiktoken is installed
from promptfw import PromptTemplate, TemplateLayer

tmpl = PromptTemplate(
    id="task.write",
    layer=TemplateLayer.TASK,
    template="Write a short story about {{ topic }}.",
)
print(tmpl.tokens_estimate)  # e.g. 9 (via tiktoken cl100k_base)
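When tiktoken is not installed, a common fallback is the rough "one token ≈ 4 characters" heuristic. The sketch below shows that heuristic only; it is an approximation, not what tiktoken reports:

```python
# Sketch: the common chars/4 token heuristic used when no tokenizer is available.
def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

rough_token_estimate("Write a short story about {{ topic }}.")  # → 9
```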

Django Integration

from promptfw import DjangoTemplateRegistry

# Load from a Django ORM queryset (PromptTemplateModel is your app's own model)
registry = DjangoTemplateRegistry.from_queryset(
    PromptTemplateModel.objects.filter(is_active=True)
)
stack = PromptStack(registry=registry)

Context Sub-Layers (v0.5.0)

For hierarchical document structures (book → chapter → scene):

from promptfw import TemplateLayer, USER_LAYERS

# Available sub-layers (rendered in this order within user prompt):
# CONTEXT → CONTEXT_PROJECT → CONTEXT_CHAPTER → CONTEXT_SCENE → TASK

stack.render_stack(
    ["sys", "context.project", "context.chapter", "context.scene", "task"],
    context={...},
)
# render_stack auto-sorts to canonical order regardless of list order
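The auto-sort described above amounts to a stable sort by canonical layer index. A minimal sketch, using `(layer, id)` pairs for illustration (promptfw sorts by each template's layer attribute, not by id):

```python
# Sketch: stable-sort templates into the canonical layer order.
# Illustrative ordering logic, not promptfw internals.
CANONICAL = ["SYSTEM", "FORMAT", "CONTEXT", "CONTEXT_PROJECT",
             "CONTEXT_CHAPTER", "CONTEXT_SCENE", "TASK", "FEW_SHOT"]

def sort_by_layer(templates: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """templates: (layer_name, template_id) pairs; ties keep input order."""
    return sorted(templates, key=lambda t: CANONICAL.index(t[0]))

sort_by_layer([("TASK", "task"), ("CONTEXT_SCENE", "context.scene"),
               ("SYSTEM", "sys")])
# → SYSTEM first, then CONTEXT_SCENE, then TASK
```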

Hot Reload (dev mode)

pip install promptfw[hotreload]

stack = PromptStack.from_directory("templates/")
stack.enable_hot_reload()  # watches for YAML changes

Optional Dependencies

Extra      Package           Feature
tiktoken   tiktoken ≥0.6     Accurate token counting
hotreload  watchdog ≥4.0     File-system hot reload
django     django ≥4.2       ORM queryset adapter
all        all of the above  Everything

