promptfw — Prompt Template Framework

5-layer Jinja2 template engine for LLM applications.

License: MIT

Installation

pip install promptfw
# With accurate token counting (quotes keep zsh from expanding the brackets):
pip install "promptfw[tiktoken]"
# All optional dependencies:
pip install "promptfw[all]"

Quick Start

from promptfw import PromptStack, PromptTemplate, TemplateLayer

stack = PromptStack()
stack.register(PromptTemplate(
    id="story.task.write",
    layer=TemplateLayer.TASK,
    template="Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    variables=["genre", "topic", "words"],
))

rendered = stack.render("story.task.write", {
    "genre": "fantasy",
    "topic": "a dragon who learns to code",
    "words": 500,
})
# rendered.system  →  system prompt (SYSTEM + FORMAT layers)
# rendered.user    →  user prompt   (CONTEXT* + TASK layers)
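
Template strings are ordinary Jinja2. As a rough illustration of what variable substitution does (a stdlib sketch for this README, not the promptfw implementation, which uses full Jinja2):

```python
import re

def render(template: str, variables: dict) -> str:
    # Substitute {{ name }} placeholders with values from `variables`.
    # Illustration only: real Jinja2 also supports filters, loops, etc.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables[m.group(1)]), template)

out = render(
    "Write a {{ genre }} story about {{ topic }} in {{ words }} words.",
    {"genre": "fantasy", "topic": "a dragon who learns to code", "words": 500},
)
# → "Write a fantasy story about a dragon who learns to code in 500 words."
```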

5-Layer Stack

The five core layers, plus the CONTEXT sub-layers added in v0.5.0:

SYSTEM           → Role & base behaviour       (stable, cacheable)
FORMAT           → Format-specific rules       (stable, cacheable)
CONTEXT          → Generic runtime context     (dynamic)
CONTEXT_PROJECT  → Project-level context       (dynamic, v0.5.0)
CONTEXT_CHAPTER  → Chapter-level context       (dynamic, v0.5.0)
CONTEXT_SCENE    → Scene-level context         (dynamic, v0.5.0)
TASK             → Concrete task               (dynamic)
FEW_SHOT         → Examples                    (appended last)

rendered = stack.render_stack(
    ["system.base", "format.roman", "context.project", "context.scene", "task.write_scene"],
    context={
        "role": "professional author",
        "project": "The Iron Throne",
        "current_scene": "The forest at night",
        "characters": "Alice, Bob",
    }
)
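
The split shown earlier (SYSTEM + FORMAT → system prompt, CONTEXT* + TASK → user prompt) can be sketched like this. Everything here is hypothetical scaffolding, including the `"\n\n"` join separator; only the layer names and their ordering come from the table above:

```python
SYSTEM_LAYERS = ["SYSTEM", "FORMAT"]              # stable, cacheable
USER_LAYERS = ["CONTEXT", "CONTEXT_PROJECT", "CONTEXT_CHAPTER",
               "CONTEXT_SCENE", "TASK", "FEW_SHOT"]

def assemble(rendered: dict) -> tuple:
    # Join rendered layer texts in canonical order; the "\n\n"
    # separator is an assumption for this sketch.
    system = "\n\n".join(rendered[n] for n in SYSTEM_LAYERS if n in rendered)
    user = "\n\n".join(rendered[n] for n in USER_LAYERS if n in rendered)
    return system, user

system, user = assemble({
    "SYSTEM": "You are a professional author.",
    "FORMAT": "Write in plain prose.",
    "TASK": "Write the forest scene.",
})
```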

Load Templates from YAML

# templates/story/task/write_scene.yaml
id: story.task.write_scene
layer: task
template: |
  Write scene {{ scene_id }}: {{ scene_description }}
  Characters: {{ characters }}

stack = PromptStack.from_directory("templates/")
rendered = stack.render("story.task.write_scene", context)

LiteLLM / OpenAI Direct Output

# render_to_messages() returns [{"role": ..., "content": ...}, ...] directly
messages = stack.render_to_messages(
    ["system.base", "few_shot.examples", "task.write"],
    context={...},
)
# Pass directly to litellm.completion() or openai.chat.completions.create()

Format-Aware Filtering

# for_format() returns a new stack with only matching templates.
# Templates with format_type=None are always included (format-agnostic).
stack = get_planning_stack()
messages = stack.for_format("academic").render_to_messages(
    ["planning.system", "planning.task.premise"],
    context={...},
)
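
The filtering rule described in the comments can be sketched with a plain list comprehension. The `Tmpl` dataclass and ids below are hypothetical, not the promptfw types:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tmpl:
    id: str
    format_type: Optional[str] = None  # None = format-agnostic

def for_format(templates, fmt):
    # Keep format-agnostic templates plus exact format matches.
    return [t for t in templates if t.format_type in (None, fmt)]

kept = for_format(
    [Tmpl("planning.system"),
     Tmpl("style.academic", "academic"),
     Tmpl("style.roman", "roman")],
    "academic",
)
# kept ids → ["planning.system", "style.academic"]
```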

Fallback Chains

# render_with_fallback() tries patterns in order; first match wins.
result = stack.render_with_fallback(
    [
        "writing.task.write_chapter.roman",
        "writing.task.write_chapter",
        "writing.task.default",
    ],
    context={...},
)

# get_or_fallback() on the registry level:
template = registry.get_or_fallback([
    "chapter_writer_v2",
    "chapter_writer_v1",
    "chapter_writer_default",
])
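
First-match-wins resolution is simple to state precisely. A minimal sketch against a dict-backed registry (the real registry type is promptfw's, not shown here):

```python
def get_or_fallback(registry: dict, candidates: list):
    # Return the template for the first candidate id that exists.
    for tid in candidates:
        if tid in registry:
            return registry[tid]
    raise KeyError(f"none of {candidates} is registered")

found = get_or_fallback(
    {"writing.task.default": "Write the chapter."},
    ["writing.task.write_chapter.roman",
     "writing.task.write_chapter",
     "writing.task.default"],
)
# → "Write the chapter."
```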

Wildcard Lookup & Version Pinning

# Wildcard
template = stack.registry.get("roman.*.scene_generation")

# Version-pinned
rendered = stack.render_stack(
    ["system.base@1.0.0", "format.roman", "task.write@2.1.0"],
    context,
)
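
Both lookups have obvious stdlib analogues: glob matching via `fnmatch`, and version pinning by splitting on `@`. A sketch under those assumptions (including the sorted-order tie-break, which is a guess, not documented behaviour):

```python
import fnmatch

def wildcard_get(registry: dict, pattern: str):
    # First id matching the glob pattern, in sorted order, wins.
    for tid in sorted(registry):
        if fnmatch.fnmatch(tid, pattern):
            return registry[tid]
    return None

def split_pin(ref: str):
    # "system.base@1.0.0" → ("system.base", "1.0.0"); no pin → None.
    tid, _, ver = ref.partition("@")
    return tid, ver or None
```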

Markdown Response Parsing

from promptfw import extract_field, extract_json, extract_json_list, extract_json_strict

# LLMs often respond with Markdown key:value text instead of JSON
text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
extract_field(text, "Premise")              # → "A blacksmith discovers magic."
extract_field(text, "Themes")               # → "Identity, Power"
extract_field(text, "Missing", default="")  # → ""

# JSON extraction from fenced blocks or raw text
data = extract_json(llm_response)           # dict | None
items = extract_json_list(llm_response)     # list
data = extract_json_strict(llm_response)    # dict or raises LLMResponseError
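
Conceptually this is regex-plus-`json.loads`. A self-contained sketch of the two basic helpers (not promptfw's implementation; the regexes are assumptions that happen to cover the examples above):

```python
import json
import re

def extract_field(text, field, default=None):
    # Match "**Field:** value" or "Field: value", up to end of line.
    pattern = rf"(?:\*\*)?{re.escape(field)}:?(?:\*\*)?:?\s*(.+)"
    m = re.search(pattern, text)
    return m.group(1).strip() if m else default

def extract_json(text):
    # Prefer a ```json fenced block; fall back to parsing the raw text.
    m = re.search(r"`{3}(?:json)?\s*(\{.*?\})\s*`{3}", text, re.DOTALL)
    blob = m.group(1) if m else text
    try:
        return json.loads(blob)
    except ValueError:
        return None

text = "**Premise:** A blacksmith discovers magic.\n**Themes:** Identity, Power"
```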

Token Estimation

# tokens_estimate is auto-calculated at construction time when tiktoken is installed
from promptfw import PromptTemplate, TemplateLayer

tmpl = PromptTemplate(
    id="task.write",
    layer=TemplateLayer.TASK,
    template="Write a short story about {{ topic }}.",
)
print(tmpl.tokens_estimate)  # e.g. 9 (via tiktoken cl100k_base)
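
When tiktoken is not installed, a common fallback heuristic is roughly four characters per token for English text. This sketch is that heuristic only, not promptfw's code, and it is far less accurate than a real tokenizer:

```python
def rough_tokens(text: str) -> int:
    # Crude estimate: English prose averages ~4 characters per token.
    return max(1, len(text) // 4)

n = rough_tokens("Write a short story about {{ topic }}.")
# → 9
```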

Django Integration

from promptfw import DjangoTemplateRegistry

# Load from Django ORM queryset
registry = DjangoTemplateRegistry.from_queryset(
    PromptTemplateModel.objects.filter(is_active=True)
)
stack = PromptStack(registry=registry)

Context Sub-Layers (v0.5.0)

For hierarchical document structures (book → chapter → scene):

from promptfw import TemplateLayer, USER_LAYERS

# Available sub-layers (rendered in this order within user prompt):
# CONTEXT → CONTEXT_PROJECT → CONTEXT_CHAPTER → CONTEXT_SCENE → TASK

stack.render_stack(
    ["sys", "context.project", "context.chapter", "context.scene", "task"],
    context={...},
)
# render_stack auto-sorts to canonical order regardless of list order
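
The auto-sort described above amounts to a stable sort keyed by each template's layer. A sketch with a hypothetical `layer_of` mapping (the real stack looks layers up from its registry):

```python
LAYER_ORDER = ["system", "format", "context", "context_project",
               "context_chapter", "context_scene", "task", "few_shot"]

def canonical_sort(ids, layer_of):
    # Stable sort by canonical layer position, preserving the
    # relative order of ids within the same layer.
    return sorted(ids, key=lambda tid: LAYER_ORDER.index(layer_of[tid]))

layer_of = {"sys": "system", "context.project": "context_project",
            "context.chapter": "context_chapter",
            "context.scene": "context_scene", "task": "task"}
ordered = canonical_sort(
    ["task", "context.scene", "sys", "context.project", "context.chapter"],
    layer_of,
)
# → ["sys", "context.project", "context.chapter", "context.scene", "task"]
```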

Hot Reload (dev mode)

pip install "promptfw[hotreload]"

stack = PromptStack.from_directory("templates/")
stack.enable_hot_reload()  # watches for YAML changes
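
promptfw's hot reload uses watchdog for file-system events. The same idea can be sketched with stdlib mtime polling; the class below is an illustration of change detection, not the promptfw mechanism:

```python
import os
import tempfile

class MTimeWatcher:
    """Poll-based sketch: report files whose mtime moved since last check."""

    def __init__(self, paths):
        self._mtimes = {p: os.path.getmtime(p) for p in paths}

    def changed(self):
        out = []
        for p, old in self._mtimes.items():
            now = os.path.getmtime(p)
            if now != old:
                self._mtimes[p] = now
                out.append(p)
        return out

# Demo: create a file, then force an mtime change.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".yaml")
tmp.close()
watcher = MTimeWatcher([tmp.name])
first = watcher.changed()          # nothing changed yet
os.utime(tmp.name, (0, 0))         # bump the mtime
second = watcher.changed()         # reports the file
os.unlink(tmp.name)
```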

Optional Dependencies

Extra      Package        Feature
tiktoken   tiktoken≥0.6   Accurate token counting
hotreload  watchdog≥4.0   File-system hot reload
django     django≥4.2     ORM queryset adapter
all        all of the above   Everything

Download files

Source distribution
  iil_promptfw-0.5.4.tar.gz (52.5 kB; uploaded via twine/6.1.0 on CPython/3.13.7; Trusted Publishing: no)
  SHA256       b1862a9b2c5562430105b83dc9a700b80ab2c876f0734e02c406af3f0af89262
  MD5          30c7353dbda46b93326381009781b2ac
  BLAKE2b-256  bdfa951d0de80ea5f3e90621a151a925847ccb84eec15cae5ec9848318baf27b

Built distribution
  iil_promptfw-0.5.4-py3-none-any.whl (28.3 kB; uploaded via twine/6.1.0 on CPython/3.13.7; Trusted Publishing: no)
  SHA256       da846d1f1164cdb4ed4111744dbf4852c2f26a06f3202c14bc8f2fd28a0a37c3
  MD5          0a0667d388cf387db7104bc0960976ab
  BLAKE2b-256  2f72012ef672682cab922439556b66140a1b53e95fd9c6148685d9617e00465e
