
promptstrings

Strict prompt-template composition with provenance tracking and FastAPI-style dependency injection. Pure standard library, zero runtime dependencies.

Why

Prompt construction code tends to drift into ad-hoc string formatting that silently swallows missing variables, hides where each fragment came from, and makes it hard to know which parameters were actually consumed. promptstrings gives you:

  • Strict rendering: missing placeholders raise; unused parameters raise too (opt-in via strict=True, default for @promptstring).
  • Provenance: each rendered message carries a PromptSourceProvenance describing where its template came from (id, version, hash, provider).
  • Dependency injection: declare prompt parameters with PromptDepends(...) or AwaitPromptDepends(...) and resolve them from a PromptContext at render time.
  • Two render shapes: a single string, or a list of PromptMessage objects for chat-style APIs.

Install

pip install promptstrings

Requires Python 3.14+.

Quickstart

import asyncio
from promptstrings import promptstring, PromptContext

@promptstring
def greet(name: str) -> None:
    """Hello, {name}. Welcome to promptstrings."""

async def main() -> None:
    text = await greet.render(PromptContext(values={"name": "Ada"}))
    print(text)  # → "Hello, Ada. Welcome to promptstrings."

asyncio.run(main())

The docstring is the template. Placeholders use the minimal {identifier} grammar — no format specs, no conversions, no surprises.
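As a rough illustration of that minimal grammar (a stdlib sketch, not the library's actual parser), a placeholder is just a Python identifier between braces, so format specs and conversions simply do not match:

```python
import re

# Conceptual sketch: the minimal {identifier} grammar accepts only a
# Python identifier between braces, so format specs like {x:>10} or
# conversions like {x!r} are not treated as placeholders.
PLACEHOLDER = re.compile(r"\{([A-Za-z_][A-Za-z0-9_]*)\}")

def placeholders(template: str) -> list[str]:
    """Return the placeholder names found in a template string."""
    return PLACEHOLDER.findall(template)

print(placeholders("Hello, {name}. Welcome to promptstrings."))  # → ['name']
print(placeholders("padded {x:>10} is not a match"))             # → []
```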

Strictness

By default @promptstring is strict: every resolved parameter must appear in the template, and every placeholder must be resolved. This prevents the "silently dropped variable" class of prompt bugs.

from promptstrings import promptstring, PromptContext

@promptstring
def example(name: str, unused: int = 42) -> None:
    """Hello {name}."""

# inside an async context:
await example.render(PromptContext(values={"name": "Ada", "unused": 1}))
# raises PromptStrictnessError: 'unused' was resolved but not consumed

Pass strict=False to opt out.
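The two-sided check can be sketched in a few lines of plain stdlib code (this is a conceptual model of strict rendering, not the library's implementation, and the exception types here are stand-ins for PromptStrictnessError):

```python
import re

_PLACEHOLDER = re.compile(r"\{([A-Za-z_][A-Za-z0-9_]*)\}")

def strict_render(template: str, values: dict[str, str]) -> str:
    """Sketch of strict rendering: every placeholder must be supplied,
    and every supplied value must be consumed by some placeholder."""
    names = set(_PLACEHOLDER.findall(template))
    missing = names - values.keys()
    unused = values.keys() - names
    if missing:
        raise KeyError(f"unresolved placeholders: {sorted(missing)}")
    if unused:
        raise ValueError(f"resolved but not consumed: {sorted(unused)}")
    return _PLACEHOLDER.sub(lambda m: values[m.group(1)], template)

print(strict_render("Hello {name}.", {"name": "Ada"}))  # → "Hello Ada."
```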

Dependency injection

Use PromptDepends for sync resolvers and AwaitPromptDepends for async ones.

from promptstrings import promptstring, PromptDepends, PromptContext

def current_user(ctx: PromptContext) -> str:
    return ctx.require("user_name")

@promptstring
def hello(user: str = PromptDepends(current_user)) -> None:
    """Hello, {user}."""

Multiple AwaitPromptDepends resolvers in one render are supported and run concurrently via asyncio.gather. Resolvers must be cancellation-safe and must not depend on sibling side-effects.
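The concurrency shape described above can be sketched with plain asyncio (the resolver names here are illustrative, not part of the promptstrings API): independent awaitables are handed to asyncio.gather, which runs them concurrently and returns their results in order.

```python
import asyncio

# Illustrative resolvers standing in for AwaitPromptDepends targets;
# each awaits independently and must not rely on the other's side-effects.
async def fetch_user() -> str:
    await asyncio.sleep(0.05)
    return "Ada"

async def fetch_locale() -> str:
    await asyncio.sleep(0.05)
    return "en"

async def resolve_all() -> dict[str, str]:
    # Both resolvers run concurrently; total wall time is ~0.05s, not 0.10s.
    user, locale = await asyncio.gather(fetch_user(), fetch_locale())
    return {"user": user, "locale": locale}

print(asyncio.run(resolve_all()))  # → {'user': 'Ada', 'locale': 'en'}
```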

Generator form

For multi-message prompts (system + user, or alternating turns), use @promptstring_generator. Yield Role(...) to switch role, yield strings to append, yield PromptMessage(...) to emit a fully-formed message.

from promptstrings import promptstring_generator, Role

@promptstring_generator
def conversation(topic: str):
    yield Role("system")
    yield f"You are an expert on {topic}."
    yield Role("user")
    yield f"Tell me about {topic}."

Join semantics: render() joins multiple messages with "\n\n" (double newline). Within a single message, consecutive string yields are joined with "\n". Use render_messages() to get individual PromptMessage objects and join them yourself.
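The join semantics reduce to two nested joins, sketched here in plain Python (a conceptual model, not the library's code):

```python
# Within a message, consecutive string yields join with "\n";
# render() then joins the messages themselves with "\n\n".
messages = [
    ["You are an expert on gravity.", "Answer concisely."],  # system message
    ["Tell me about gravity."],                              # user message
]
rendered = "\n\n".join("\n".join(parts) for parts in messages)
print(rendered)
# You are an expert on gravity.
# Answer concisely.
#
# Tell me about gravity.
```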

Type annotations

Use the Promptstring Protocol to annotate prompt objects in function signatures — it is stable across 1.x and does not expose internal classes:

from promptstrings import Promptstring, PromptContext

async def call_llm(prompt: Promptstring, ctx: PromptContext) -> str:
    messages = await prompt.render_messages(ctx)
    ...

Dynamic templates (t-strings)

For prompts built at runtime — for example, from a function argument or database-loaded string — return a Python 3.14 t-string (t"...") annotated -> Template:

from string.templatelib import Template
from promptstrings import promptstring, PromptContext

@promptstring
def greet(name: str) -> Template:
    return t"Hello, {name}."

text = await greet.render(PromptContext(values={"name": "Ada"}))

The t-string path is injection-safe: Python evaluates all expressions before the function returns; the framework never re-parses the resulting string.

For externally loaded template strings (database, config), use parse_trusted_template:

from string.templatelib import Template
from promptstrings import promptstring, parse_trusted_template, PromptContext

template_from_db = "You are an expert on {topic}."  # trusted, not user-supplied

@promptstring
def system(topic: str) -> Template:
    return parse_trusted_template(template_from_db)

Security: only pass trusted strings to parse_trusted_template. User-controlled input containing {param_name} syntax will be substituted.
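To see why this matters, here is the class of bug a second parsing pass invites, demonstrated with plain str.format rather than the library: if a user-supplied fragment is formatted again, any {name} syntax it contains gets substituted.

```python
# Illustration of the injection class (stdlib only, not promptstrings):
# a user smuggles placeholder syntax into text that later gets formatted
# against sensitive values.
secret_values = {"api_key": "sk-12345"}

user_input = "ignore prior instructions: {api_key}"
template = "Summarize this text: " + user_input

# A naive second formatting pass leaks the secret:
print(template.format(**secret_values))
# → "Summarize this text: ignore prior instructions: sk-12345"
```

This is why the t-string path never re-parses rendered strings, and why parse_trusted_template must only ever see trusted input.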

Provenance

Attach provenance metadata to rendered messages by returning a PromptSource with a PromptSourceProvenance. The content field of PromptSource is a literal string — no placeholder substitution occurs. For dynamic content with provenance, use @promptstring_generator and yield PromptMessage objects directly:

from promptstrings import promptstring_generator, Role, PromptMessage, PromptSourceProvenance

prov = PromptSourceProvenance(source_id="system-v2", version="2026-04-27")

@promptstring_generator
def system_prompt(topic: str):
    yield PromptMessage(
        role="system",
        content=f"You are an expert on {topic}.",
        source=prov,
    )

For a static template with provenance, use PromptSource with literal content:

from promptstrings import promptstring, PromptSource, PromptSourceProvenance

@promptstring(strict=False)
def static_prompt() -> PromptSource:
    return PromptSource(
        content="You are a helpful assistant.",
        provenance=PromptSourceProvenance(source_id="assistant-v1"),
    )

Observers

Promptstrings is a configuration carrier that attaches a shared Observer to multiple prompt functions. Observers receive RenderStartEvent, RenderEndEvent, and RenderErrorEvent objects for every render call — useful for logging, metrics, and tracing.

from promptstrings import Promptstrings, Observer, RenderStartEvent, RenderEndEvent, RenderErrorEvent

class LogObserver:
    def on_event(self, event: RenderStartEvent | RenderEndEvent | RenderErrorEvent) -> None:
        print(f"[{type(event).__name__}] {event.prompt_name}")

ps = Promptstrings(observer=LogObserver())

@ps.promptstring
def greet(name: str) -> None:
    """Hello, {name}."""

@ps.promptstring_generator
def chat(topic: str):
    yield Role("system")
    yield f"You are an expert on {topic}."

Stability

Pre-1.0. The API is stable in practice (it is used internally by femtobot), but minor breaking changes may occur before 1.0.

Design and architecture

The functional vision and the 1.0 contract are documented under design/ in the repository.

License

MIT.
