promptstrings
Strict prompt-template composition with provenance tracking and FastAPI-style dependency injection. Pure standard library, zero runtime dependencies.
Why
Prompt construction code tends to drift into ad-hoc string formatting that
silently swallows missing variables, hides where each fragment came from, and
makes it hard to know which parameters were actually consumed. promptstrings
gives you:
- Strict rendering: missing placeholders raise; unused parameters raise too (enabled via `strict=True`, the default for `@promptstring`).
- Provenance: each rendered message carries a `PromptSourceProvenance` describing where its template came from (id, version, hash, provider).
- Dependency injection: declare prompt parameters with `Annotated[T, PromptDepends(...)]` or `Annotated[T, AwaitPromptDepends(...)]` and resolve them from a `PromptContext` at render time.
- Two render shapes: a single string, or a list of `PromptMessage` objects for chat-style APIs.
Install
pip install promptstrings
Requires Python 3.14+.
Quickstart
```python
import asyncio

from promptstrings import promptstring, PromptContext

@promptstring
def greet(name: str) -> None:
    """Hello, {name}. Welcome to promptstrings."""

async def main() -> None:
    text = await greet.render(PromptContext(values={"name": "Ada"}))
    print(text)  # → "Hello, Ada. Welcome to promptstrings."

asyncio.run(main())
```
The docstring is the template. Placeholders use the minimal {identifier}
grammar — no format specs, no conversions, no surprises.
Strictness
By default @promptstring is strict: every resolved parameter must appear in
the template, and every placeholder must be resolved. This prevents the
"silently dropped variable" class of prompt bugs.
```python
@promptstring
def example(name: str, unused: int = 42) -> None:
    """Hello {name}."""

await example.render(PromptContext(values={"name": "Ada", "unused": 1}))
# raises PromptStrictnessError: 'unused' was resolved but not consumed
```
Pass strict=False to opt out.
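A short sketch of the opt-out, using the decorator-with-arguments form that also appears in the provenance example below:

```python
@promptstring(strict=False)
def lenient(name: str, unused: int = 42) -> None:
    """Hello {name}."""

# With strictness disabled, the extra "unused" value no longer raises.
await lenient.render(PromptContext(values={"name": "Ada", "unused": 1}))
```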
Dependency injection
Declare resolver dependencies using typing.Annotated:
```python
from typing import Annotated

from promptstrings import promptstring, PromptDepends, AwaitPromptDepends, PromptContext

def current_user(ctx: PromptContext) -> str:
    return ctx.require("user_name")

async def load_profile(ctx: PromptContext) -> str:
    # fetch_profile is your own async loader, defined elsewhere
    return await fetch_profile(ctx.require("user_id"))

@promptstring
def hello(
    user: Annotated[str, PromptDepends(current_user)],
    profile: Annotated[str, AwaitPromptDepends(load_profile)],
) -> None:
    """Hello, {user}. {profile}"""
```
Multiple AwaitPromptDepends resolvers run concurrently. If one raises, the rest are cancelled before the exception propagates — resolvers can use try/finally for cleanup.
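A sketch of a cancellation-safe resolver; `open_session`, `fetch_history`, and `close` are hypothetical stand-ins for your own I/O, not part of promptstrings:

```python
async def load_history(ctx: PromptContext) -> str:
    session = await open_session(ctx.require("user_id"))  # hypothetical helper
    try:
        return await session.fetch_history()
    finally:
        # Runs on success, on error, and when a sibling resolver's
        # failure cancels this one.
        await session.close()

# Used like any other awaitable dependency:
#   history: Annotated[str, AwaitPromptDepends(load_history)]
```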
Integrations
dishka
pip install promptstrings[dishka]
Use DishkaContext to pass a dishka AsyncContainer, and From(Type) as an Annotated marker to resolve from it:
```python
from typing import Annotated

from promptstrings import promptstring
from promptstrings.integrations.dishka import DishkaContext, From

@promptstring
def greet(user: Annotated[User, From(User)]) -> None:
    """Hello, {user.name}!"""

# User and my_container come from your own dishka setup
ctx = DishkaContext(container=my_container)
result = await greet.render(ctx)
```
pydantic
pip install promptstrings[pydantic]
PydanticPromptContext.from_model() populates context values from a Pydantic v2 model:
```python
from pydantic import BaseModel

from promptstrings.integrations.pydantic import PydanticPromptContext

class Request(BaseModel):
    user: str
    topic: str

# my_prompt is any prompt decorated with @promptstring
ctx = PydanticPromptContext.from_model(Request(user="Ada", topic="AI"))
result = await my_prompt.render(ctx)
```
Pass dump_mode='json' to serialize datetimes and other types to JSON-compatible values.
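For example (a sketch; it assumes `dump_mode` is passed as a keyword to `from_model`):

```python
from datetime import datetime

from pydantic import BaseModel

from promptstrings.integrations.pydantic import PydanticPromptContext

class Event(BaseModel):
    name: str
    when: datetime

ctx = PydanticPromptContext.from_model(
    Event(name="launch", when=datetime(2026, 1, 1)),
    dump_mode="json",  # "when" becomes the ISO 8601 string "2026-01-01T00:00:00"
)
```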
Generator form
For multi-message prompts (system + user, or alternating turns), use
@promptstring_generator. Yield Role(...) to switch role, yield strings to
append, yield PromptMessage(...) to emit a fully-formed message.
```python
from promptstrings import promptstring_generator, Role

@promptstring_generator
def conversation(topic: str):
    yield Role("system")
    yield f"You are an expert on {topic}."
    yield Role("user")
    yield f"Tell me about {topic}."
```
Join semantics: render() joins multiple messages with "\n\n" (double
newline). Within a single message, consecutive string yields are joined with
"\n". Use render_messages() to get individual PromptMessage objects and
join them yourself.
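Roughly, for the `conversation` prompt above (a sketch; it assumes `topic` is resolved from the context values like any other parameter):

```python
from promptstrings import PromptContext

ctx = PromptContext(values={"topic": "Python"})

messages = await conversation.render_messages(ctx)
# [PromptMessage(role="system", content="You are an expert on Python."),
#  PromptMessage(role="user", content="Tell me about Python.")]

text = await conversation.render(ctx)
# "You are an expert on Python.\n\nTell me about Python."
```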
Type annotations
Use the Promptstring Protocol to annotate prompt objects in function
signatures — it is stable across 1.x and does not expose internal classes:
```python
from promptstrings import Promptstring, PromptContext

async def call_llm(prompt: Promptstring, ctx: PromptContext) -> str:
    messages = await prompt.render_messages(ctx)
    ...
```
Dynamic templates (t-strings)
For prompts built at runtime — for example, from a function argument or
database-loaded string — return a Python 3.14 t-string (t"...") annotated
-> Template:
```python
from string.templatelib import Template

from promptstrings import promptstring, PromptContext

@promptstring
def greet(name: str) -> Template:
    return t"Hello, {name}."

text = await greet.render(PromptContext({"name": "Ada"}))
```
The t-string path is injection-safe: Python evaluates all expressions before the function returns; the framework never re-parses the resulting string.
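To illustrate, a value that merely looks like a placeholder stays literal (a sketch using the `greet` prompt above):

```python
user_supplied = "Ada {system_secret}"
text = await greet.render(PromptContext({"name": user_supplied}))
# → "Hello, Ada {system_secret}."  The braces are not re-interpreted.
```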
For externally loaded template strings (database, config), use
parse_trusted_template:
```python
from string.templatelib import Template

from promptstrings import promptstring, parse_trusted_template

template_from_db = "You are an expert on {topic}."  # trusted, not user-supplied

@promptstring
def system(topic: str) -> Template:
    return parse_trusted_template(template_from_db)
```
Security: only pass trusted strings to `parse_trusted_template`. User-controlled input containing `{param_name}` syntax will be substituted.
Provenance
Attach provenance metadata to rendered messages by returning a PromptSource
with a PromptSourceProvenance. The content field of PromptSource is a
literal string — no placeholder substitution occurs. For dynamic content
with provenance, use @promptstring_generator and yield PromptMessage
objects directly:
```python
from promptstrings import promptstring_generator, PromptMessage, PromptSourceProvenance

prov = PromptSourceProvenance(source_id="system-v2", version="2026-04-27")

@promptstring_generator
def system_prompt(topic: str):
    yield PromptMessage(
        role="system",
        content=f"You are an expert on {topic}.",
        source=prov,
    )
```
For a static template with provenance, use PromptSource with literal content:
```python
from promptstrings import promptstring, PromptSource, PromptSourceProvenance

@promptstring(strict=False)
def static_prompt() -> PromptSource:
    return PromptSource(
        content="You are a helpful assistant.",
        provenance=PromptSourceProvenance(source_id="assistant-v1"),
    )
```
Observers
The Promptstrings class is a configuration carrier that attaches a shared Observer
to multiple prompt functions. Observers receive RenderStartEvent,
RenderEndEvent, and RenderErrorEvent objects for every render call — useful
for logging, metrics, and tracing.
```python
from promptstrings import (
    Promptstrings,
    Observer,
    Role,
    RenderStartEvent,
    RenderEndEvent,
    RenderErrorEvent,
)

class LogObserver:
    # Receives every render event via on_event(); satisfies the Observer interface.
    def on_event(self, event: RenderStartEvent | RenderEndEvent | RenderErrorEvent) -> None:
        print(f"[{type(event).__name__}] {event.prompt_name}")

ps = Promptstrings(observer=LogObserver())

@ps.promptstring
def greet(name: str) -> None:
    """Hello, {name}."""

@ps.promptstring_generator
def chat(topic: str):
    yield Role("system")
    yield f"You are an expert on {topic}."
```
Stability
Stable. The library follows SemVer from 1.0 — breaking changes require a major
version bump. The full API contract is documented in
design/decisions/0001.
Design and architecture
The functional vision and 1.0 contract are documented under
design/. Start here:
- `design/VISION.md` — single source of truth for why the library exists: the problems it solves and how its developer experience answers them. Updated in place, versioned via `vision_version`.
- `design/decisions/0001-api-and-dx-baseline-for-1.0.md` — the locked SemVer contract (13 promises, 12 non-promises, lifecycle map, DX rubric R1–R10). The canonical contract.
- `design/decisions/0002-integration-seams-for-1.0.md` — extension surface for 1.0: the `Promptstrings` configuration carrier, `ObserverProtocol`, `PromptContext.extras`, and the per-vendor adapter model. The canonical contract for integration.
- Historical proposals, preserved for the red-team trace and rationale: `design/proposals/api-1.0-baseline.md`, `design/proposals/api-1.0-integrations.md`.
- `design/glossary.md` — canonical vocabulary used across all design docs.
- `design/README.md` — directory map and conventions for adding decisions, proposals, and DX deep-dives.
License
MIT.