# promptlock

Prompt versioning, output validation, and run logging for LLM pipelines.
Most LLM engineering pain comes from three places: prompts that drift without anyone noticing, outputs that break downstream logic, and no record of what ran when. promptlock fixes all three: no cloud account, no dashboard, no framework lock-in.
```bash
pip install promptlock
```

## Why promptlock
| Problem | What promptlock gives you |
|---|---|
| Prompts change silently and break things | Version-controlled prompt registry backed by a local YAML file |
| LLM output shape is unpredictable | Output contracts that validate structure, length, and patterns |
| No record of what ran in production | SQLite run logger with filtering and summary stats |
## Install

```bash
pip install promptlock
# or with uv
uv add promptlock
```

Requires Python 3.11+. The only external dependency is `pyyaml`.
## Quickstart

```python
from promptlock import PromptRegistry, OutputContract, RunLogger
from promptlock.exceptions import ContractViolation

# 1. Save and load versioned prompts
registry = PromptRegistry("prompts.yaml")
registry.save("summarizer", "v1.0", "Summarize this document: {doc}\nLanguage: {lang}")
template = registry.load("summarizer", version="latest")
rendered = template.render(doc="AI is transforming healthcare.", lang="English")

# 2. Validate the LLM output
contract = OutputContract(
    required_fields=["summary", "keywords"],
    max_length=500,
    min_length=20,
)
llm_output = {"summary": "AI aids diagnostics.", "keywords": ["ai", "health"]}
try:
    contract.validate(llm_output)
    validated = True
    error = None
except ContractViolation as e:
    validated = False
    error = str(e)

# 3. Log the run
logger = RunLogger("runs.db")
logger.log(
    prompt_name="summarizer",
    version="v1.0",
    model="gpt-4o",
    input=rendered,
    output=llm_output,
    validated=validated,
    error=error,
)
```
## Modules

### PromptRegistry

Store and retrieve prompt versions from a local YAML file.

```python
from promptlock import PromptRegistry

registry = PromptRegistry("prompts.yaml")

# save a prompt version
registry.save("classifier", "v1.0", "Classify the following text: {text}")
registry.save("classifier", "v1.1", "Classify this as positive/negative/neutral: {text}")

# load a specific version
template = registry.load("classifier", version="v1.0")

# load the most recent version
template = registry.load("classifier", version="latest")

# list all prompts and versions
registry.list_prompts()
# {'classifier': ['v1.0', 'v1.1']}

# delete a version or all versions
registry.delete("classifier", version="v1.0")
registry.delete("classifier")
### PromptTemplate

Render prompt strings with named placeholders.

```python
from promptlock import PromptTemplate

template = PromptTemplate("Translate this to {lang}: {text}")
print(template.variables)
# ['lang', 'text']

rendered = template.render(lang="French", text="Hello world")
# 'Translate this to French: Hello world'

# missing variables raise TemplateRenderError
template.render(lang="French")
# TemplateRenderError: Missing required template variables: ['text']
```
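The `variables` listing behaves like standard `{}`-placeholder parsing, which the standard library can do on its own; a self-contained sketch of how such discovery can work (not promptlock's actual implementation):

```python
from string import Formatter

def template_variables(template: str) -> list[str]:
    # Collect named fields from {placeholder} markers, preserving first-seen order.
    seen = []
    for _, field, _, _ in Formatter().parse(template):
        if field and field not in seen:
            seen.append(field)
    return seen

print(template_variables("Translate this to {lang}: {text}"))  # → ['lang', 'text']
```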
### OutputContract

Define and validate the expected shape of an LLM output.

```python
from promptlock import OutputContract
from promptlock.exceptions import ContractViolation

# validate a JSON output
contract = OutputContract(
    required_fields=["summary", "keywords"],
    max_length=500,
    min_length=20,
)
contract.validate({"summary": "Short summary.", "keywords": ["ai"]})  # passes

# validate a plain string
sentiment_contract = OutputContract(
    allowed_values=["positive", "negative", "neutral"]
)
sentiment_contract.validate("positive")  # passes
sentiment_contract.validate("unknown")  # raises ContractViolation

# validate with regex
contract = OutputContract(regex_patterns=[r"\d{4}"])
contract.validate("Report from 2025")  # passes
contract.validate("No year here")  # raises ContractViolation
```
Available rules:
| Rule | Type | Description |
|---|---|---|
| `required_fields` | `list[str]` | Keys that must exist in a JSON output |
| `max_length` | `int` | Maximum character length of the output |
| `min_length` | `int` | Minimum character length of the output |
| `regex_patterns` | `list[str]` | Patterns the output must match (all must pass) |
| `allowed_values` | `list[str]` | Output must be one of these exact strings |
### RunLogger

Log every LLM run to a local SQLite file.

```python
from promptlock import RunLogger

logger = RunLogger("runs.db")
logger.log(
    prompt_name="summarizer",
    version="v1.1",
    model="gpt-4o",
    input="Summarize this: ...",
    output={"summary": "AI is evolving.", "keywords": ["ai"]},
    validated=True,
)

# retrieve runs with filters
logger.get_runs(prompt_name="summarizer", validated="failed", limit=10)

# quick summary of pass/fail counts
logger.summary("summarizer")
# {'passed': 42, 'failed': 3, 'not_checked': 5}
```
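Since `summary()` returns plain counts, downstream reporting is ordinary dict math; for example, a small pass-rate helper over the dict shape shown above (`pass_rate` is a hypothetical helper, not part of promptlock):

```python
def pass_rate(summary: dict) -> float:
    # Fraction of validated runs that passed; runs never checked are excluded.
    checked = summary.get("passed", 0) + summary.get("failed", 0)
    return summary.get("passed", 0) / checked if checked else 0.0

print(pass_rate({"passed": 42, "failed": 3, "not_checked": 5}))  # ≈ 0.93
```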
## Exceptions

All exceptions inherit from `PromptlockError`, so you can catch broadly or specifically.

```python
from promptlock.exceptions import (
    PromptlockError,      # base exception
    ContractViolation,    # output failed validation
    PromptNotFound,       # prompt name/version not in registry
    TemplateRenderError,  # missing variable during render
)
```
## Project structure

```text
src/promptlock/
├── __init__.py      # public API
├── registry.py      # PromptRegistry
├── template.py      # PromptTemplate
├── contract.py      # OutputContract
├── logger.py        # RunLogger
└── exceptions.py    # custom exceptions
```
## Contributing

Pull requests are welcome. For major changes, please open an issue first.

```bash
git clone https://github.com/NorthCommits/Promptlock
cd Promptlock
uv sync
uv run pytest -v
```
## License
MIT