promptlock

Prompt versioning, output validation, and run logging for LLM pipelines.

Most LLM engineering pain comes from three places: prompts that drift without anyone noticing, outputs that break downstream logic, and no record of what ran when. promptlock addresses all three with no cloud account, no dashboard, and no framework lock-in.

pip install promptlock

Why promptlock

| Problem | What promptlock gives you |
|---|---|
| Prompts change silently and break things | Version-controlled prompt registry backed by a local YAML file |
| LLM output shape is unpredictable | Output contracts that validate structure, length, and patterns |
| No record of what ran in production | SQLite run logger with filtering and summary stats |

Install

pip install promptlock
# or with uv
uv add promptlock

Requires Python 3.11+. Only one external dependency: pyyaml.


Quickstart

from promptlock import PromptRegistry, OutputContract, RunLogger
from promptlock.exceptions import ContractViolation

# 1. Save and load versioned prompts
registry = PromptRegistry("prompts.yaml")
registry.save("summarizer", "v1.0", "Summarize this document: {doc}\nLanguage: {lang}")

template = registry.load("summarizer", version="latest")
rendered = template.render(doc="AI is transforming healthcare.", lang="English")

# 2. Validate the LLM output
contract = OutputContract(
    required_fields=["summary", "keywords"],
    max_length=500,
    min_length=20,
)

llm_output = {"summary": "AI aids diagnostics.", "keywords": ["ai", "health"]}

try:
    contract.validate(llm_output)
    validated = True
    error = None
except ContractViolation as e:
    validated = False
    error = str(e)

# 3. Log the run
logger = RunLogger("runs.db")
logger.log(
    prompt_name="summarizer",
    version="v1.0",
    model="gpt-4o",
    input=rendered,
    output=llm_output,
    validated=validated,
    error=error,
)

Modules

PromptRegistry

Store and retrieve prompt versions from a local YAML file.

from promptlock import PromptRegistry

registry = PromptRegistry("prompts.yaml")

# save a prompt version
registry.save("classifier", "v1.0", "Classify the following text: {text}")
registry.save("classifier", "v1.1", "Classify this as positive/negative/neutral: {text}")

# load a specific version
template = registry.load("classifier", version="v1.0")

# load the most recent version
template = registry.load("classifier", version="latest")

# list all prompts and versions
registry.list_prompts()
# {'classifier': ['v1.0', 'v1.1']}

# delete a version or all versions
registry.delete("classifier", version="v1.0")
registry.delete("classifier")

PromptTemplate

Render prompt strings with named placeholders.

from promptlock import PromptTemplate

template = PromptTemplate("Translate this to {lang}: {text}")

print(template.variables)
# ['lang', 'text']

rendered = template.render(lang="French", text="Hello world")
# 'Translate this to French: Hello world'

# missing variables raise TemplateRenderError
template.render(lang="French")
# TemplateRenderError: Missing required template variables: ['text']
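The `variables` attribute can be reproduced with the standard library's `string.Formatter`, which parses the same `{name}` placeholder syntax. A self-contained sketch of the idea (not promptlock's implementation):

```python
import string

# Extract named placeholders from a format-style template using only
# the stdlib. Formatter().parse yields (literal, field, spec, conversion)
# tuples; field is None for trailing literal text.
def template_variables(template: str) -> list[str]:
    return [field for _, field, _, _ in string.Formatter().parse(template)
            if field is not None]

template_variables("Translate this to {lang}: {text}")
# ['lang', 'text']
```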

OutputContract

Define and validate the expected shape of an LLM output.

from promptlock import OutputContract
from promptlock.exceptions import ContractViolation

# validate a JSON output
contract = OutputContract(
    required_fields=["summary", "keywords"],
    max_length=500,
    min_length=20,
)
contract.validate({"summary": "Short summary.", "keywords": ["ai"]})  # passes

# validate a plain string
sentiment_contract = OutputContract(
    allowed_values=["positive", "negative", "neutral"]
)
sentiment_contract.validate("positive")   # passes
sentiment_contract.validate("unknown")    # raises ContractViolation

# validate with regex
contract = OutputContract(regex_patterns=[r"\d{4}"])
contract.validate("Report from 2025")   # passes
contract.validate("No year here")       # raises ContractViolation

Available rules:

| Rule | Type | Description |
|---|---|---|
| `required_fields` | `list[str]` | Keys that must exist in a JSON output |
| `max_length` | `int` | Maximum character length of the output |
| `min_length` | `int` | Minimum character length of the output |
| `regex_patterns` | `list[str]` | Patterns the output must match (all must pass) |
| `allowed_values` | `list[str]` | Output must be one of these exact strings |
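To make the table concrete, here is a stdlib sketch of what each rule checks. It illustrates the semantics only (in particular, that every pattern in `regex_patterns` must match); promptlock's actual validation code is internal to OutputContract:

```python
import re

# Illustrative re-statement of the contract rules: returns a list of
# error strings instead of raising, so each rule's effect is visible.
def check(output, *, required_fields=None, max_length=None, min_length=None,
          regex_patterns=None, allowed_values=None) -> list[str]:
    errors = []
    text = output if isinstance(output, str) else str(output)
    if required_fields and isinstance(output, dict):
        errors += [f"missing field: {f}" for f in required_fields if f not in output]
    if max_length is not None and len(text) > max_length:
        errors.append("too long")
    if min_length is not None and len(text) < min_length:
        errors.append("too short")
    if regex_patterns:  # ALL patterns must match somewhere in the output
        errors += [f"pattern failed: {p}" for p in regex_patterns
                   if not re.search(p, text)]
    if allowed_values is not None and output not in allowed_values:
        errors.append("value not allowed")
    return errors

check("Report from 2025", regex_patterns=[r"\d{4}"])          # []
check("unknown", allowed_values=["positive", "negative"])      # ['value not allowed']
```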

RunLogger

Log every LLM run to a local SQLite file.

from promptlock import RunLogger

logger = RunLogger("runs.db")

logger.log(
    prompt_name="summarizer",
    version="v1.1",
    model="gpt-4o",
    input="Summarize this: ...",
    output={"summary": "AI is evolving.", "keywords": ["ai"]},
    validated=True,
)

# retrieve runs with filters
logger.get_runs(prompt_name="summarizer", validated="failed", limit=10)

# quick summary of pass/fail counts
logger.summary("summarizer")
# {'passed': 42, 'failed': 3, 'not_checked': 5}
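Under the hood the log is plain SQLite, so a pass/fail summary reduces to a simple aggregate query. The schema and column names below are illustrative assumptions for the sketch, not promptlock's actual table layout:

```python
import sqlite3

# Hypothetical "runs" table showing how a summary like the one above
# can be computed with a single aggregate query.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE runs (
    prompt_name TEXT, version TEXT, model TEXT,
    input TEXT, output TEXT, validated INTEGER)""")
con.executemany(
    "INSERT INTO runs VALUES (?, ?, ?, ?, ?, ?)",
    [("summarizer", "v1.0", "gpt-4o", "...", "...", 1),   # passed
     ("summarizer", "v1.0", "gpt-4o", "...", "...", 0)],  # failed
)
passed, failed = con.execute(
    "SELECT SUM(validated = 1), SUM(validated = 0) FROM runs "
    "WHERE prompt_name = ?", ("summarizer",)).fetchone()
# passed == 1, failed == 1
```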

Exceptions

All exceptions inherit from PromptlockError so you can catch broadly or specifically.

from promptlock.exceptions import (
    PromptlockError,       # base exception
    ContractViolation,     # output failed validation
    PromptNotFound,        # prompt name/version not in registry
    TemplateRenderError,   # missing variable during render
)
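Because every exception derives from PromptlockError, a broad except clause catches any of the specific ones. The stand-in classes below mirror the documented hierarchy purely for illustration:

```python
# Stand-in classes mirroring the documented promptlock hierarchy,
# showing why `except PromptlockError` also catches ContractViolation.
class PromptlockError(Exception): ...
class ContractViolation(PromptlockError): ...
class PromptNotFound(PromptlockError): ...
class TemplateRenderError(PromptlockError): ...

try:
    raise ContractViolation("output failed validation")
except PromptlockError as e:   # broad catch handles any promptlock error
    caught = type(e).__name__

caught  # 'ContractViolation'
```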

Project structure

src/promptlock/
├── __init__.py       # public API
├── registry.py       # PromptRegistry
├── template.py       # PromptTemplate
├── contract.py       # OutputContract
├── logger.py         # RunLogger
└── exceptions.py     # custom exceptions

Contributing

Pull requests are welcome. For major changes, please open an issue first.

git clone https://github.com/NorthCommits/Promptlock
cd Promptlock
uv sync
uv run pytest -v

License

MIT
