promptfile
LLM prompts as versioned YAML files — git-trackable, renderable, and diffable.
No servers. No SaaS. No database. Works with OpenAI, Anthropic, LiteLLM, and any string-based API.
Why promptfile?
Most teams manage prompts as hardcoded strings littered throughout their source files. When a prompt changes, there's no diff, no version history, and no review process — just a mystery regression.
promptfile treats your prompts like code: plain YAML files, committed to git, reviewed in PRs, and rendered at runtime.
| Problem | Without promptfile | With promptfile |
|---|---|---|
| Prompt lives in source code | Hard to find & change | Separate .prompt.yaml file |
| No version history | Git doesn't know what changed | Full git diff, blame, history |
| Variables scattered everywhere | f"Summarise {article}" strings | {{article}} in YAML |
| Can't review prompt changes | Buried in code diffs | Clean YAML diff in PR |
| Works only with one LLM SDK | Tied to one client | Plain dict output, any SDK |
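As a concrete illustration of the table above, here is a minimal before/after sketch (the function names are hypothetical; the prompt file is the one from the Quick Start below):

from promptfile import load

# Before: the prompt is an f-string buried in application code.
def summarise_before(article: str) -> list[dict]:
    return [{"role": "user", "content": f"Summarise this article:\n\n{article}"}]

# After: the same prompt lives in a reviewable .prompt.yaml file.
def summarise_after(article: str) -> list[dict]:
    prompt = load("prompts/summarise.prompt.yaml")
    rendered = prompt.render(tone="concise", article=article)
    return rendered.to_messages()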
Install
# Core install
pip install promptfile
# With YAML parsing (recommended — you almost always need this)
pip install "promptfile[yaml]"
# Everything
pip install "promptfile[all]"
Quick Start
1. Write a prompt file (prompts/summarise.prompt.yaml):
name: summarise
version: "1.0.0"
description: Summarise an article in a given tone.
system: You are a professional editor. Summarise text clearly and accurately.
user: "Summarise the following article in a {{tone}} tone:\n\n{{article}}"
2. Load and render it in Python:
from promptfile import load
prompt = load("prompts/summarise.prompt.yaml")
# Render variables
rendered = prompt.render(
    tone="concise",
    article="Today's top story is about AI...",
)
# Use with any LLM client
messages = rendered.to_messages()
# → [{'role': 'system', 'content': '...'}, {'role': 'user', 'content': '...'}]
3. Pass to your LLM:
# OpenAI
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=rendered.to_messages(),
)
# Anthropic
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    system=rendered.messages[0].content,
    messages=rendered.to_messages()[1:],
)
# LiteLLM
import litellm
response = litellm.completion(
    model="gpt-4o",
    messages=rendered.to_messages(),
)
That's it. Your prompt is now a versioned, reviewable, git-trackable artifact.
YAML Formats
Shorthand (quick authoring)
name: my-prompt
version: "1.0.0"
description: Optional human-readable description.
system: You are a helpful assistant.
user: "Answer this question: {{question}}"
Full messages list (multi-turn / precise control)
name: customer-support
version: "2.1.0"
description: Customer support agent for SaaS product.
messages:
  - role: system
    content: |
      You are a friendly customer support agent for Acme Corp.
      Always be empathetic and solution-focused.
  - role: user
    content: "{{customer_message}}"
With metadata (model hints, tags, defaults)
name: code-review
version: "1.3.0"
description: Reviews a code diff and suggests improvements.
system: You are an expert software engineer.
user: "Review this {{language}} code:\n\n{{code}}"
# Metadata: anything extra is stored in prompt.metadata
model: gpt-4o
temperature: 0.2
tags:
  - engineering
  - code-quality
defaults:
  language: Python
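One way to put that metadata to work is to drive the client call with it. A minimal sketch, assuming the OpenAI client from the Quick Start and that defaults lands in prompt.metadata as the comment above suggests (the merge into render variables is done by hand here, not by the library):

import openai

from promptfile import load

prompt = load("prompts/code-review.prompt.yaml")

# Merge the prompt's own defaults with caller-supplied variables by hand.
variables = {**prompt.metadata.get("defaults", {}), "code": "def f(x): return x * 2"}
rendered = prompt.render(**variables)

# Reuse the model hints stored alongside the prompt.
client = openai.OpenAI()
response = client.chat.completions.create(
    model=prompt.metadata.get("model", "gpt-4o"),
    temperature=prompt.metadata.get("temperature", 0.2),
    messages=rendered.to_messages(),
)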
All Features
load(path) — Load a single prompt file
from promptfile import load
prompt = load("prompts/summarise.prompt.yaml")
print(prompt.name) # "summarise"
print(prompt.version) # "1.0.0"
print(prompt.variables) # {"tone", "article"}
print(prompt.metadata) # {"model": "gpt-4o", ...}
load_dir(directory) — Load all prompts in a folder
from promptfile import load_dir
prompts = load_dir("prompts/", recursive=True)
# → {"summarise": Prompt(...), "code-review": Prompt(...), ...}
summarise = prompts["summarise"]
render(prompt, **kwargs) — Fill template variables
from promptfile import load, render
prompt = load("prompts/summarise.prompt.yaml")
rendered = render(prompt, tone="formal", article="The article text...")
# Or call directly on the prompt object:
rendered = prompt.render(tone="formal", article="The article text...")
Missing variables raise a clear KeyError:
KeyError: Prompt 'summarise' requires variables: ['article', 'tone']
Provided: []
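If you'd rather surface that failure yourself, catch the KeyError at the call site; a small sketch:

from promptfile import load

prompt = load("prompts/summarise.prompt.yaml")
try:
    rendered = prompt.render(tone="concise")  # "article" is missing
except KeyError as exc:
    # Report which variables were missing, then re-raise.
    print(f"Prompt render failed: {exc}")
    raise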
registry — Central prompt store
from promptfile import registry
# Load at startup
registry.load_dir("prompts/")
# Access anywhere in your app
summarise = registry.get("summarise")
rendered = summarise.render(tone="concise", article="...")
# List all registered prompts
print(registry.list())
# → ["code-review", "customer-support", "summarise"]
diff_prompts(old, new) — Human-readable prompt diff
from promptfile import load, diff_prompts
old = load("prompts/v1/summarise.prompt.yaml")
new = load("prompts/v2/summarise.prompt.yaml")
print(diff_prompts(old, new, color=True))
--- summarise v1.0.0
+++ summarise v2.0.0
@@ -3,4 +3,4 @@
- role: user
- Summarise the following article in a {{tone}} tone:
+ Summarise the following article in a {{tone}} tone. Be {{length}}.
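The same helper works for whole prompt sets; a sketch pairing load_dir with diff_prompts (the v1/v2 layout matches the project layout shown later):

from promptfile import load_dir, diff_prompts

old = load_dir("prompts/v1/")
new = load_dir("prompts/v2/")

# Print a diff for every prompt whose version number changed.
for name in sorted(old.keys() & new.keys()):
    if old[name].version != new[name].version:
        print(diff_prompts(old[name], new[name], color=True))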
validate(prompt) — Catch authoring mistakes
from promptfile import load
from promptfile._validate import validate
prompt = load("prompts/my-prompt.yaml")
result = validate(prompt)
if not result.valid:
    for error in result.errors:
        print(f"ERROR: {error}")
    for warning in result.warnings:
        print(f"WARN: {warning}")
CLI
# Show a prompt
promptfile show prompts/summarise.prompt.yaml
# Validate all prompts in a directory
promptfile validate prompts/ --recursive
# Diff two versions
promptfile diff prompts/v1/summarise.yaml prompts/v2/summarise.yaml
# List all prompts
promptfile list prompts/ --recursive
# Render a prompt with variables (for manual testing)
promptfile render prompts/summarise.prompt.yaml \
--var tone=concise \
--var article="The quick brown fox."
Git Integration
Because prompt files are plain YAML, they integrate naturally with git:
# See what changed in your prompts
git diff HEAD~1 prompts/
# Blame a specific line
git blame prompts/summarise.prompt.yaml
# Review prompt changes in a PR — it's just YAML
Recommended project layout:
my-project/
├── prompts/
│   ├── summarise.prompt.yaml
│   ├── code-review.prompt.yaml
│   ├── customer-support.prompt.yaml
│   └── v2/
│       └── summarise.prompt.yaml   ← in-progress update
├── src/
│   └── ...
└── tests/
    └── test_prompts.py
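The tests/test_prompts.py in that layout can start as a single check that every committed prompt loads and validates; a sketch:

from promptfile import load_dir
from promptfile._validate import validate

def test_all_prompts_are_valid():
    prompts = load_dir("prompts/", recursive=True)
    assert prompts, "no prompt files found"
    for name, prompt in prompts.items():
        result = validate(prompt)
        assert result.valid, f"{name}: {result.errors}"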
Testing Prompts
Use promptfile with genassert for semantic prompt testing:
import pytest
from promptfile import load
from genassert import assert_intent, assert_no_hallucination
PRODUCT_FACTS = [
    "The price is $49/month",
    "There is a 14-day free trial",
]

def test_support_prompt():
    prompt = load("prompts/customer-support.prompt.yaml")
    rendered = prompt.render(customer_message="How much does it cost?")

    # Call your LLM here...
    response = my_llm(rendered.to_messages())

    assert_intent(response, "pricing information")
    assert_no_hallucination(response, PRODUCT_FACTS)
CI Integration
# .github/workflows/prompts.yml
name: Validate Prompts
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install "promptfile[yaml]"
      - run: promptfile validate prompts/ --recursive
Framework Compatibility
promptfile outputs plain Python dicts — it works with every LLM client:
messages = rendered.to_messages()
# → [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}]
# OpenAI
openai_client.chat.completions.create(model="gpt-4o", messages=messages)
# Anthropic
anthropic_client.messages.create(
    model="claude-opus-4-6",
    system=messages[0]["content"],
    messages=messages[1:],
)
# LiteLLM
litellm.completion(model="gpt-4o", messages=messages)
# LangChain
from langchain_core.messages import SystemMessage, HumanMessage
lc_messages = [
    SystemMessage(m["content"]) if m["role"] == "system" else HumanMessage(m["content"])
    for m in messages
]
Configuration
No configuration files needed. Optional environment variables:
| Variable | Default | Description |
|---|---|---|
| PROMPTFILE_DIR | — | Default prompts directory for registry.load_dir() |
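A sketch of wiring that up at startup, assuming a no-argument registry.load_dir() falls back to PROMPTFILE_DIR as the table describes (the fallback path is illustrative):

import os

from promptfile import registry

# In production this would come from the environment, e.g.
#   export PROMPTFILE_DIR=/app/prompts
os.environ.setdefault("PROMPTFILE_DIR", "prompts/")

registry.load_dir()  # picks up PROMPTFILE_DIR when no path is given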
License
MIT © promptfile contributors
Related Projects
- genassert — pytest-native semantic testing for LLM apps
- PyYAML — YAML parsing
- LiteLLM — unified LLM client
promptfile is the missing piece between your prompts and your version control.
Stop managing prompts as strings. Start managing them as files.