# unversion

Simple prompt versioning for AI applications. Store prompts in JSON, track usage, no hardcoding.
## Why unversion?
- **No hardcoded prompts**: Store all prompts in a single JSON file
- **Easy editing**: Update prompts without touching code
- **Usage tracking**: Know which prompts are used and how often
- **Zero dependencies**: Core library has no required dependencies
- **CLI included**: Manage prompts from the command line
## Installation

```shell
pip install unversion
```

With observability (Langfuse integration):

```shell
pip install "unversion[observer]"
```
## Quick Start

### 1. Create your prompts file

Create `prompts/bundled.json`:
```json
{
  "version": "1.0",
  "prompts": {
    "greeting": {
      "text": "Hello {name}! Welcome to {app_name}.",
      "variables": ["name", "app_name"],
      "source": "manual",
      "notes": "Simple greeting prompt"
    },
    "analysis.sentiment": {
      "text": "Analyze the sentiment of the following text:\n\n{text}\n\nRespond with: positive, negative, or neutral.",
      "variables": ["text"],
      "source": "manual",
      "notes": "Basic sentiment analysis prompt"
    }
  }
}
```
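Conceptually, a prompts file is plain JSON plus Python's `str.format`; the following is a minimal stand-alone sketch of that idea using only the standard library (it is *not* unversion's actual loader, which you would normally reach through `init_store`/`get_prompt`):

```python
import json

# Load a prompts document of the shape shown above
doc = json.loads("""
{
  "version": "1.0",
  "prompts": {
    "greeting": {
      "text": "Hello {name}! Welcome to {app_name}.",
      "variables": ["name", "app_name"]
    }
  }
}
""")

# Look up a prompt by key and substitute its variables
template = doc["prompts"]["greeting"]["text"]
print(template.format(name="Alice", app_name="MyApp"))
# Hello Alice! Welcome to MyApp.
```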
### 2. Use prompts in your code
```python
from unversion import init_store, get_prompt, list_prompts

# Initialize with path to your prompts file
init_store("prompts/bundled.json")

# Get a prompt (returns empty string if not found)
prompt = get_prompt("greeting", name="Alice", app_name="MyApp")
# "Hello Alice! Welcome to MyApp."

# Get the raw template without formatting
raw = get_prompt("greeting")
# "Hello {name}! Welcome to {app_name}."

# List all prompt keys
keys = list_prompts()
# ["greeting", "analysis.sentiment"]
```
### 3. Track usage (optional)

```python
from unversion import log_usage, get_stats

# Log when a prompt is used
log_usage("greeting", stage="chat", model="gpt-4")

# Get usage statistics
stats = get_stats("greeting")
print(f"Used {stats['total_usage']} times")
```
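At its core, usage tracking amounts to counting log entries per prompt key. A toy in-memory sketch of that concept (illustrative only, not unversion's internals; the real observer also records metadata such as stage and model):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UsageTracker:
    """Toy in-memory usage counter, keyed by prompt key."""
    counts: dict = field(default_factory=lambda: defaultdict(int))

    def log_usage(self, key: str, **metadata) -> None:
        # Metadata is accepted but ignored in this sketch
        self.counts[key] += 1

    def get_stats(self, key: str) -> dict:
        return {"total_usage": self.counts[key]}

tracker = UsageTracker()
tracker.log_usage("greeting", stage="chat", model="gpt-4")
tracker.log_usage("greeting", stage="chat", model="gpt-4")
print(tracker.get_stats("greeting"))  # {'total_usage': 2}
```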
## CLI Usage

```shell
# List all prompts
unversion list

# List with filtering
unversion list --filter analysis

# View a prompt
unversion view greeting

# Search prompts
unversion search "sentiment"

# Show statistics
unversion stats

# Validate prompts file
unversion validate prompts/bundled.json
```
## Prompt File Format

```json
{
  "version": "1.0",
  "prompts": {
    "prompt_key": {
      "text": "The prompt text with {variables}",
      "variables": ["variables"],
      "source": "where it came from",
      "notes": "documentation"
    }
  }
}
```
### Fields

| Field | Required | Description |
|---|---|---|
| `text` | Yes | The prompt template text |
| `variables` | No | List of variable names used in the template |
| `source` | No | Origin of the prompt (manual, imported, etc.) |
| `notes` | No | Documentation about the prompt |
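The required/optional split above can be checked mechanically. Here is a hedged sketch of such a check (an assumed behavior, not the actual `unversion validate` implementation), using the standard library's `string.Formatter` to extract the `{placeholders}` present in the template text:

```python
from string import Formatter

def validate_prompt(entry: dict) -> list[str]:
    """Return a list of problems found in one prompt entry (illustrative only)."""
    if "text" not in entry:
        return ["missing required field 'text'"]
    problems = []
    # Placeholder names actually present in the template text
    placeholders = {
        name for _, name, _, _ in Formatter().parse(entry["text"]) if name
    }
    declared = set(entry.get("variables", []))
    if declared != placeholders:
        problems.append(
            f"declared variables {sorted(declared)} do not match "
            f"placeholders {sorted(placeholders)}"
        )
    return problems

# A declared variable with no matching placeholder is flagged
entry = {"text": "Hello {name}!", "variables": ["name", "app_name"]}
print(validate_prompt(entry))
```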
## Naming Convention

Use dot notation for hierarchical organization: `category.subcategory.name`

Examples:

- `chat.greeting`
- `analysis.sentiment`
- `generation.video.intro`
- `safety.content_filter`
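Because keys are flat strings, dot notation makes category grouping a one-liner on the first path segment; a small self-contained sketch:

```python
from collections import defaultdict

keys = [
    "chat.greeting",
    "analysis.sentiment",
    "generation.video.intro",
    "safety.content_filter",
]

# Group prompt keys by their top-level category
by_category: dict[str, list[str]] = defaultdict(list)
for key in keys:
    by_category[key.split(".", 1)[0]].append(key)

print(dict(by_category))
# {'chat': ['chat.greeting'], 'analysis': ['analysis.sentiment'],
#  'generation': ['generation.video.intro'], 'safety': ['safety.content_filter']}
```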
## API Reference

### Store Functions

```python
from unversion import (
    init_store,      # Initialize with prompts file path
    get_store,       # Get the store instance
    get_prompt,      # Get and optionally format a prompt
    list_prompts,    # List all prompt keys
    reload_prompts,  # Reload prompts from file
    has_prompt,      # Check if a prompt exists
)
```
### Observer Functions

```python
from unversion import (
    log_usage,        # Log prompt usage
    get_stats,        # Get usage stats for a prompt
    get_recent_logs,  # Get recent usage logs
    get_top_prompts,  # Get most used prompts
)
```
### Types

```python
from unversion import Prompt, PromptStore, UsageLog
```
## Examples

### Multi-file Organization

```python
# config.py
from unversion import init_store

init_store("prompts/bundled.json")
```

```python
# chat.py
from unversion import get_prompt

def greet_user(name: str) -> str:
    prompt = get_prompt("chat.greeting", name=name)
    return call_llm(prompt)
```
### With Observability

```python
import time

from unversion import get_prompt, log_usage

def analyze_sentiment(text: str) -> str:
    prompt = get_prompt("analysis.sentiment", text=text)
    start = time.time()
    result = call_llm(prompt)
    latency = (time.time() - start) * 1000
    log_usage(
        "analysis.sentiment",
        stage="analysis",
        model="gpt-4",
        latency_ms=latency,
        success=True,
    )
    return result
```
### Category Filtering

```python
from unversion import list_prompts, get_prompt

# Get all analysis prompt keys
analysis_keys = [k for k in list_prompts() if k.startswith("analysis.")]

# Load all safety prompts
safety_prompts = {
    key: get_prompt(key)
    for key in list_prompts()
    if key.startswith("safety.")
}
```
## Integration with Langfuse

If you have Langfuse configured, usage logs can be sent automatically:

```shell
export LANGFUSE_PUBLIC_KEY=pk-...
export LANGFUSE_SECRET_KEY=sk-...
pip install "unversion[observer]"
```

```python
from unversion import log_usage

# Logs will be sent to Langfuse automatically
log_usage("my_prompt", stage="generation", model="claude-3")
```
## Development

```shell
# Clone the repo
git clone https://github.com/unreelai/unversion
cd unversion

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests
ruff check src tests
```
## License

MIT License - see LICENSE for details.