Comprehensive prompt management and testing framework for production LLM workflows with automated git versioning

Project description

🚀 llmhq-promptops


A comprehensive prompt management and testing framework for production LLM workflows, built for teams that need reliable, version-controlled prompt development with zero manual versioning.

✨ Key Features

  • 🔄 Automated Git Versioning - Zero manual versioning with git hooks and semantic version detection
  • 📝 Uncommitted Change Testing - Test prompts instantly with :unstaged, :working, :latest references
  • 🐍 Python SDK Integration - pip install llmhq-promptops for seamless app integration
  • 🧪 Version-Aware Testing - Test different prompt versions with comprehensive validation
  • 📊 Markdown Reports - Automatic generation of version change documentation
  • ⚙️ Git Hook Automation - Pre-commit and post-commit hooks for a seamless developer workflow

🚀 Quick Start

Installation

pip install llmhq-promptops

Initialize Your Project

# Create a new project with git hooks
promptops init repo

# Check installation
promptops --help

Create Your First Prompt

# Create a new prompt template
promptops create prompt welcome-message

# Test uncommitted changes
promptops test --prompt welcome-message:unstaged

# Check status of all prompts
promptops test status

📖 Usage Examples

Basic Prompt Resolution

from llmhq_promptops import get_prompt

# Smart default (unstaged if different, else working)
prompt = get_prompt("user-onboarding") 

# Specific version references
prompt = get_prompt("user-onboarding:v1.2.1")    # Specific version
prompt = get_prompt("user-onboarding:unstaged")  # Test uncommitted changes
prompt = get_prompt("user-onboarding:working")   # Latest committed (HEAD)
prompt = get_prompt("user-onboarding:latest")    # Alias for working

# With variables
rendered = get_prompt("user-onboarding", {"user_name": "Alice", "plan": "Pro"})
print(rendered)

Using with LLM Frameworks

from llmhq_promptops import get_prompt

# Get versioned prompt for any LLM framework
prompt_text = get_prompt(
    "user-onboarding:working", 
    {"user_name": "John", "plan": "Enterprise"}
)

# Use with OpenAI
import openai
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt_text}]
)

# Use with Anthropic
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": prompt_text}]
)

# Use with any other LLM framework
print(f"Prompt ready for LLM: {prompt_text}")

Advanced Usage

from llmhq_promptops import PromptManager

manager = PromptManager()

# Check if prompt has uncommitted changes
if manager.has_uncommitted_changes("user-onboarding"):
    # Test the latest changes
    rendered = manager.get_prompt("user-onboarding:unstaged", {"user_name": "Alice"})
else:
    # Use committed version
    rendered = manager.get_prompt("user-onboarding:working", {"user_name": "Alice"})

# Get prompt differences
diff = manager.get_prompt_diff("user-onboarding", "working", "unstaged")
print(diff)

# List all prompt statuses
statuses = manager.list_prompt_statuses()
for prompt_id, status in statuses.items():
    print(f"{prompt_id}: {status}")

🔧 CLI Commands

Initialization & Setup

# Initialize project with interactive setup
promptops init repo --interactive

# Install git hooks for automatic versioning
promptops hooks install

# Check hook status
promptops hooks status

Testing & Development

# Show status of all prompts
promptops test status

# Test specific version references
promptops test --prompt user-onboarding:unstaged
promptops test --prompt user-onboarding:working  
promptops test --prompt user-onboarding:v1.2.0

# Compare versions
promptops test diff user-onboarding --version1=working --version2=unstaged

# Test with custom variables
promptops test --prompt user-onboarding --variables '{"name": "Alice", "plan": "Pro"}'

Hook Management

# Install automated versioning hooks
promptops hooks install

# Configure hook behavior
promptops hooks configure

# Check installation status
promptops hooks status

# Remove hooks
promptops hooks uninstall

๐Ÿ“ Project Structure

.promptops/
├── prompts/          # YAML prompt templates with metadata and auto-versioning
├── configs/          # LLM and environment configurations
├── templates/        # Jinja2 template files
├── vars/             # Variable definition files
├── tests/            # Test datasets (JSON/YAML)
├── results/          # Generated test reports (markdown)
├── logs/             # LLM call logs and analytics
├── reports/          # Auto-generated version change reports
└── config.yaml       # Git hook configuration

📋 Prompt Schema

# .promptops/prompts/user-onboarding.yaml
# Version automatically managed by git hooks
metadata:
  id: user-onboarding
  version: "1.2.0"  # Auto-incremented by pre-commit hook
  description: "User onboarding welcome message"
  tags: ["onboarding", "welcome"]
  
models:
  default: gpt-4-turbo
  supported: [gpt-4-turbo, claude-3-sonnet, llama2-70b]
  
template: |
  Welcome {{ user_name }}!
  Available features:
  {% for feature in features %}
  - {{ feature }}
  {% endfor %}
  
variables:
  user_name: {type: string, required: true}
  features: {type: list, default: ["Browse", "Purchase"]}
  
tests:
  - dataset: .promptops/tests/onboarding-data.json
    metrics: {max_tokens: 150, min_relevance: 0.8}
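Conceptually, the template and variables sections map onto plain Jinja2 rendering with the declared defaults filled in. A minimal standalone sketch of that idea (plain Jinja2, not the promptops renderer itself):

```python
from jinja2 import Template

# Template body and variable defaults as declared in the schema above
TEMPLATE = Template(
    "Welcome {{ user_name }}!\n"
    "Available features:\n"
    "{% for feature in features %}- {{ feature }}\n{% endfor %}"
)
DEFAULTS = {"features": ["Browse", "Purchase"]}

def render(variables):
    # Defaults cover optional variables; required ones (user_name) must be passed
    return TEMPLATE.render(**{**DEFAULTS, **variables})

print(render({"user_name": "Alice"}))
```

Passing only user_name exercises the default feature list; supplying features overrides it.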

🔄 Automated Versioning

Semantic Version Rules

  • PATCH (1.0.0 → 1.0.1): Template content changes only
  • MINOR (1.0.0 → 1.1.0): New variables added (backward compatible)
  • MAJOR (1.0.0 → 2.0.0): Required variables removed (breaking change)
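The rules above amount to a pure function over the old and new variable sets. A hedged sketch (illustrative helper, not the library's actual implementation):

```python
def next_version(version, template_changed, old_required, new_required,
                 old_vars, new_vars):
    """Apply the semantic version rules: removed required vars -> MAJOR,
    new vars -> MINOR, template-content-only edits -> PATCH."""
    major, minor, patch = map(int, version.split("."))
    if old_required - new_required:      # required variable removed: breaking
        return f"{major + 1}.0.0"
    if new_vars - old_vars:              # variable added: backward compatible
        return f"{major}.{minor + 1}.0"
    if template_changed:                 # template content changed only
        return f"{major}.{minor}.{patch + 1}"
    return version

print(next_version("1.0.0", True, {"user_name"}, {"user_name"},
                   {"user_name"}, {"user_name"}))  # -> 1.0.1
```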

Git Hook Workflow

  1. Developer edits prompt → changes saved to working directory
  2. Test uncommitted changes → promptops test --prompt name:unstaged
  3. Git add & commit → pre-commit hook automatically:
    • Detects changed prompts
    • Analyzes changes for semantic versioning
    • Updates version numbers in YAML files
    • Re-stages updated files
  4. Commit completes → post-commit hook automatically:
    • Creates git tags for new versions
    • Runs validation tests
    • Generates audit logs
Result: Zero manual version management with instant testing capabilities.

🌟 Version References

Reference     Description                                          Use Case
prompt-name   Smart default (unstaged if different, else working)  Development
:unstaged     Uncommitted changes in working directory             Testing changes
:working      Latest committed version (HEAD)                      Production
:latest       Alias for :working                                   Production
:v1.2.3       Specific semantic version                            Reproducible builds
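Resolving a reference string reduces to splitting on the first colon and applying the smart default from the table. A sketch under that reading (names are illustrative, not the SDK's internals):

```python
def parse_reference(ref, has_uncommitted_changes=False):
    """Split 'name[:version]' and apply the smart default."""
    name, _, version = ref.partition(":")
    if not version:
        # Smart default: unstaged if the working tree differs, else working
        version = "unstaged" if has_uncommitted_changes else "working"
    if version == "latest":  # :latest is an alias for :working
        version = "working"
    return name, version

print(parse_reference("user-onboarding:v1.2.3"))  # -> ('user-onboarding', 'v1.2.3')
print(parse_reference("user-onboarding"))         # -> ('user-onboarding', 'working')
```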

๐Ÿ› ๏ธ Requirements

  • Python: 3.8+
  • Git: Required for versioning
  • YAML: For prompt template storage
  • Jinja2: For template rendering

📚 Dependencies

  • Core: Typer (CLI), Jinja2 (templating), PyYAML (parsing)
  • Git Integration: GitPython (versioning)
  • Typing: typing_extensions (Python 3.8 compatibility)

๐Ÿค Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Development Setup

# Clone the repository
git clone https://github.com/your-org/llmhq-promptops.git
cd llmhq-promptops

# Install development dependencies
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows
pip install -r requirements.txt
pip install -e .

# Run tests
python -m pytest tests/

# Test CLI commands
promptops --help

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Built with Typer for CLI functionality
  • Inspired by modern DevOps practices for infrastructure as code
  • Designed for reliable prompt management in production applications

📞 Support


Made with โค๏ธ for the LLM development community

Project details


Download files

Download the file for your platform.

Source Distribution

llmhq_promptops-0.2.0.tar.gz (37.4 kB)

Uploaded Source

Built Distribution


llmhq_promptops-0.2.0-py3-none-any.whl (39.3 kB)

Uploaded Python 3

File details

Details for the file llmhq_promptops-0.2.0.tar.gz.

File metadata

  • Download URL: llmhq_promptops-0.2.0.tar.gz
  • Upload date:
  • Size: 37.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for llmhq_promptops-0.2.0.tar.gz
Algorithm Hash digest
SHA256 66d3574e578e324d196b4e9412ccad681f814fdea724e4eb5fde8446fd821261
MD5 adb7e2ac12577839b3c3d08ce2fbac12
BLAKE2b-256 a692484c7f89be0b15d6a5286c3db1bacff28f46c6c5a2ff1ebec241503d5ebe


File details

Details for the file llmhq_promptops-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llmhq_promptops-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 946a1e108b630c9c70be6506339fb1a6949095c82c7d1889f0e4de301d89377c
MD5 62248a064febd2c900729bceca78239b
BLAKE2b-256 c63ca3115cc92978d2a5210211b6ce4c93ed4fb8ab42414af320540f06880eb6

