Skip to main content

PIT - Prompt Information Tracker: Semantic version control for LLM prompts

Project description

๐Ÿ•ณ๏ธ PIT (Prompt Information Tracker)

PIT Banner

Git for Prompts โ€” Version control that actually understands your LLM prompts

Python 3.11+ License: MIT Tests PyPI


๐Ÿš€ What is PIT?

PIT is a semantic version control system designed specifically for managing LLM prompts. Unlike traditional Git workflows, PIT understands the meaning of your promptsโ€”tracking not just what changed, but why it matters for your AI's behavior.

Stop treating prompts like plain text files. Start versioning them like the critical assets they are.


โœจ Features

๐Ÿ“ Core Version Control

Feature Description
Semantic Versioning Track prompt changes with meaningful version numbers
Automatic Variable Detection Extracts Jinja2 template variables ({{variable}}) on commit
Rich Diff Visualization Compare versions with syntax highlighting
Tagging System Mark important versions (production, stable, experimental)
Instant Checkout Switch between prompt versions instantly
Query Language Search: success_rate >= 0.9, content contains 'be concise'

๐Ÿค Collaboration & Sharing

Feature Description
Shareable Patches Export/import prompt changes as .promptpatch files
Prompt Bundles Package and share prompts with dependencies
Time-Travel Replay Test same input across all versions
Git-Style Hooks Validation and automation (pre-commit, post-checkout)
External Dependencies Depend on prompts from GitHub, local paths, or URLs

๐Ÿ“Š Advanced Analytics

Feature Description
A/B Testing Statistically significant comparisons with scipy-powered t-tests
Performance Tracking Monitor tokens, latency, success rates, costs per version
Regression Testing Automated test suites to catch prompt degradations
Analytics Dashboard Rich terminal charts and HTML reports
Binary Search (Bisect) Find which version broke behavior
Worktrees Multiple prompt contexts without switching
Stash Save WIP with full context

๐Ÿ”’ Security & Quality

Feature Description
Security Scanner OWASP LLM Top 10 compliance checking
Prompt Injection Detection Catch malicious input patterns
PII/API Key Detection Prevent data leakage
Auto-Optimizer AI-powered prompt improvement suggestions
Semantic Merge Categorize changes and detect conflicts

๐Ÿ“ธ Screenshots

๐ŸŽฏ Interactive Menu

PIT Interactive Menu

๐Ÿ“ Version Control

PIT Log Command

๐Ÿ” Rich Diff Visualization

PIT Diff Command

๐Ÿ“Š Analytics Dashboard

PIT Stats Command


๐Ÿš€ Quick Start

Installation

pip install prompt-pit

Or with optional LLM provider support:

# With Anthropic Claude support
pip install prompt-pit[anthropic]

# With OpenAI support  
pip install prompt-pit[openai]

# With everything
pip install prompt-pit[all]

Initialize a Project

# Create a new prompt repository
mkdir my-prompts
cd my-prompts
pit init

Add Your First Prompt

# Add a prompt file
pit add system-prompt.md --name "customer-support" \
  --description "AI assistant for customer support"

Version Control

# Commit a new version
pit commit customer-support --message "Added empathy guidelines"

# View version history
pit log customer-support

# Compare versions
pit diff customer-support --v1 1 --v2 2

# Checkout a specific version
pit checkout customer-support --version 1

# Tag a version
pit tag customer-support --version 2 --tag production

๐Ÿ“š Command Reference

Core Commands

pit init                    # Initialize a new PIT project
pit add <file>              # Add a prompt to track
pit list                    # List all tracked prompts
pit commit <prompt>         # Save a new version
pit log <prompt>            # View version history
pit diff <prompt>           # Compare versions
pit checkout <prompt>       # Switch to a version
pit tag <prompt>            # Manage tags

Advanced Features

# Patches
pit patch create <prompt> v1 v2 --output fix.patch
pit patch apply fix.patch --to <prompt>

# Hooks
pit hooks install pre-commit
pit hooks run pre-commit --prompt <prompt>

# Bundles
pit bundle create my-bundle --prompts "p1,p2" --with-history
pit bundle install my-bundle.bundle

# Replay
pit replay run <prompt> --input "Hello" --versions 1-5
pit replay compare <prompt> --input "Hello" --versions 1,3,5

# Dependencies
pit deps add shared github org/repo/prompts --version v1.0
pit deps install

# Worktrees
pit worktree add ./feature-wt <prompt>@v2

# Stash
pit stash save "WIP: improving tone"
pit stash pop 0

# Bisect
pit bisect start --prompt <prompt> --failing-input "bad query"
pit bisect good v1
pit bisect bad v5

# Testing
pit test create-suite --name "support-tests"
pit test add-case support-tests --name "greeting"
pit test run <prompt> --suite support-tests

# A/B Testing
pit ab-test <prompt> --variant-a 2 --variant-b 3 --sample-size 100

# Security
pit scan <prompt>
pit validate <prompt> --fail-on high

# Optimization
pit optimize analyze <prompt>
pit optimize improve <prompt> --strategy detailed

# Analytics
pit stats show <prompt>
pit stats report <prompt> --output report.html

๐Ÿ“ Project Structure

my-prompts/
โ”œโ”€โ”€ .pit/                   # PIT database and config
โ”‚   โ”œโ”€โ”€ config.yaml         # Project configuration
โ”‚   โ””โ”€โ”€ pit.db              # SQLite database
โ”œโ”€โ”€ prompts/                # Your prompt files
โ”‚   โ””โ”€โ”€ customer-support.md
โ””โ”€โ”€ .pit.yaml              # Optional: global config

โš™๏ธ Configuration

Create .pit.yaml in your project root:

# LLM Provider Configuration
llm:
  provider: anthropic      # anthropic, openai, ollama
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-3-sonnet-20240229

# Default settings
defaults:
  auto_commit: false
  require_tests: true

# Security policies
security:
  max_severity: medium     # fail on medium+ severity issues

# Performance thresholds
performance:
  max_latency_ms: 2000
  min_success_rate: 0.95

๐Ÿ“Š PIT vs Git

Feature Git PIT
Line-by-line diff โœ… โœ…
Semantic understanding โŒ โœ…
Variable tracking โŒ โœ…
Performance metrics โŒ โœ…
A/B testing โŒ โœ…
Security scanning โŒ โœ…
Prompt optimization โŒ โœ…
Shareable patches โŒ โœ…
Git-style hooks โŒ โœ…
Query language โŒ โœ…
Time-travel replay โŒ โœ…
Bundle packaging โŒ โœ…
External dependencies โŒ โœ…
LLM framework integration โŒ โœ…

๐Ÿงช Testing

# Run all tests
pytest

# With coverage
pytest --cov=pit

# Run specific test file
pytest tests/test_core/test_security.py -v

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.


๐Ÿ“ License

PIT is released under the MIT License. See LICENSE for details.


๐Ÿ™ Acknowledgments

  • Built with Typer for CLI magic
  • Powered by Rich for beautiful terminal output
  • Inspired by the need for better prompt management in production LLM systems

๐Ÿ“ฌ Support


Made with โค๏ธ for the LLM community

Where prompts go to evolve ๐ŸŒฑ

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

prompt_pit-0.1.0.tar.gz (235.8 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

prompt_pit-0.1.0-py3-none-any.whl (115.3 kB view details)

Uploaded Python 3

File details

Details for the file prompt_pit-0.1.0.tar.gz.

File metadata

  • Download URL: prompt_pit-0.1.0.tar.gz
  • Upload date:
  • Size: 235.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for prompt_pit-0.1.0.tar.gz
Algorithm Hash digest
SHA256 13a84b0437871a9cb92f2213c80f02412e78cc229609a01be5abfd074e4cef99
MD5 f378f5185b22e47563e5afe95596764c
BLAKE2b-256 1e1dbb5131096571f894fd12f35854acd37898b1bcaa6147cc049364bf22b4e4

See more details on using hashes here.

File details

Details for the file prompt_pit-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: prompt_pit-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 115.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for prompt_pit-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 23fe91e78fac368f6053be3678930cd6827e0fe25b96f1588b79112e62356eec
MD5 ad771bc6c2aea6f59559449de3a38751
BLAKE2b-256 be10f3cc9db25c5581a9bd967f1d04aa38211628e70095f6df7d5dbd031b6f3e

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page