Prompt Unifier
A Python CLI tool for managing AI prompt templates and coding rules with YAML frontmatter, enabling version control, validation, and deployment workflows.
What is Prompt Unifier?
Managing AI prompts across a team can be chaotic, with templates scattered across local files, wikis, and chat threads. This leads to a lack of version control, standardization, and a single source of truth.
Prompt Unifier solves this by providing a centralized, Git-based workflow to sync, validate, and deploy prompt and rule files to your favorite AI coding assistants.
Features
- ✅ Git-Based Versioning: Sync prompts from one or multiple Git repositories.
- ✅ Centralized Management: Use a single source of truth for all your prompts and rules.
- ✅ Validation: Catch errors in your prompt files before deploying them.
- ✅ SCAFF Method Validation: Analyze prompt quality using the SCAFF methodology (Specific, Contextual, Actionable, Formatted, Focused) with scoring and actionable suggestions.
- ✅ Functional Testing: Test prompt outputs with YAML-based test scenarios and assertions.
- ✅ Easy Deployment: A single command to deploy prompts to supported handlers (like Continue).
- ✅ Structured Organization: Recursively discovers files, preserving your subdirectory structure.
- ✅ Multi-Repository Support: Combine company-wide prompts with team-specific ones seamlessly.
- ✅ Centralized Logging: Global verbose mode (`-v`, `-vv`) and file logging (`--log-file`) for debugging.
Table of Contents
- Current Handler Support
- How it Works: The Workflow
- Installation
- Quick Start
- Configuration
- Example Repository
- Core Concepts
- Commands
- Development Setup
- Contributing
- License
Current Handler Support
Prompt Unifier currently supports the following AI assistants as handlers:
- Continue - Full support with YAML frontmatter preservation
- Kilo Code - Full support with pure Markdown deployment (no YAML frontmatter)
We plan to integrate with more AI tools and platforms in the future to provide a wider range of deployment options.
How it Works: The Workflow
Prompt Unifier streamlines the management and deployment of your AI prompts and rules through a clear, three-stage workflow:
┌───────────────────────────┐ ┌───────────────────────┐ ┌────────────────────────────┐
│ Remote Git Repositories │ │ Local Storage │ │ AI Tool Configuration │
│ (e.g., GitHub, GitLab) ├─────────► (~/.prompt-unifier/storage) ├───► (e.g., ~/.continue/prompts)│
│ - Global Prompts │ (Sync) │ - Merged Prompts │(Deploy) │ - Prompts & Rules │
│ - Team Rules │ │ - Merged Rules │ │ for Active Use │
└───────────────────────────┘ └───────────────────────┘ └────────────────────────────┘
▲ ▲
│ │
└─────────────────────────────────────┘
(Version Control & Source of Truth)
1. Sync from Remote Repositories: You define one or more Git repositories containing your structured prompts and rules. Prompt Unifier fetches these, resolving conflicts with a "last-wins" strategy, and stores them in a centralized local directory. This ensures all your content is version-controlled and readily available.
2. Local Storage: All synced prompts and rules reside in a dedicated local storage area (e.g., `~/.prompt-unifier/storage`). This acts as your single source of truth, where content is validated and prepared for deployment.
3. Deploy to AI Tool Configuration: With a simple command, the tool copies the relevant prompts and rules from your local storage to the specific configuration directories of your AI assistant (e.g., `~/.continue/prompts` for the Continue handler). This makes your standardized and validated content immediately available within your AI development environment.
Installation
You can install prompt-unifier directly from PyPI using pip:
pip install prompt-unifier
Prerequisites:
- Python 3.13+
- Git 2.x+
- Poetry (for development)
Quick Start
Get started in 60 seconds:
# 1. Initialize prompt-unifier in your project
# This creates a .prompt-unifier/config.yaml file.
prompt-unifier init
# 2. Sync prompts from a Git repository
# This clones the repo into a centralized local storage.
prompt-unifier sync --repo https://gitlab.com/waewoo/prompt-unifier-data.git
# 3. Deploy prompts to your AI assistant (e.g., Continue)
# This copies the prompts to the .continue/ folder in your project.
prompt-unifier deploy
# 4. Check the status of your synced repositories
prompt-unifier status
Configuration
The prompt-unifier init command creates a .prompt-unifier/config.yaml file. You can configure
the behavior of the tool by editing this file.
Top-Level Parameters
| Parameter | Description | Default |
|---|---|---|
| `repos` | A list of repository configurations defining where to sync prompts from. See Repository Configuration below. | `null` |
| `storage_path` | The local directory where synced prompts and rules are merged and stored. | `~/.prompt-unifier/storage` |
| `deploy_tags` | A list of tags (strings) to filter content during deployment. If set, only content matching these tags will be deployed. | `null` (deploy all) |
| `target_handlers` | A list of specific handlers to deploy to by default (e.g., `['continue', 'kilocode']`). | `null` (deploy to all registered) |
| `handlers` | Handler-specific configuration (e.g., `base_path`). See Handler Configuration below. | `null` |
| `last_sync_timestamp` | [Auto-generated] The timestamp of the last successful sync. Do not edit manually. | `null` |
| `repo_metadata` | [Auto-generated] Details about the last synced state (commit, branch) for each repo. Do not edit manually. | `null` |
Repository Configuration
Each item in the repos list supports the following parameters to control how content is fetched:
| Parameter | Description | Example |
|---|---|---|
| `url` | (Required) The Git repository URL (HTTPS or SSH). | `https://github.com/org/repo.git` |
| `branch` | The specific branch to sync from. If `null`, uses the repo's default branch. | `develop` |
| `auth_config` | Authentication details (e.g., `{ "method": "token", "token": "..." }`). | `null` |
| `include_patterns` | A list of glob patterns to include. Only matching files will be synced. | `["prompts/python/*"]` |
| `exclude_patterns` | A list of glob patterns to exclude. Applied after includes. | `["**/deprecated/*"]` |
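To illustrate how include and exclude patterns might interact (includes applied first, then excludes), here is a minimal sketch. The function name `filter_paths` is hypothetical, not part of the tool's API, and it uses `fnmatch`, whose `*` also crosses `/` separators, so the semantics are only approximate:

```python
from fnmatch import fnmatch

def filter_paths(paths, include_patterns=None, exclude_patterns=None):
    """Apply include globs, then exclude globs, to repo-relative paths.

    Illustrative only: fnmatch's '*' also matches '/', which differs
    slightly from full glob semantics.
    """
    kept = []
    for path in paths:
        if include_patterns and not any(fnmatch(path, p) for p in include_patterns):
            continue  # not matched by any include pattern
        if exclude_patterns and any(fnmatch(path, p) for p in exclude_patterns):
            continue  # excluded after includes, as documented
        kept.append(path)
    return kept

paths = ["prompts/python/a.md", "prompts/go/b.md", "prompts/python/deprecated/x.md"]
result = filter_paths(paths, ["prompts/python/*"], ["**/deprecated/*"])
# keeps only prompts/python/a.md
```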
Handler Configuration
The handlers section allows you to customize behavior for specific tools (e.g., continue,
kilocode).
| Parameter | Description | Example |
|---|---|---|
| `base_path` | Overrides the default deployment directory for the handler. Supports environment variables like `$HOME`. | `"$HOME/.continue-dev"` |
Full Configuration Example
Below is a complete .prompt-unifier/config.yaml example demonstrating advanced usage.
Note: Advanced repository settings like auth_config, include_patterns, and
exclude_patterns cannot be set via CLI flags (like --repo). You must edit the config.yaml
file directly to use them.
repos:
# Simple public repository
- url: https://github.com/company/public-prompts.git
branch: main
# Private repository with authentication and filtering
- url: git@gitlab.com:company/internal-prompts.git
branch: develop
auth_config:
method: ssh_key
# Optional: path to specific key if not using default agent
# key_path: ~/.ssh/id_rsa_company
include_patterns:
- "prompts/backend/**"
- "rules/python/*"
exclude_patterns:
- "**/deprecated/**"
- "**/*.tmp"
storage_path: ~/.prompt-unifier/storage
# Deploy only content with these tags
deploy_tags:
- python
- backend
- security
# Deploy to these handlers by default
target_handlers:
- continue
- kilocode
# Handler-specific configuration
handlers:
continue:
base_path: $PWD/.continue
kilocode:
base_path: $HOME/.kilocode
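The `base_path` values above embed environment variables such as `$HOME` and `$PWD`. A minimal sketch of how such values could be resolved, assuming standard-library expansion (the tool's actual resolution logic may differ):

```python
import os
from pathlib import Path

def resolve_base_path(raw: str) -> Path:
    """Expand environment variables (e.g. $HOME) and '~' in a base_path."""
    return Path(os.path.expanduser(os.path.expandvars(raw)))

# e.g. resolve_base_path("$HOME/.continue-dev") -> /home/<user>/.continue-dev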
Example Repository
An example repository with prompts and rules is available as a template:
prompt-unifier-data - A collection of DevOps-focused prompts and coding rules ready to use with Prompt Unifier.
This repository includes:
- Structured prompts and rules with proper YAML frontmatter
- GitLab CI pipeline for validation
- Auto-bump release mechanism with semantic versioning
You can fork this repository to create your own prompt collection, or use it directly as a starting point.
Core Concepts
The tool manages two types of files, both using YAML frontmatter.
Prompts
AI prompt templates, typically stored in a prompts/ directory within your Git repository.
Example: prompts/backend/api-design.md
---
title: api-design-review
description: Reviews an API design for best practices.
version: 1.1.0
tags: [python, api, backend]
---
# API Design Review
You are an expert API designer. Please review the following API design...
Rules
Coding standards and best practices, typically stored in a rules/ directory.
Example: rules/testing/pytest-best-practices.md
---
title: pytest-best-practices
description: Best practices for writing tests with pytest.
category: standards
tags: [python, pytest]
version: 1.0.0
---
# Pytest Best Practices
- Use fixtures for setup and teardown.
- Keep tests small and focused.
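Both file types pair a `---`-delimited YAML frontmatter block with a Markdown body. A dependency-free sketch of splitting the two (the real tool presumably uses a full YAML parser; this simplified version handles only flat `key: value` pairs):

```python
def split_frontmatter(text: str):
    """Split a '---'-delimited frontmatter block from the Markdown body.

    Returns (metadata_dict, body). Simplified: parses only flat
    'key: value' pairs, not full YAML.
    """
    if not text.startswith("---\n"):
        return {}, text
    _, fm, body = text.split("---\n", 2)
    meta = {}
    for line in fm.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body

doc = """---
title: pytest-best-practices
version: 1.0.0
---
# Pytest Best Practices
"""
meta, body = split_frontmatter(doc)
# meta["title"] == "pytest-best-practices"; body starts at the heading
```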
Commands
The CLI provides several commands to manage your prompts.
Global Options
These options can be used with any command:
- `--verbose`, `-v`: Increase verbosity level. Use `-v` for INFO, `-vv` for DEBUG.
- `--log-file TEXT`: Write logs to a file for persistent debugging.
- `--version`, `-V`: Show version and exit.
- `--help`: Show help message and exit.
# Run any command with verbose output
prompt-unifier -v validate
# Debug mode with file logging
prompt-unifier -vv --log-file debug.log sync --repo https://github.com/example/prompts.git
init
Initialize prompt-unifier configuration in your current directory.
Creates a .prompt-unifier/config.yaml file that tracks which repositories you sync from and your
deployment settings. It also sets up a local storage directory (default:
~/.prompt-unifier/storage/) with prompts/ and rules/ subdirectories and a .gitignore file.
Options:
- `--storage-path TEXT`: Specify a custom storage directory path instead of the default.
sync
Synchronize prompts from Git repositories.
Clones or pulls prompts from one or more Git repositories into a centralized local storage path. You
can specify repositories via the command line or in the config.yaml file. The configuration is
updated with metadata about the synced repositories.
Multi-Repository Sync with Last-Wins Strategy: When multiple repositories are synced, files with identical paths will be overwritten by content from later repositories in the sync order.
Options:
- `--repo TEXT`: Git repository URL (can be specified multiple times).
- `--storage-path TEXT`: Override the default storage path for this sync.
Example:
prompt-unifier sync \
--repo https://github.com/company/global-prompts.git \
--repo https://github.com/team/team-prompts.git \
--storage-path /custom/path/storage
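The last-wins behaviour described above reduces to overwriting entries at identical relative paths, with later repositories taking precedence. A sketch (the `merge_last_wins` helper and the dict-of-files representation are illustrative assumptions, not the tool's internals):

```python
def merge_last_wins(repo_file_maps):
    """Merge {relative_path: content} maps; later repos win on conflicts."""
    merged = {}
    for file_map in repo_file_maps:  # iterate in configured sync order
        merged.update(file_map)      # later entries overwrite earlier ones
    return merged

global_repo = {"prompts/review.md": "global version"}
team_repo = {"prompts/review.md": "team version", "rules/style.md": "team rules"}
merged = merge_last_wins([global_repo, team_repo])
# merged["prompts/review.md"] == "team version"
```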
list
List available prompts and rules.
Displays a table of all available prompts and rules in your centralized storage. You can filter and sort the content using various options.
Options:
- `--tool`, `-t TEXT`: Filter content by target tool handler.
- `--tag TEXT`: Filter content by a specific tag.
- `--sort`, `-s TEXT`: Sort content by 'name' (default) or 'date'.
Examples:
# List all content
prompt-unifier list
# List prompts tagged "python" with verbose output
prompt-unifier -v list --tag python
# List content sorted by date
prompt-unifier list --sort date
status
Check sync status.
Displays the synchronization status, including when each repository was last synced and whether new commits are available on the remote. It also checks the deployment status of prompts and rules against configured handlers.
validate
Validate prompt and rule files.
Checks prompt and rule files for correct YAML frontmatter, required fields, and valid syntax. Also includes SCAFF methodology validation (Specific, Contextual, Actionable, Formatted, Focused) to ensure prompt quality. You can validate the central storage or a local directory of files.
Options:
- `[DIRECTORY]`: Optional. Directory or file to validate (defaults to synchronized storage).
- `--json`: Output validation results in JSON format.
- `--type TEXT`, `-t TEXT`: Content type to validate: 'all', 'prompts', or 'rules' [default: 'all'].
- `--scaff/--no-scaff`: Enable/disable SCAFF methodology validation [default: enabled].
SCAFF Validation:
By default, the validate command analyzes your prompts against the SCAFF methodology:
- Specific: Clear requirements with measurable goals
- Contextual: Background information and problem context
- Actionable: Concrete action steps and instructions
- Formatted: Proper Markdown structure and organization
- Focused: Single topic with appropriate length (500-2000 words optimal)
Each prompt receives a score (0-100) and actionable suggestions for improvement. SCAFF validation is
non-blocking (warnings only) and can be disabled with --no-scaff.
Examples:
# Validate with SCAFF checks (default)
prompt-unifier validate
# Validate without SCAFF checks
prompt-unifier validate --no-scaff
# Validate a local directory with SCAFF
prompt-unifier validate ./my-prompts/
# Get JSON output with SCAFF scores
prompt-unifier validate --json
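To make the "Focused" criterion concrete, here is a sketch of a length check against the documented 500-2000 word window. The function name and return shape are assumptions for illustration; the tool's actual scoring internals are not documented here:

```python
def check_focused(body: str, low: int = 500, high: int = 2000):
    """Return (ok, suggestion) for the SCAFF 'Focused' length criterion."""
    words = len(body.split())
    if words < low:
        return False, f"Prompt has {words} words; consider adding detail (>= {low})."
    if words > high:
        return False, f"Prompt has {words} words; consider splitting it (<= {high})."
    return True, None

# A 600-word prompt falls inside the optimal window
ok, suggestion = check_focused("word " * 600)
```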
test
Run functional tests for prompt files using AI.
Discovers .test.yaml files and executes their test scenarios using the configured AI provider.
This allows you to verify that your prompts perform as intended with real LLM models.
Options:
- `[TARGETS...]`: Optional. List of files or directories to test (defaults to synchronized storage).
- `--provider`, `-p`: Optional. AI provider/model to use (e.g., `gpt-4o`, `mistral/codestral-latest`). Overrides any configuration in `config.yaml` or environment variables.
Functional Testing with AI:
The test command enables functional testing of prompts by executing them with real AI providers
and validating the responses.
Setup:
- Configure API keys in `.env` (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.)
- Create a `.test.yaml` file alongside your prompt.
Environment Variables:
- `DEFAULT_LLM_MODEL`: Set the default AI model to use (e.g., `mistral/codestral-latest`) if not specified in the test file.
Example Test File:
scenarios:
- description: "Test code generation"
input: "Write a function..."
expect:
- type: contains
value: "def function"
Examples:
# Run tests for all prompts in storage
prompt-unifier test
# Run tests for specific prompt files
prompt-unifier test prompts/code-review.md prompts/refactor.md
# Run tests for an entire directory
prompt-unifier test prompts/python/
# Run tests with verbose output for detailed AI execution logs
prompt-unifier -v test prompts/backend/
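A scenario's `expect` entries are assertions against the model's response. A sketch of evaluating the `contains` type shown in the example test file (other assertion types, and this function's name, are assumptions):

```python
def evaluate_expectations(response: str, expectations):
    """Check an AI response against scenario expectations (sketch).

    Only the 'contains' type from the example test file is handled here.
    Returns a list of failure messages; empty means the scenario passed.
    """
    failures = []
    for exp in expectations:
        if exp["type"] == "contains" and exp["value"] not in response:
            failures.append(f"missing substring: {exp['value']!r}")
    return failures

expects = [{"type": "contains", "value": "def function"}]
passed = evaluate_expectations("def function_name(x):\n    return x", expects)
# passed == []  (no failures)
```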
deploy
Deploy prompts and rules to tool handlers.
Copies the synchronized prompts and rules to the configuration directories of your AI coding assistants.
- Default Handler: `continue`
- Supported Handlers: `continue`, `kilocode`
- Default Destinations:
  - Continue: `./.continue/prompts/` and `./.continue/rules/`
  - Kilo Code: `./.kilocode/workflows/` and `./.kilocode/rules/`
Options:
- `--name TEXT`: Deploy only a specific prompt or rule by name.
- `--tags TEXT`: Filter content to deploy by tags (comma-separated).
- `--handlers TEXT`: Specify target handlers for deployment (comma-separated). Supported: 'continue', 'kilocode'.
- `--base-path PATH`: Custom base path for handler deployment (overrides `config.yaml`).
- `--clean`: Remove orphaned prompts/rules in the destination that are not in your source (creates backups before removal).
- `--dry-run`: Preview deployment without executing any file operations.
Examples:
# Deploy only prompts tagged "python", with cleanup, and preview changes
prompt-unifier deploy --tags python --clean --dry-run
# Deploy to Kilo Code handler
prompt-unifier deploy --handlers kilocode
# Deploy to both Continue and Kilo Code
prompt-unifier deploy --handlers continue,kilocode
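Tag filtering as described above (deploy everything when no tags are configured, otherwise require at least one shared tag) can be sketched as follows; the helper name is illustrative, not the tool's API:

```python
def matches_deploy_tags(item_tags, deploy_tags):
    """True if the item shares at least one tag with deploy_tags.

    An empty or unset deploy_tags means everything deploys, matching
    the documented default.
    """
    if not deploy_tags:
        return True
    return bool(set(item_tags) & set(deploy_tags))

# A prompt tagged [python, api] matches deploy_tags [python, backend]
matched = matches_deploy_tags(["python", "api"], ["python", "backend"])
```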
Development Setup
If you want to contribute to the development of prompt-unifier:
1. Clone the repository:
git clone https://gitlab.com/waewoo/prompt-unifier.git
cd prompt-unifier
2. Install dependencies using Poetry:
poetry install
3. Install pre-commit hooks (runs linters and formatters automatically before each commit):
poetry run pre-commit install
Running Checks
This project uses a Makefile-driven architecture that serves as the single entry point for all development and CI/CD operations. Commands are organized into functional groups:
Environment Setup
- `make env-install`: Install dependencies and git hooks (first-time setup)
- `make env-update`: Update all dependencies (refreshing the lock file)
- `make env-clean`: Clean up temporary files and caches
Application Development
- `make app-run ARGS="--version"`: Run the CLI with arguments
- `make app-lint`: Run static analysis (lint, format, types via pre-commit)
- `make app-test`: Run unit tests with coverage
- `make app-check-all`: Run FULL validation (lint + test + CI check)
CI/CD Simulation
- `make ci-pipeline`: Run the FULL GitLab pipeline locally in Docker
- `make ci-job JOB=<name>`: Run a specific GitLab CI job locally
- `make ci-validate`: Validate `.gitlab-ci.yml` syntax
- `make ci-list`: List all available CI jobs
- `make ci-image-build`: Build the custom CI base Docker image
- `make ci-image-push`: Push the CI base image to the registry
Security Scanning
- `make sec-all`: Run ALL security scans (code + secrets + deps)
- `make sec-code`: SAST scan with Bandit
- `make sec-secrets`: Secret detection
- `make sec-deps`: Dependency vulnerability check with pip-audit
Package & Release
- `make pkg-build`: Build wheel/sdist packages
- `make pkg-changelog`: Generate changelog
- `make pkg-notes VERSION=x.y.z`: Generate release notes
- `make pkg-publish VERSION_BUMP=<patch|minor|major>`: Create a release and push tags
- `make pkg-prepare-release`: Auto-bump version (CI only)
- `make pkg-publish-package`: Upload to PyPI (CI only)
Documentation
- `make docs-install`: Install documentation dependencies
- `make docs-live PORT=8000`: Serve docs locally with live reload
- `make docs-build`: Build the static documentation site
Run make help to see all available targets with descriptions.
🤖 AI Code Review
Automated reviews of Merge Requests via pr-agent.
🏠 Local Usage
Prerequisites
- uv: Install uv (for isolated execution)
- GitLab token with `api` scope
- LLM API key: Mistral, Gemini, etc.
Setup
# Copy .env.example to .env and fill in tokens
cp .env.example .env
Run a review
make review MR_URL=https://gitlab.com/waewoo/prompt-unifier/-/merge_requests/123
(This automatically runs pr-agent in an isolated environment via uvx)
Change LLM provider
Edit .pr_agent.toml:
[config]
model = "gemini/gemini-3-flash-preview" # or "anthropic/claude-3-5-sonnet"
git_provider = "gitlab"
[gitlab]
url = "https://gitlab.com"
[github]
# SECURITY: Non-functional placeholder (not a real token)
# Required due to pr-agent bug - DO NOT replace with real GitHub token
user_token = "ghp_PLACEHOLDER_NotARealToken_DoNotReplace" # pragma: allowlist secret
Configure in .env:
# Required
GITLAB_TOKEN=your-gitlab-token
# LLM Model (change this line to switch providers)
PR_AGENT_MODEL=mistral/mistral-large-latest
# LLM API Keys (add the key for your chosen provider)
MISTRAL_API_KEY=your-key # For mistral/* models
# GEMINI_API_KEY=your-key # For gemini/* models
# ANTHROPIC_API_KEY=your-key # For anthropic/* models
# OPENAI_API_KEY=your-key # For openai/* models
To change LLM provider, just edit PR_AGENT_MODEL in .env:
# Switch to Gemini
PR_AGENT_MODEL=gemini/gemini-3-flash-preview
# Or use Claude
PR_AGENT_MODEL=anthropic/claude-3-5-sonnet
No need to modify the Makefile or `.pr_agent.toml`!
☁️ GitLab CI Usage
Configuration
Add the following variables in GitLab: Settings → CI/CD → Variables
Required variables (✅ Masked, ❌ NOT Protected):
| Variable | Description | Example |
|---|---|---|
| `GITLAB_TOKEN` | Token with `api` scope | `glpat-xxx...` |
| `MISTRAL_API_KEY` | Your Mistral AI key | `xxx...` |
| `PR_AGENT_MODEL` | LLM model to use | `mistral/devstral-latest` |
Optional variables:
| Variable | Description | Default |
|---|---|---|
| `PR_AGENT_MAX_TOKENS` | Max tokens for custom models | `256000` |
| `PR_AGENT_GIT_PROVIDER` | Git provider | `gitlab` |
| `AI_REVIEW_ENABLED` | Set to `false` to disable | `true` |
⚠️ Important: Variables must be Masked but NOT Protected. Protected variables are only available on protected branches (main/master). Feature branches need access to these variables.
Available Jobs
The CI provides 3 manual jobs in the pull-request stage (run in parallel):
- `pr-review`: Full AI code review with suggestions
- `pr-improve`: Code improvement suggestions
- `pr-describe`: Generate MR description
To run: Go to your MR → Pipelines → Click ▶️ on the job you want.
📋 Supported Models (Examples)
| Provider | Model | Variable |
|---|---|---|
| Mistral | `mistral-large-latest` | `MISTRAL_API_KEY` |
| Gemini | `gemini-3-flash-preview` | `GEMINI_API_KEY` |
| Claude | `claude-3-5-sonnet` | `ANTHROPIC_API_KEY` |
🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for details.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.