
PAL - Prompt Assembly Language


PAL (Prompt Assembly Language) is a framework for managing LLM prompts as versioned, composable software artifacts. It treats prompt engineering with the same rigor as software engineering, focusing on modularity, versioning, and testability.

⚡ Features

  • Modular Components: Break prompts into reusable, versioned components
  • Template System: Powerful Jinja2-based templating with variable injection
  • Dependency Management: Import and compose components from local files or URLs
  • LLM Integration: Built-in support for OpenAI, Anthropic, and custom providers
  • Evaluation Framework: Comprehensive testing system for prompt validation
  • Rich CLI: Beautiful command-line interface with syntax highlighting
  • Flexible Extensions: Use .pal/.pal.lib or .yml/.lib.yml extensions
  • Type Safety: Full Pydantic v2 validation for all schemas
  • Observability: Structured logging and execution tracking
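
As an illustration of the Pydantic-backed type safety, a hypothetical schema for the header fields used in the sample files in this README might look like the following. This is a sketch only; the real schemas ship inside the `pal` package and may differ:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical sketch of a PAL prompt-file header schema (Pydantic v2);
# field names are taken from the examples in this README, not from
# pal's actual internals.
class PromptHeader(BaseModel):
    pal_version: str
    id: str
    version: str
    description: str

header = PromptHeader(
    pal_version="1.0",
    id="classify-user-intent",
    version="1.0.0",
    description="Classifies user queries into intent categories",
)

try:
    PromptHeader(pal_version="1.0", id="incomplete")  # missing fields
except ValidationError as exc:
    print(f"rejected with {len(exc.errors())} validation errors")
```

Strict schemas like this are what lets `pal validate` reject malformed files before they ever reach a model.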

📦 Installation

# Install with uv (recommended)
uv add pal-py

# Or with pip
pip install pal-py

📁 Project Structure

my_pal_project/
├── prompts/
│   ├── classify_intent.pal     # or .yml for better IDE support
│   └── code_review.pal
├── libraries/
│   ├── behavioral_traits.pal.lib    # or .lib.yml
│   ├── reasoning_strategies.pal.lib
│   └── output_formats.pal.lib
└── evaluation/
    └── classify_intent.eval.yaml

🚀 Quick Start

1. Create a Component Library

# libraries/traits.pal.lib
pal_version: "1.0"
library_id: "com.example.traits"
version: "1.0.0"
description: "Behavioral traits for AI agents"
type: "trait"

components:
  - name: "helpful_assistant"
    description: "A helpful and polite assistant"
    content: |
      You are a helpful, harmless, and honest AI assistant. You provide
      accurate information while being respectful and considerate.

2. Create a Prompt Assembly

# prompts/classify_intent.pal
pal_version: "1.0"
id: "classify-user-intent"
version: "1.0.0"
description: "Classifies user queries into intent categories"

imports:
  traits: "./libraries/traits.pal.lib"

variables:
  - name: "user_query"
    type: "string"
    description: "The user's input query"
  - name: "available_intents"
    type: "list"
    description: "List of available intent categories"

composition:
  - "{{ traits.helpful_assistant }}"
  - ""
  - "## Task"
  - "Classify this user query into one of the available intents:"
  - ""
  - "**Available Intents:**"
  - "{% for intent in available_intents %}"
  - "- {{ intent.name }}: {{ intent.description }}"
  - "{% endfor %}"
  - ""
  - "**User Query:** {{ user_query }}"
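
To see how a composition renders, the list above can be joined and fed through Jinja2 directly. This standalone sketch stubs out the `traits` import and enables `trim_blocks`; PAL's compiler configures its own Jinja2 environment, which may differ:

```python
from jinja2 import Environment

# The composition list from classify_intent.pal, verbatim.
composition = [
    "{{ traits.helpful_assistant }}",
    "",
    "## Task",
    "Classify this user query into one of the available intents:",
    "",
    "**Available Intents:**",
    "{% for intent in available_intents %}",
    "- {{ intent.name }}: {{ intent.description }}",
    "{% endfor %}",
    "",
    "**User Query:** {{ user_query }}",
]

env = Environment(trim_blocks=True)  # drop the newline after {% ... %} tags
template = env.from_string("\n".join(composition))
rendered = template.render(
    traits={"helpful_assistant": "You are a helpful assistant."},  # stubbed import
    user_query="Take me to google.com",
    available_intents=[{"name": "navigate", "description": "Go to URL"}],
)
print(rendered)
```

Note that Jinja2's attribute access (`intent.name`) falls back to dictionary lookup, which is why plain dicts work as loop items here.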

3. Use the CLI

# Compile a prompt
pal compile prompts/classify_intent.pal --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Execute with an LLM
pal execute prompts/classify_intent.pal --model gpt-4 --provider openai --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Validate PAL files
pal validate prompts/ --recursive

# Run evaluation tests
pal evaluate evaluation/classify_intent.eval.yaml

4. Use Programmatically

import asyncio
from pal import PromptCompiler, PromptExecutor, MockLLMClient

async def main():
    # Set up components; the executor is only needed when actually
    # running a prompt against an LLM, not for compile-only use
    compiler = PromptCompiler()
    llm_client = MockLLMClient("Mock response")
    executor = PromptExecutor(llm_client)

    # Compile prompt
    variables = {
        "user_query": "What's the weather?",
        "available_intents": [{"name": "search", "description": "Search for info"}]
    }

    compiled_prompt = await compiler.compile_from_file(
        "prompts/classify_intent.pal",
        variables
    )

    print("Compiled Prompt:", compiled_prompt)

asyncio.run(main())

🧪 Evaluation System

Create test suites to validate your prompts:

# evaluation/classify_intent.eval.yaml
pal_version: "1.0"
prompt_id: "classify-user-intent"
target_version: "1.0.0"

test_cases:
  - name: "navigation_test"
    variables:
      user_query: "Go to google.com"
      available_intents: [{ "name": "navigate", "description": "Visit URL" }]
    assertions:
      - type: "json_valid"
      - type: "contains"
        config:
          text: "navigate"
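
The two assertion types above are easy to reason about. Here is an illustrative sketch of what `json_valid` and `contains` checks amount to (the function names and config shapes are ours, not PAL's internals):

```python
import json

# Sketch of the json_valid assertion: does the model output parse as JSON?
def check_json_valid(output: str) -> bool:
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

# Sketch of the contains assertion: does the output include config["text"]?
def check_contains(output: str, config: dict) -> bool:
    return config["text"] in output

response = '{"intent": "navigate"}'
assert check_json_valid(response)
assert check_contains(response, {"text": "navigate"})
```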

🏗️ Architecture

PAL follows modern software engineering principles:

  • Schema Validation: All files are validated against strict Pydantic schemas
  • Dependency Resolution: Automatic import resolution with circular dependency detection
  • Template Engine: Jinja2 for powerful variable interpolation and logic
  • Observability: Structured logging with execution metrics and cost tracking
  • Type Safety: Full type hints and runtime validation
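
Circular dependency detection can be pictured as a depth-first search over the import graph. A minimal sketch under that assumption (PAL's actual resolver is internal and may work differently):

```python
# Detect a cycle in a mapping of file -> list of imported files.
def find_import_cycle(graph):
    """Return a cycle as a list of names (first == last), or None."""
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:
                # Back edge: slice the current path at the repeated node.
                return path[path.index(dep):] + [dep]
            if dep not in done:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in done:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

print(find_import_cycle({"a.pal": ["b.lib"], "b.lib": ["a.pal"]}))
# -> ['a.pal', 'b.lib', 'a.pal']
```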

🛠️ CLI Commands

| Command | Description |
| --- | --- |
| `pal compile` | Compile a PAL file into a prompt string |
| `pal execute` | Compile and execute a prompt with an LLM |
| `pal validate` | Validate PAL files for syntax and semantic errors |
| `pal evaluate` | Run evaluation tests against prompts |
| `pal info` | Show detailed information about PAL files |

🧩 Component Types

PAL supports different types of reusable components:

  • persona: AI personality and role definitions
  • task: Specific instructions or objectives
  • context: Background information and knowledge
  • rules: Constraints and guidelines
  • examples: Few-shot learning examples
  • output_schema: Output format specifications
  • reasoning: Thinking strategies and methodologies
  • trait: Behavioral characteristics
  • note: Documentation and comments

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

🗺️ Roadmap

  • PAL Registry: Centralized repository for sharing components
  • Visual Builder: Drag-and-drop prompt composition interface
  • IDE Extensions: VS Code and other editor integrations
