
PAL - Prompt Assembly Language


PAL (Prompt Assembly Language) is a framework for managing LLM prompts as versioned, composable software artifacts. It treats prompt engineering with the same rigor as software engineering, focusing on modularity, versioning, and testability.

See also the Node.js version of PAL (work in progress).

⚡ Features

  • Modular Components: Break prompts into reusable, versioned components
  • Template System: Powerful Jinja2-based templating with variable injection
  • Dependency Management: Import and compose components from local files or URLs
  • LLM Integration: Built-in support for OpenAI, Anthropic, and custom providers
  • Evaluation Framework: Comprehensive testing system for prompt validation
  • Rich CLI: Beautiful command-line interface with syntax highlighting
  • Flexible Extensions: Use .pal/.pal.lib or .yml/.lib.yml extensions
  • Type Safety: Full Pydantic v2 validation for all schemas
  • Observability: Structured logging and execution tracking

📦 Installation

# Install with uv (recommended)
uv add pal-framework

# Or with pip
pip install pal-framework

📁 Project Structure

my_pal_project/
├── prompts/
│   ├── classify_intent.pal     # or .yml for better IDE support
│   └── code_review.pal
├── libraries/
│   ├── behavioral_traits.pal.lib    # or .lib.yml
│   ├── reasoning_strategies.pal.lib
│   └── output_formats.pal.lib
└── evaluation/
    └── classify_intent.eval.yaml

🚀 Quick Start

1. Create a Component Library

For a detailed guide, read this.

# libraries/traits.pal.lib
pal_version: "1.0"
library_id: "com.example.traits"
version: "1.0.0"
description: "Behavioral traits for AI agents"
type: "trait"

components:
  - name: "helpful_assistant"
    description: "A helpful and polite assistant"
    content: |
      You are a helpful, harmless, and honest AI assistant. You provide
      accurate information while being respectful and considerate.
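
Once a library file like the one above exists, its components can be loaded by name. As a rough sketch of what happens under the hood (PAL's own loader also validates the parsed data against Pydantic schemas), here is the library parsed by hand with PyYAML; the inlined `LIBRARY_TEXT` stands in for reading `libraries/traits.pal.lib` from disk:

```python
# A minimal sketch of loading a component library by hand with PyYAML.
# PAL's real loader adds schema validation on top of this.
import yaml

LIBRARY_TEXT = """\
pal_version: "1.0"
library_id: "com.example.traits"
version: "1.0.0"
description: "Behavioral traits for AI agents"
type: "trait"
components:
  - name: "helpful_assistant"
    description: "A helpful and polite assistant"
    content: |
      You are a helpful, harmless, and honest AI assistant.
"""

library = yaml.safe_load(LIBRARY_TEXT)
# Index components by name so templates can reference them as traits.<name>
components = {c["name"]: c["content"] for c in library["components"]}
print(components["helpful_assistant"])
```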

2. Create a Prompt Assembly

For a detailed guide, read this.

# prompts/classify_intent.pal
pal_version: "1.0"
id: "classify-user-intent"
version: "1.0.0"
description: "Classifies user queries into intent categories"

imports:
  traits: "./libraries/traits.pal.lib"

variables:
  - name: "user_query"
    type: "string"
    description: "The user's input query"
  - name: "available_intents"
    type: "list"
    description: "List of available intent categories"

composition:
  - "{{ traits.helpful_assistant }}"
  - ""
  - "## Task"
  - "Classify this user query into one of the available intents:"
  - ""
  - "**Available Intents:**"
  - "{% for intent in available_intents %}"
  - "- {{ intent.name }}: {{ intent.description }}"
  - "{% endfor %}"
  - ""
  - "**User Query:** {{ user_query }}"

3. Use the CLI

# Compile a prompt
pal compile prompts/classify_intent.pal --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Execute with an LLM
pal execute prompts/classify_intent.pal --model gpt-4 --provider openai --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Validate PAL files
pal validate prompts/ --recursive

# Run evaluation tests
pal evaluate evaluation/classify_intent.eval.yaml

4. Use Programmatically

import asyncio
from pal import PromptCompiler, PromptExecutor, MockLLMClient

async def main():
    # Set up the compiler and a mock client so the example runs offline
    compiler = PromptCompiler()
    llm_client = MockLLMClient("Mock response")
    executor = PromptExecutor(llm_client)  # runs compiled prompts against the client

    # Compile prompt
    variables = {
        "user_query": "What's the weather?",
        "available_intents": [{"name": "search", "description": "Search for info"}]
    }

    compiled_prompt = await compiler.compile_from_file(
        "prompts/classify_intent.pal",
        variables
    )

    print("Compiled Prompt:", compiled_prompt)

asyncio.run(main())

🧪 Evaluation System

Create test suites to validate your prompts:

# evaluation/classify_intent.eval.yaml
pal_version: "1.0"
prompt_id: "classify-user-intent"
target_version: "1.0.0"

test_cases:
  - name: "navigation_test"
    variables:
      user_query: "Go to google.com"
      available_intents: [{ "name": "navigate", "description": "Visit URL" }]
    assertions:
      - type: "json_valid"
      - type: "contains"
        config:
          text: "navigate"

🏗️ Architecture

PAL follows modern software engineering principles:

  • Schema Validation: All files are validated against strict Pydantic schemas
  • Dependency Resolution: Automatic import resolution with circular dependency detection
  • Template Engine: Jinja2 for powerful variable interpolation and logic
  • Observability: Structured logging with execution metrics and cost tracking
  • Type Safety: Full type hints and runtime validation
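
To make the circular-dependency point concrete, here is a hand-rolled sketch of cycle detection over an import graph using depth-first search (the graph shape and file names are illustrative; PAL's resolver performs an equivalent check internally):

```python
# DFS-based cycle detection over an import graph: file -> files it imports.
def find_cycle(graph):
    """Return one import cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # Back edge to a node on the current path: found a cycle
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(graph):
        if color.get(node, WHITE) == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

imports = {
    "classify_intent.pal": ["traits.pal.lib"],
    "traits.pal.lib": ["formats.pal.lib"],
    "formats.pal.lib": ["traits.pal.lib"],   # circular import
}
cycle = find_cycle(imports)
print(cycle)
```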

🛠️ CLI Commands

| Command | Description |
| --- | --- |
| `pal compile` | Compile a PAL file into a prompt string |
| `pal execute` | Compile and execute a prompt with an LLM |
| `pal validate` | Validate PAL files for syntax and semantic errors |
| `pal evaluate` | Run evaluation tests against prompts |
| `pal info` | Show detailed information about PAL files |

🧩 Component Types

PAL supports different types of reusable components:

  • persona: AI personality and role definitions
  • task: Specific instructions or objectives
  • context: Background information and knowledge
  • rules: Constraints and guidelines
  • examples: Few-shot learning examples
  • output_schema: Output format specifications
  • reasoning: Thinking strategies and methodologies
  • trait: Behavioral characteristics
  • note: Documentation and comments
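
The `type` field of a component is restricted to this set. As a stdlib-only sketch of that validation (PAL itself uses Pydantic v2 models; the `Component` dataclass here is a simplified stand-in):

```python
# Simplified stand-in for PAL's Pydantic component model.
from dataclasses import dataclass

COMPONENT_TYPES = {
    "persona", "task", "context", "rules", "examples",
    "output_schema", "reasoning", "trait", "note",
}

@dataclass
class Component:
    name: str
    type: str
    content: str

    def __post_init__(self):
        # Reject component types outside the documented set
        if self.type not in COMPONENT_TYPES:
            raise ValueError(f"unknown component type: {self.type}")

c = Component(name="helpful_assistant", type="trait", content="Be helpful.")
print(c.type)
```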

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

🗺️ Roadmap

  • PAL Registry: Centralized repository for sharing components
  • Visual Builder: Drag-and-drop prompt composition interface
  • IDE Extensions: VS Code and other editor integrations
