
A tool for migrating and optimizing prompts


Llama Prompt Ops

What is llama-prompt-ops?


llama-prompt-ops is a Python package that automatically optimizes prompts for Llama models. It transforms prompts that work well with other LLMs into prompts that are optimized for Llama models, improving performance and reliability.

Key Benefits:

  • No More Trial and Error: Stop manually tweaking prompts to get better results
  • Fast Optimization: Get Llama-optimized prompts in minutes with template-based optimization
  • Data-Driven Improvements: Use your own examples to create prompts that work for your specific use case
  • Measurable Results: Evaluate prompt performance with customizable metrics

Requirements

To get started with llama-prompt-ops, you'll need:

  • Existing System Prompt: Your existing system prompt that you want to optimize
  • Existing Query-Response Dataset: A JSON file containing query-response pairs (as few as 50 examples) for evaluation and optimization (see prepare your dataset below)
  • Configuration File: A YAML configuration file (config.yaml) specifying model hyperparameters and optimization details (see example configuration)

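A minimal config.yaml might look like the sketch below. The field names here are illustrative assumptions, not the library's authoritative schema; consult the bundled example configuration for the real one.

```yaml
# Hypothetical config.yaml sketch -- field names are illustrative.
system_prompt:
  file: prompts/system_prompt.txt   # the prompt you want to optimize

dataset:
  path: data/dataset.json           # JSON file of query-response pairs

model:
  name: openrouter/meta-llama/llama-3.3-70b-instruct
  temperature: 0.0
```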
How It Works

┌──────────────────────────┐  ┌──────────────────────────┐  ┌────────────────────┐    
│  Existing System Prompt  │  │  Query-Response Dataset  │  │ YAML Configuration │    
└────────────┬─────────────┘  └─────────────┬────────────┘  └───────────┬────────┘    
             │                              │                           │             
             │                              │                           │             
             ▼                              ▼                           ▼             
         ┌────────────────────────────────────────────────────────────────────┐
         │                     llama-prompt-ops migrate                       │
         └────────────────────────────────────────────────────────────────────┘
                                            │
                                            │
                                            ▼
                                ┌──────────────────────┐
                                │   Optimized Prompt   │
                                └──────────────────────┘

Simple Workflow

  1. Start with your existing system prompt: Take your existing system prompt that works with other LLMs (see example prompt)
  2. Prepare your dataset: Create a JSON file with query-response pairs for evaluation and optimization
  3. Configure optimization: Set up a simple YAML file with your dataset and preferences (see example configuration)
  4. Run optimization: Execute a single command to transform your prompt
  5. Get results: Receive a Llama-optimized prompt with performance metrics

Real-world Results

HotpotQA

HotpotQA Benchmark Results

These results were measured on the HotpotQA multi-hop reasoning benchmark, which tests a model's ability to answer complex questions requiring information from multiple sources. Our optimized prompts showed substantial improvements over baseline prompts across different model sizes.

Quick Start (5 minutes)

Step 1: Installation

# Create a virtual environment
conda create -n prompt-ops python=3.10
conda activate prompt-ops

# Install from PyPI
pip install llama-prompt-ops

# OR install from source
git clone https://github.com/meta-llama/llama-prompt-ops.git
cd llama-prompt-ops
pip install -e .

Step 2: Create a sample project

This will create a directory called my-project with a sample configuration and dataset in the current folder.

llama-prompt-ops create my-project
cd my-project

Step 3: Set Up Your API Key

Add your API key to the .env file:

OPENROUTER_API_KEY=your_key_here

You can get an OpenRouter API key by creating an account at OpenRouter. For more inference provider options, see Inference Providers.

Step 4: Run Optimization

The optimization will take about 5 minutes.

llama-prompt-ops migrate # defaults to config.yaml if --config not specified

Done! The optimized prompt will be saved to the results directory with performance metrics comparing the original and optimized versions.

For more detail on this use case, see the Basic Tutorial.

Prompt Transformation Example

Below is an example of a transformed system prompt from proprietary LM to Llama:

Original Proprietary LM Prompt

You are a helpful assistant. Extract and return a JSON with the following keys and values:

1. "urgency": one of high, medium, low
2. "sentiment": one of negative, neutral, positive
3. "categories": Create a dictionary with categories as keys and boolean values (True/False), where the value indicates whether the category matches tags like emergency_repair_services, routine_maintenance_requests, etc.

Your complete message should be a valid JSON string that can be read directly.

Optimized Llama Prompt

You are an expert in analyzing customer service messages. Your task is to categorize the following message based on urgency, sentiment, and relevant categories.

Analyze the message and return a JSON object with these fields:

1. "urgency": Classify as "high", "medium", or "low" based on how quickly this needs attention
2. "sentiment": Classify as "negative", "neutral", or "positive" based on the customer's tone
3. "categories": Create a dictionary with facility management categories as keys and boolean values

Only include these exact keys in your response. Return a valid JSON object without code blocks, prefixes, or explanations.
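Because the optimized prompt asks for a bare JSON object with exactly three keys, a caller can validate responses mechanically. The sketch below is an illustrative helper (not part of llama-prompt-ops) that checks a model response against the schema the prompt requests:

```python
import json

def parse_classification(raw: str) -> dict:
    """Validate a model response against the schema the optimized prompt requests.

    Raises ValueError if the response is not the bare JSON object the
    prompt asks for.
    """
    result = json.loads(raw)
    if set(result) != {"urgency", "sentiment", "categories"}:
        raise ValueError(f"unexpected keys: {sorted(result)}")
    if result["urgency"] not in {"high", "medium", "low"}:
        raise ValueError(f"bad urgency: {result['urgency']}")
    if result["sentiment"] not in {"negative", "neutral", "positive"}:
        raise ValueError(f"bad sentiment: {result['sentiment']}")
    if not all(isinstance(v, bool) for v in result["categories"].values()):
        raise ValueError("category values must be booleans")
    return result

response = '{"urgency": "high", "sentiment": "negative", "categories": {"emergency_repair_services": true}}'
parsed = parse_classification(response)
print(parsed["urgency"])  # high
```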

Preparing Your Data

To use llama-prompt-ops for prompt optimization, you'll need to prepare a dataset with your prompts and expected responses. The standard format is a JSON file structured like this:

[
    {
        "question": "Your input query here",
        "answer": "Expected response here"
    },
    {
        "question": "Another input query",
        "answer": "Another expected response"
    }
]

If your data matches this format, you can use the built-in StandardJSONAdapter which will handle it automatically.

Custom Data Formats

If your data is formatted differently, and there isn't a built-in dataset adapter, you can create a custom dataset adapter by extending the DatasetAdapter class. See the Dataset Adapter Selection Guide for more details.
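As a standalone sketch of the transformation such an adapter performs, the class below maps a hypothetical ticket/resolution schema onto the standard question/answer pairs. In llama-prompt-ops you would subclass the library's DatasetAdapter instead; the method names here are illustrative assumptions, not the library's actual API.

```python
import json

class SupportTicketAdapter:
    """Illustrative adapter for records shaped like {"ticket": ..., "resolution": ...}."""

    def __init__(self, path: str):
        self.path = path

    def adapt(self) -> list[dict]:
        """Load records from disk and convert them to the standard format."""
        with open(self.path) as f:
            return self.convert(json.load(f))

    @staticmethod
    def convert(records: list[dict]) -> list[dict]:
        # Map the project-specific keys onto the {"question", "answer"}
        # pairs that the standard JSON format uses.
        return [
            {"question": r["ticket"], "answer": r["resolution"]}
            for r in records
        ]
```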

Multiple Inference Provider Support

llama-prompt-ops supports various inference providers and endpoints to fit your infrastructure needs. See our detailed guide on inference providers for configuration examples with:

  • OpenRouter (cloud-based API)
  • vLLM (local deployment)
  • NVIDIA NIMs (optimized containers)
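Switching providers is typically a change to the model section of the YAML config. The fragment below is a sketch only; the field names are assumptions, so check the inference providers guide for the actual schema:

```yaml
# OpenRouter (cloud API) -- illustrative field names
model:
  name: openrouter/meta-llama/llama-3.3-70b-instruct

# vLLM serving an OpenAI-compatible endpoint locally (illustrative)
# model:
#   name: hosted_vllm/meta-llama/Llama-3.3-70B-Instruct
#   api_base: http://localhost:8000/v1
```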

Documentation and Examples

For more detailed information, check out these resources:

Acknowledgements

This project builds on several awesome open source projects, including DSPy; thanks to the team for their inspiring work!

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
