
A Python library to optimize prompt drafts using LLMs


🧠 leo-prompt-optimizer

leo-prompt-optimizer is a production-grade library and CLI tool that transforms raw prompt drafts into structured, high-performance instructions using a 9-step engineering framework.

Stop "vibes-based" prompting. Use a data-driven approach to optimize, evaluate, and benchmark your prompts across OpenAI, Groq, Anthropic, and Gemini.


🌟 Key Features

  • ⚡ Lightning Fast: Optimized for high-throughput providers like Groq for near-instant iteration.
  • 📊 LLM-as-a-Judge: Built-in G-Eval metrics, hallucination detection, and schema adherence checks.
  • 🖥️ Rich CLI: Beautiful terminal reports with side-by-side diffs and performance tables.
  • 🧩 XML-Structured Output: Automatically reformats prompts into <role>, <task>, and <instructions> blocks for better LLM steerability.
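
The XML restructuring described above can be illustrated with a minimal sketch. The block names match the format example later on this page, but the helper function itself is hypothetical and not part of the library's API:

```python
def to_structured_prompt(role: str, task: str, instructions: list[str]) -> str:
    """Wrap plain prompt parts into <role>/<task>/<instructions> blocks."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1))
    return (
        f"<role>{role}</role>\n"
        f"<task>{task}</task>\n"
        f"<instructions>\n{steps}\n</instructions>"
    )

prompt = to_structured_prompt(
    "You are a Senior Python Security Auditor.",
    "Analyze the provided function for SQL injection vulnerabilities.",
    [
        "Identify all string-formatting operations.",
        "Check for missing parameterized queries.",
    ],
)
```

Delimiting each concern in its own block is what gives the model clearer steering than one undifferentiated paragraph.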

📦 Installation

pip install leo-prompt-optimizer

🖥️ CLI: The "Pro" Workflow

Optimize a prompt and immediately benchmark it against test cases to see if it actually performs better.

leo-prompt --prompt-file draft.txt \
           --provider-name groq \
           --tests tests.json \
           --model your-model-id
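
The exact schema of tests.json is not documented on this page; a plausible shape, purely as an illustration, pairs a test input with the property the judge should check for:

```python
import json

# Hypothetical tests.json contents -- the real schema expected by
# leo-prompt may differ, so treat this structure as illustrative only.
test_cases = [
    {
        "input": "def add(a, b): return a + b",
        "expected": "Mentions the missing type hints and docstring.",
    },
    {
        "input": "def div(a, b): return a / b",
        "expected": "Flags the unhandled ZeroDivisionError.",
    },
]

payload = json.dumps(test_cases, indent=2)
```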

What happens under the hood?

  1. Optimization: Your draft is expanded into a structured "System Prompt."
  2. Execution: Both the Original and Optimized prompts run against your tests.json.
  3. Evaluation: A "Judge" model compares the outputs and generates a performance report.

🔧 Python API Usage

Perfect for integrating prompt optimization into your CI/CD pipelines or internal tools.

1. Initialize Provider

from leo_prompt_optimizer import GroqProvider, LeoOptimizer, PromptEvaluator

# Automatically loads API keys from .env (GROQ_API_KEY, OPENAI_API_KEY, etc.)
provider = GroqProvider()
optimizer = LeoOptimizer(provider, default_model="your-optimizer-model-id")

2. Optimize & Evaluate

draft = "Write a code review for this python function."

# 🚀 Step 1: Optimize
optimized = optimizer.optimize(draft)

# 📊 Step 2: Evaluate
evaluator = PromptEvaluator(provider, optimizer.env, judge_model="your-judge-model-id")
result = evaluator.evaluate(
    original_prompt=draft,
    optimized_prompt=optimized,
    test_input="def add(a, b): return a + b"
)

# The result object prints a beautiful ASCII dashboard automatically
print(result)

🧪 The Evaluation Framework

The library provides objective scores to replace subjective testing:

  • G-Eval (1-5): A multi-dimensional score for coherence and instruction following.
  • Token Efficiency: Percentage of tokens saved (or added) by the structural rewrite.
  • Schema Adherence: Pass/fail check for structured outputs (JSON/Markdown).
  • Hallucination Risk: Detects whether the model fabricates facts not present in the input.
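
Token efficiency, for instance, is straightforward to compute. A sketch using whitespace splitting as a rough token proxy (a real implementation would use the model's actual tokenizer):

```python
def token_efficiency(original: str, optimized: str) -> float:
    """Percentage of tokens saved; negative means tokens were added."""
    orig_tokens = len(original.split())
    opt_tokens = len(optimized.split())
    return 100.0 * (orig_tokens - opt_tokens) / orig_tokens

# A structured rewrite usually *adds* tokens, so efficiency goes negative:
score = token_efficiency(
    "Review this function.",
    "<role>Reviewer</role> <task>Review this function for bugs.</task>",
)
```

A negative score is not necessarily bad: the metric exists to quantify the cost of the added structure so it can be weighed against the quality gains the judge reports.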

🤖 Supported Providers

Each provider reads its API key from the corresponding environment variable:

  • Groq: GROQ_API_KEY
  • OpenAI: OPENAI_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Gemini: GOOGLE_API_KEY
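
Key loading works by checking these environment variables. A minimal sketch of that pattern — the mapping mirrors the table above, but the selection helper is illustrative, not the library's code:

```python
import os

# Provider name -> environment variable holding its API key.
ENV_VARS = {
    "groq": "GROQ_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

def available_providers() -> list[str]:
    """Return the providers whose API key is set in the environment."""
    return [name for name, var in ENV_VARS.items() if os.environ.get(var)]

os.environ["GROQ_API_KEY"] = "gsk-example"  # e.g. loaded from a .env file
providers = available_providers()
```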

📘 Optimized Format Example

Your raw drafts are transformed into high-signal instructions:

<role>You are a Senior Python Security Auditor...</role>
<task>Analyze the provided function for SQL injection vulnerabilities...</task>
<instructions>
1. Identify all string-formatting operations.
2. Check for missing parameterized queries...
</instructions>
<output-format>Return a JSON object with 'severity' and 'fix'.</output-format>
