A Python library to optimize prompt drafts using LLMs
🧠 leo-prompt-optimizer
leo-prompt-optimizer is a Python library that helps developers optimize raw LLM prompts into structured, high-performance instructions using real LLM intelligence.
It supports both Groq (LLaMA, Mixtral, etc.) and OpenAI (GPT-3.5, GPT-4) — making it fast, flexible, and production-ready.
🚀 Features
- 🛠️ Refines vague, messy, or unstructured prompts
- 🧠 Follows a 9-step prompt engineering framework
- 🧩 Supports contextual optimization (with user input & LLM output)
- 🔁 Works with both Groq and OpenAI
- ⚡ Blazing-fast open models via Groq
- 🔐 Secure API key management with `.env` or helper functions
- 🎛️ Lets users pick the model (`gpt-4`, `llama3-70b`, etc.)
📦 Installation
pip install leo-prompt-optimizer
🔧 Setup: API Keys & Client
You can provide API keys either via .env or programmatically, then set the provider.
✅ Option A: Use .env (recommended)
Create a .env file in your project root:
GROQ_API_KEY=sk-your-groq-key
OPENAI_API_KEY=sk-your-openai-key
Then in Python:
from dotenv import load_dotenv
load_dotenv()
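For context, `load_dotenv()` simply reads `KEY=VALUE` lines from the `.env` file into the process environment without overriding variables that are already set. A minimal stdlib-only equivalent, shown purely as an illustration (this helper is not part of the library):

```python
import os

def load_env_file(path=".env"):
    """Illustration of what python-dotenv's load_dotenv does:
    read KEY=VALUE lines into os.environ, skipping blanks and
    comments, without overriding variables already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In practice, just use `python-dotenv` as shown above; the point is only that your keys end up in `os.environ`, where the library can find them.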
✅ Option B: Set manually in code
from leo_prompt_optimizer import set_groq_api_key, set_openai_api_key
set_groq_api_key("[YOUR GROQ KEY]")
set_openai_api_key("[YOUR OPENAI KEY]")
✅ Set your provider
Before calling optimize_prompt, set the backend provider:
from leo_prompt_optimizer import set_client
# For Groq (default)
set_client(provider="groq")
# For OpenAI (with optional custom base URL)
set_client(provider="openai", base_url="https://openrouter.ai/v1")
✍️ Usage Example (with keys in .env)
from dotenv import load_dotenv
load_dotenv()
from leo_prompt_optimizer import (
    optimize_prompt,
    set_groq_api_key,
    set_openai_api_key,
    set_client,
)
# Required: set backend provider
set_client(provider="groq")  # or "openai"
# or, if you have a specific base URL:
set_client(provider="openai", base_url="[YOUR BASE URL]")
# Prompt example
optimized = optimize_prompt(
    prompt_draft="[YOUR PROMPT]",
    user_input_example="[POTENTIAL INPUT EXAMPLE]",  # Optional
    llm_output_example="[POTENTIAL OUTPUT EXAMPLE]",  # Optional
    top_instruction="[POTENTIAL EXTRA CONTEXT, SPECIFIC INSTRUCTION TO FOCUS ON FOR THE LLM]",  # Optional
    model="[YOUR MODEL]",  # Optional: model choice based on your provider (e.g. "gpt-4", "llama3-70b", etc.)
)
print(optimized)
🧠 `user_input_example` and `llm_output_example` are optional but improve accuracy.
💬 `top_instruction` is optional; it adds specific guidance to influence the optimization (e.g., tone, audience, format).
🎛️ `model` is optional; the defaults are:
- `"openai/gpt-oss-20b"` for Groq
- `"gpt-oss-120b"` for OpenAI
📘 Output Format
Optimized prompts follow a clear, modular structure:
Role:
[Define the LLM's persona]
Task:
[Clearly state the specific objective]
Instructions:
* Step-by-step subtasks
Context:
[Any relevant background, constraints, domain]
Output Format:
[e.g., bullet list, JSON, summary]
User Input:
[Original user input or example]
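As an illustration of this structure, you could assemble a prompt in the same Role/Task/Instructions/Context/Output Format/User Input layout by hand. The helper below is a sketch, not a library function:

```python
def build_structured_prompt(role, task, instructions, context, output_format, user_input):
    """Assemble a prompt in the modular structure shown above.
    Illustrative only; optimize_prompt produces this for you."""
    steps = "\n".join(f"* {step}" for step in instructions)
    return (
        f"Role:\n{role}\n\n"
        f"Task:\n{task}\n\n"
        f"Instructions:\n{steps}\n\n"
        f"Context:\n{context}\n\n"
        f"Output Format:\n{output_format}\n\n"
        f"User Input:\n{user_input}"
    )

prompt = build_structured_prompt(
    role="You are a concise technical writer.",
    task="Summarize the release notes.",
    instructions=["Read the notes", "Extract breaking changes", "Summarize in 3 bullets"],
    context="Audience: backend developers.",
    output_format="Bullet list",
    user_input="[release notes here]",
)
print(prompt)
```

Keeping each section explicit like this is what makes optimized prompts easy to review, diff, and reuse.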
🧪 Quick Test (Optional)
python3 test_import.py
Checks that:
- ✅ Import works
- ✅ API keys are found
- ✅ LLM returns a response
🧯 Common Errors & Fixes
| Error | Solution |
|---|---|
| Missing API key | Add the key to `.env` or use `set_groq_api_key()` / `set_openai_api_key()` |
| Client not set | Call `set_client("groq")` or `set_client("openai")` before `optimize_prompt()` |
| Invalid model or 403 | The model may be deprecated or unavailable; try another, or check Groq's model list |
| `ModuleNotFoundError` | Check that `leo-prompt-optimizer` is installed in the active virtual environment |
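To catch the missing-key case early, you can pre-flight-check your environment before calling `set_client()`. The helper below is a sketch (not part of the library) and assumes the `GROQ_API_KEY` / `OPENAI_API_KEY` variable names from the `.env` setup above:

```python
import os

def missing_api_keys(provider):
    """Return the names of required env vars that are unset for `provider`.
    Sketch only; assumes the GROQ_API_KEY/OPENAI_API_KEY names used in .env."""
    required = {"groq": ["GROQ_API_KEY"], "openai": ["OPENAI_API_KEY"]}
    return [name for name in required[provider] if not os.environ.get(name)]

missing = missing_api_keys("groq")
# A non-empty list means you should set those variables before set_client()
```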
💡 Why Use It?
Prompt quality is critical for any GenAI product.
leo-prompt-optimizer helps you:
- ✅ Make prompts explicit and modular
- 🚫 Reduce hallucinations
- 🔁 Improve consistency and reuse
- 🧱 Standardize prompt formats across your stack
File details
Details for the file leo_prompt_optimizer-0.2.1.tar.gz.
File metadata
- Download URL: leo_prompt_optimizer-0.2.1.tar.gz
- Upload date:
- Size: 9.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `d9ad6aa9f4b3c12953907ca639e4d7a303577eb0b8736559901182bb9e7c3272` |
| MD5 | `cb3a4f5747f932d7e35752b2f5af5b4a` |
| BLAKE2b-256 | `5b422bfe22f75a692b67553ac551b160d3995f79428b1a727aefce66909145b6` |
File details
Details for the file leo_prompt_optimizer-0.2.1-py3-none-any.whl.
File metadata
- Download URL: leo_prompt_optimizer-0.2.1-py3-none-any.whl
- Upload date:
- Size: 10.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `8d1389979fb3f417fc6eb707cc2c08af0146e222dd3c46f4daf19cc1f6d60298` |
| MD5 | `98140f895a6d84ac3a3bd19241e472c9` |
| BLAKE2b-256 | `3e03666462feb946d1a0a0bcf5b9da1ec04511f5f417ee93e771ceb047d01262` |