Dakora
A Python library for managing and executing LLM prompts with type-safe inputs, versioning, and an interactive web playground. Execute templates against 100+ LLM providers with built-in cost tracking.
Try it Now - No Installation Required
playground.dakora.io - Experience Dakora's interactive playground directly in your browser. Edit templates, test inputs, and see instant results with the exact same interface that ships with the Python package.
Use Case
from dakora import Vault, LocalRegistry
vault = Vault(LocalRegistry("./prompts"))
# Execute against any LLM provider
result = vault.get("summarizer").execute(
    model="gpt-4",
    input_text="Your article here..."
)
print(result.output) # The LLM's response
print(f"${result.cost_usd}") # Track costs automatically
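Because every execution returns its cost, spend is easy to aggregate across a batch of calls. A minimal sketch using only the calls shown above, assuming a summarizer template exists in ./prompts and OPENAI_API_KEY is set:
from dakora import Vault, LocalRegistry

vault = Vault(LocalRegistry("./prompts"))
template = vault.get("summarizer")

# Run the same template over several inputs and total the spend
articles = ["First article...", "Second article...", "Third article..."]
total_cost = 0.0
for article in articles:
    result = template.execute(model="gpt-4", input_text=article)
    total_cost += result.cost_usd
    print(result.output[:80])
print(f"Total spend: ${total_cost:.4f}")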
Multiple ways to initialize:
# Direct registry injection (recommended)
vault = Vault(LocalRegistry("./prompts"))
# Azure Blob Storage
from dakora import AzureRegistry
vault = Vault(AzureRegistry(
    container="prompts",
    account_url="https://myaccount.blob.core.windows.net"
))
# Config file (for CLI tools)
vault = Vault.from_config("dakora.yaml")
# Legacy shorthand
vault = Vault(prompt_dir="./prompts")
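One pattern this enables is picking the registry at startup, so the same code runs against a local prompts directory in development and Azure Blob Storage in production. A small sketch combining the options above; the DAKORA_AZURE_URL variable name is just an illustration, and AzureRegistry requires pip install dakora[azure]:
import os
from dakora import Vault, LocalRegistry, AzureRegistry

# Use Azure Blob Storage when an account URL is configured, otherwise local files
azure_url = os.getenv("DAKORA_AZURE_URL")  # hypothetical variable name
if azure_url:
    vault = Vault(AzureRegistry(container="prompts", account_url=azure_url))
else:
    vault = Vault(LocalRegistry("./prompts"))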
Or from the command line:
dakora run summarizer --model gpt-4 --input-text "Article..."
Features
- Live Web Playground - Try online without installing anything!
- Local Playground - Same modern React UI included with pip install
- LLM Execution - Run templates against 100+ LLM providers (OpenAI, Anthropic, Google, etc.)
- Type-safe prompt templates with validation and coercion
- File-based template management with YAML definitions
- Hot-reload support for development
- Jinja2 templating with custom filters
- Semantic versioning for templates
- Optional execution logging to SQLite with cost tracking
- CLI interface for template management and execution
- Thread-safe caching for production use
- Cost & performance tracking - Monitor tokens, latency, and costs
Installation
pip install dakora
For the interactive playground:
- PyPI releases include a pre-built UI - just run dakora playground
- For development installs (git clone), Node.js 18+ is required
- The UI builds automatically from source on first run if not present
Or for development:
git clone https://github.com/bogdan-pistol/dakora.git
cd dakora
uv sync
source .venv/bin/activate
Quick Start
1. Initialize a project
dakora init
This creates:
- dakora.yaml - Configuration file
- prompts/ - Directory for template files
- prompts/summarizer.yaml - Example template
2. Create a template
Create prompts/greeting.yaml:
id: greeting
version: 1.0.0
description: A personalized greeting template
template: |
  Hello {{ name }}!
  {% if age %}You are {{ age }} years old.{% endif %}
  {{ message | default("Have a great day!") }}
inputs:
  name:
    type: string
    required: true
  age:
    type: number
    required: false
  message:
    type: string
    required: false
    default: "Welcome to Dakora!"
3. Use in Python
from dakora import Vault
# Initialize vault
vault = Vault("dakora.yaml")
# Get and render template
template = vault.get("greeting")
result = template.render(name="Alice", age=25)
print(result)
# Output:
# Hello Alice!
# You are 25 years old.
# Welcome to Dakora!
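Optional inputs can be omitted: declared defaults fill in missing values and the default(...) filter covers anything still empty. Roughly, with the greeting template above:
# Omit the optional inputs; "message" falls back to its declared default
result = template.render(name="Bob")
print(result)
# Expected output (approximately, depending on whitespace handling):
# Hello Bob!
# Welcome to Dakora!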
4. Interactive Playground
Try Online - No Installation Required
Visit playground.dakora.io to experience the playground instantly in your browser with example templates.
Or Run Locally
Launch the same web-based playground locally (included with pip install):
dakora playground
This automatically:
- Builds the modern React UI (first run only)
- Starts the server at http://localhost:3000
- Opens your browser to the playground
Features:
- Identical experience online and locally
- Mobile-friendly design that works on all screen sizes
- Real-time template editing and preview
- Test templates with different inputs
- Example templates for inspiration
- Modern UI built with shadcn/ui components
Local Options:
dakora playground --port 8080 # Custom port
dakora playground --no-browser # Don't open browser
dakora playground --no-build # Skip UI build
dakora playground --demo # Run in demo mode (like the web version)
5. Execute Templates with LLMs
Dakora can execute templates against real LLM providers (OpenAI, Anthropic, Google, etc.) using the integrated LiteLLM support.
Setting API Keys
Get an API key from your LLM provider (you will need an account with that provider).
Set your API key as an environment variable:
Linux/MacOS
export OPENAI_API_KEY="your_key_here"
export ANTHROPIC_API_KEY="your_key_here"
export GOOGLE_API_KEY="your_key_here"
Windows
setx OPENAI_API_KEY "your_api_key_here"
setx ANTHROPIC_API_KEY "your_key_here"
setx GOOGLE_API_KEY "your_key_here"
Cross-platform
Alternatively, create a .env file in your project root:
OPENAI_API_KEY="your_key_here"
GOOGLE_API_KEY="your_key_here"
ANTHROPIC_API_KEY="your_key_here"
Load and use in your Python code:
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Access variables using os.environ or os.getenv()
api_key = os.getenv("OPENAI_API_KEY")
# The key is now in the environment, where Dakora's LLM execution (via LiteLLM) picks it up
Warning
Never commit your API key to version control. Add .env to your .gitignore with:
echo ".env" >> .gitignore
Execute from Python
from dakora import Vault
vault = Vault("dakora.yaml")
template = vault.get("summarizer")
# Execute with gpt-4
result = template.execute(
    model="gpt-4",
    input_text="Your article content here..."
)
print(result.output)
print(f"Cost: ${result.cost_usd:.4f}")
print(f"Tokens: {result.tokens_in} โ {result.tokens_out}")
Execute from CLI
# Basic execution
dakora run summarizer --model gpt-4 --input-text "Article to summarize..."
# With LLM parameters
dakora run summarizer --model gpt-4 \
--input-text "Article..." \
--temperature 0.7 \
--max-tokens 100
# JSON output for scripting
dakora run summarizer --model gpt-4 \
--input-text "Article..." \
--json
# Quiet mode (only LLM response)
dakora run summarizer --model gpt-4 \
--input-text "Article..." \
--quiet
Example Output:
╭──────────────────────────────╮
│ Model: gpt-4 (openai)        │
│ Cost: $0.0045 USD            │
│ Latency: 1,234 ms            │
│ Tokens: 150 → 80             │
╰──────────────────────────────╯
The article discusses the recent advances in...
Compare Multiple Models
Compare the same prompt across different models to find the best one for your use case.
Prerequisites: You need API keys for each provider you want to compare. See the Setting API Keys section above for setup instructions.
From Python:
from dakora import Vault
vault = Vault("dakora.yaml")
template = vault.get("summarizer")
# Compare across multiple models in parallel
# Note: You need OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY set
comparison = template.compare(
    models=["gpt-4", "claude-3-opus", "gemini-pro"],
    input_text="Your article content here...",
    temperature=0.7
)
# View aggregate stats
print(f"Total Cost: ${comparison.total_cost_usd:.4f}")
print(f"Successful: {comparison.successful_count}/{len(comparison.results)}")
print(f"Total Tokens: {comparison.total_tokens_in} โ {comparison.total_tokens_out}")
# Compare individual results
for result in comparison.results:
    if result.error:
        print(f"✗ {result.model}: {result.error}")
    else:
        print(f"✓ {result.model} (${result.cost_usd:.4f}, {result.latency_ms}ms)")
        print(f"  {result.output[:100]}...")
Example Output:
Total Cost: $0.0890
Successful: 3/3
Total Tokens: 450 → 180
✓ gpt-4 ($0.0450, 1234ms)
  The article discusses recent advances in artificial intelligence and their impact on...
✓ claude-3-opus ($0.0320, 987ms)
  Recent AI developments have transformed multiple industries. The article examines...
✓ gemini-pro ($0.0120, 1567ms)
  This piece explores cutting-edge AI technologies and analyzes their effects across...
Key Features:
- Parallel execution - All models run simultaneously for speed
- Handles failures gracefully - One model failing doesn't stop others (e.g., missing API key)
- Rich comparison data - Costs, tokens, latency for each model
- Order preserved - Results match input model order
- All executions logged - Each execution tracked separately
Why Compare Models?
- Find the most cost-effective model for your use case
- Test quality differences between providers
- Evaluate latency trade-offs
- Build fallback strategies for production
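The last point - fallback strategies - follows directly from the comparison results: order is preserved and failures are reported per model, so you can take the first model that succeeded. A minimal sketch (note that compare() still runs every model, so this suits evaluation more than a cost-sensitive production path):
# Prefer gpt-4, fall back to the next model that succeeded
comparison = template.compare(
    models=["gpt-4", "claude-3-opus", "gemini-pro"],
    input_text="Your article content here...",
)
best = next((r for r in comparison.results if not r.error), None)
if best:
    print(f"Using {best.model}: {best.output[:100]}...")
else:
    print("All models failed")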
Supported Models
Dakora supports 100+ LLM providers through LiteLLM:
- OpenAI: gpt-4, gpt-4-turbo, gpt-5-nano, gpt-3.5-turbo
- Anthropic: claude-3-opus, claude-3-sonnet, claude-3-haiku
- Google: gemini-pro, gemini-1.5-pro
- Local: ollama/llama3, ollama/mistral
- And many more...
See LiteLLM docs for the full list.
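Local models go through the same interface. A sketch assuming an Ollama server is running on this machine and the llama3 model has already been pulled:
from dakora import Vault

vault = Vault("dakora.yaml")

# Requires a local Ollama server (ollama serve) with llama3 pulled
result = vault.get("summarizer").execute(
    model="ollama/llama3",
    input_text="Your article content here...",
)
print(result.output)
print(f"Cost: ${result.cost_usd}")  # local models generally report little or no cost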
6. CLI Usage
# List all templates
dakora list
# Get template content
dakora get greeting
# Execute a template
dakora run summarizer --model gpt-4 --input-text "..."
# Bump version
dakora bump greeting --minor
# Watch for changes
dakora watch
Verify API Keys
Check which API keys are configured:
# Check all providers
dakora config
# Check specific provider
dakora config --provider openai
Template Format
Templates are defined in YAML files with the following structure:
id: unique_template_id # Required: Template identifier
version: 1.0.0 # Required: Semantic version
description: Template purpose # Optional: Human-readable description
template: |                   # Required: Jinja2 template string
  Your template content here
  {{ variable_name }}
inputs:                       # Optional: Input specifications
  variable_name:
    type: string              # string|number|boolean|array<string>|object
    required: true            # Default: true
    default: "default value"  # Optional: Default value
metadata:                     # Optional: Custom metadata
  tags: ["tag1", "tag2"]
  author: "Your Name"
Supported Input Types
- string - Text values
- number - Numeric values (int/float)
- boolean - True/false values
- array<string> - List of strings
- object - Dictionary/JSON object
Built-in Jinja2 Filters
- default(value) - Provide fallback for empty values
- yaml - Convert objects to YAML format
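To see the structured input types and the yaml filter together, the sketch below writes a small template into the prompt directory and renders it from Python; the report template and its fields are made up for illustration, and join is a standard Jinja2 filter:
from pathlib import Path
from dakora import Vault

# A hypothetical template exercising array<string>, object inputs and the yaml filter
Path("prompts").mkdir(exist_ok=True)
Path("prompts/report.yaml").write_text("""\
id: report
version: 1.0.0
template: |
  Topics: {{ topics | join(", ") }}
  Settings:
  {{ settings | yaml }}
inputs:
  topics:
    type: array<string>
    required: true
  settings:
    type: object
    required: true
""")

vault = Vault(prompt_dir="./prompts")
print(vault.get("report").render(
    topics=["pricing", "latency"],
    settings={"model": "gpt-4", "max_tokens": 100},
))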
Configuration
Local Storage (Default)
dakora.yaml structure for local file storage:
registry: local           # Registry type
prompt_dir: ./prompts     # Path to templates directory
logging:                  # Optional: Execution logging
  enabled: true
  backend: sqlite
  db_path: ./dakora.db
Azure Blob Storage
For cloud-based template storage with Azure Blob Storage:
Install Azure dependencies:
pip install dakora[azure]
Python usage:
from dakora import Vault, AzureRegistry
# Option 1: Direct initialization with DefaultAzureCredential
vault = Vault(AzureRegistry(
    container="prompts",
    account_url="https://myaccount.blob.core.windows.net"
    # Uses DefaultAzureCredential (Azure CLI, Managed Identity, etc.)
))
# Option 2: With connection string
vault = Vault(AzureRegistry(
    container="prompts",
    connection_string="DefaultEndpointsProtocol=https;AccountName=..."
))
# Option 3: From config file
vault = Vault.from_config("dakora.yaml")
# Use normally - same API as local storage
template = vault.get("greeting")
result = template.render(name="Alice")
Configuration file (dakora.yaml):
registry: azure
azure_container: prompts # Azure Blob container name
azure_account_url: https://myaccount.blob.core.windows.net
# Optional: Connection string (alternative to account_url)
# azure_connection_string: "DefaultEndpointsProtocol=https;..."
# Optional: Custom prefix for blob paths
# azure_prefix: prompts/
logging:                       # Optional: Same as local
  enabled: true
  backend: sqlite
  db_path: ./dakora.db
Authentication:
AzureRegistry supports multiple authentication methods:
- DefaultAzureCredential (Recommended) - Automatically tries multiple methods:
  - Azure CLI (az login)
  - Managed Identity (when running on Azure)
  - Environment variables
  - Visual Studio Code
  - And more...
- Connection String - Direct connection string with account key:
  vault = Vault(AzureRegistry(
      container="prompts",
      connection_string=os.environ["AZURE_STORAGE_CONNECTION_STRING"]
  ))
Environment Variables:
# For DefaultAzureCredential (recommended)
az login # Authenticate via Azure CLI
# Or use connection string
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..."
Features:
- Same API as local storage - just swap the registry
- Thread-safe with caching
- List, load, and save templates to Azure Blob Storage
- Works with all Dakora features (CLI, playground, execution)
- Secure authentication via Azure credentials
CLI Usage:
# All CLI commands work with Azure registry
dakora list # Lists templates from Azure
dakora get greeting # Loads from Azure Blob Storage
dakora run summarizer --model gpt-4 --input-text "..."
# Playground works too
dakora playground # Uses Azure registry from config
Advanced Usage
FastAPI + OpenAI Integration
Dakora works great with web APIs. Here's a FastAPI example using OpenAI's latest Responses API and GPT-5:
from fastapi import FastAPI
from dakora import Vault
from openai import OpenAI
app = FastAPI()
vault = Vault("dakora.yaml")
client = OpenAI()
@app.post("/chat")
async def chat_endpoint(message: str, template_id: str):
    template = vault.get(template_id)
    # Use template's run method with new Responses API
    result = template.run(
        lambda prompt: client.responses.create(
            model="gpt-5",
            reasoning={"effort": "medium"},
            input=prompt
        ).output_text,
        message=message
    )
    return {"response": result}
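Since message and template_id are plain string parameters, FastAPI reads them from the query string. A quick way to exercise the endpoint with FastAPI's TestClient, assuming the referenced template declares a message input and OPENAI_API_KEY is set:
from fastapi.testclient import TestClient

client = TestClient(app)
response = client.post(
    "/chat",
    params={"message": "Summarize this article...", "template_id": "summarizer"},
)
print(response.json())  # {"response": "..."}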
Examples
Multi-Agent Research Assistant
examples/openai-agents/ - Build intelligent research agents with the OpenAI Agents Framework, using Dakora to manage complex multi-agent prompts with type-safe inputs and hot-reload during development.
FastAPI Integration
See examples/fastapi/ for a complete FastAPI application with multiple endpoints, reasoning controls, and error handling.
With Logging
from dakora import Vault
vault = Vault("dakora.yaml")
template = vault.get("my_template")
# Log execution automatically
result = template.run(
    lambda prompt: call_your_llm(prompt),
    input_text="Hello world"
)
Direct Vault Creation
from dakora import Vault
# Skip config file, use prompt directory directly
vault = Vault(prompt_dir="./my_prompts")
Hot Reload in Development
from dakora import Vault
from dakora.watcher import Watcher
vault = Vault("dakora.yaml")
watcher = Watcher("./prompts", on_change=vault.invalidate_cache)
watcher.start()
# Templates will reload automatically when files change
Development
Setup
git clone https://github.com/bogdan-pistol/dakora.git
cd dakora
uv sync
source .venv/bin/activate
Running Tests
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=dakora
# Run smoke tests
uv run python tests/smoke_test.py
Code Quality
# Format code
uv run ruff format
# Lint code
uv run ruff check
# Type checking
uv run mypy dakora
Development Commands
See CLAUDE.md for detailed development guidance.
Contributing
We welcome contributions! Join our community:
- ๐ฌ Discord - Join our Discord server for discussions and support
- ๐ Issues - Report bugs or request features
- ๐ Pull Requests - Submit improvements
Development Setup
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes and add tests
- Run the test suite: uv run pytest
- Submit a pull request
License
This project is licensed under the Apache-2.0 License - see the LICENSE file for details.
Changelog
See CHANGELOG.md for version history.