Dotpromptz is a language-neutral executable prompt template file format for Generative AI.
# Dotprompt: Executable GenAI Prompt Templates
Dotprompt is an executable prompt template file format for Generative AI. It is designed to be agnostic to programming language and model provider to allow for maximum flexibility in usage. Dotprompt extends the popular Handlebars templating language with GenAI-specific features.
## What's an executable prompt template?
An executable prompt template is a file that contains not only the text of a prompt but also metadata and instructions for how to use that prompt with a generative AI model. Here's what makes Dotprompt files executable:
- **Metadata Inclusion:** Dotprompt files include metadata about model configuration, input requirements, and expected output format. This information is typically stored in a YAML frontmatter section at the beginning of the file.
- **Self-Contained Entity:** Because a Dotprompt file contains all the necessary information to execute a prompt, it can be treated as a self-contained entity. This means you can "run" a Dotprompt file directly, without needing additional configuration or setup in your code.
- **Model Configuration:** The file specifies which model to use and how to configure it (e.g., temperature, max tokens).
- **Input Schema:** It defines the structure of the input data expected by the prompt, allowing for validation and type-checking.
- **Output Format:** The file can specify the expected format of the model's output, which can be used for parsing and validation.
- **Templating:** The prompt text itself uses Handlebars syntax, allowing for dynamic content insertion based on input variables.
This combination of features makes it possible to treat a Dotprompt file as an executable unit, streamlining the process of working with AI models and ensuring consistency across different uses of the same prompt.
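To make that split concrete, here is a minimal, stdlib-only sketch of how such a file decomposes into a frontmatter block and a template body. This is a toy illustration, not the library's actual parser; the `split_prompt` and `render` helpers are hypothetical, and `render` only handles simple `{{name}}` substitution rather than full Handlebars.

```python
import re

def split_prompt(source: str):
    """Split a .prompt source into (frontmatter_text, template_text).

    The frontmatter is the block between the first pair of '---' lines;
    everything after the second '---' is the template.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", source, re.DOTALL)
    if match is None:
        return "", source  # no frontmatter: the whole file is the template
    return match.group(1), match.group(2)

def render(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders (a toy stand-in for Handlebars)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables.get(m.group(1), "")), template)

source = """---
adapter: google
config:
  model: gemini-2.5-pro
---
Extract the requested information from: {{text}}
"""

frontmatter, template = split_prompt(source)
print(render(template, {"text": "Jane Doe, 34, engineer"}))
# → Extract the requested information from: Jane Doe, 34, engineer
```

A real runner would additionally parse the frontmatter as YAML and validate the input against the declared schema before rendering.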
## Example `.prompt` file
Here's an example of a Dotprompt file that extracts structured data from provided text:
```
---
adapter: google
config:
  model: gemini-2.5-pro
input:
  schema:
    text: string
output:
  format: json
  schema:
    name?: string, the full name of the person
    age?: number, the age of the person
    occupation?: string, the person's occupation
---
Extract the requested information from the given text. If a piece of information is not
present, omit that field from the output. Text:

{{text}}
```
This Dotprompt file:
- Selects the `google` adapter and specifies the `gemini-2.5-pro` model via `config.model`.
- Defines an input schema expecting a `text` string.
- Specifies that the output should be in JSON format.
- Provides a schema for the expected output, including fields for name, age, and occupation.
- Uses Handlebars syntax (`{{text}}`) to insert the input text into the prompt.
Note: The top-level `model` field (e.g. `model: googleai/gemini-2.5-pro`) is deprecated. Use `adapter` for adapter/provider selection and `config.model` for the LLM model name.
When executed, this prompt would take a text input, analyze it using the specified AI model, and return a structured JSON object with the extracted information.
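The compact schema lines above (e.g. `name?: string, the full name of the person`) follow a shorthand in which a trailing `?` marks a field as optional and the text after the comma is a human-readable description. A toy parser for one such line, as a sketch (the `parse_field` helper is hypothetical, not part of the library):

```python
def parse_field(line: str) -> dict:
    """Parse one shorthand schema line like 'name?: string, the full name'."""
    name_part, _, rest = line.partition(":")
    type_part, _, description = rest.partition(",")
    name = name_part.strip()
    return {
        "name": name.rstrip("?"),
        "type": type_part.strip(),
        "optional": name.endswith("?"),   # trailing '?' marks the field optional
        "description": description.strip(),  # empty if no comma was present
    }

print(parse_field("name?: string, the full name of the person"))
# → {'name': 'name', 'type': 'string', 'optional': True, 'description': 'the full name of the person'}
```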
## LLM Adapters & CLI (Fork Extension)
This fork adds LLM API adapters and a command-line tool so you can run `.prompt` files directly against OpenAI, Anthropic, Google Gemini, or any third-party compatible service (all adapters support a custom `base_url`).
### Quick Start (CLI)
```shell
# Install (all adapters included by default)
uv add "dotpromptz-py"

# Set API key
export OPENAI_API_KEY="sk-..."

# Run a prompt (single mode; adapter auto-inferred from config.model)
runprompt my_prompt.prompt input.json

# Dry-run (render only, no API call)
runprompt my_prompt.prompt input.json --dry-run

# Batch mode (auto-detected from list input)
runprompt my_prompt.prompt batch_inputs.jsonl
```
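The single-versus-batch distinction above hinges on the shape of the input file: a JSON array or a `.jsonl` file yields one request per element or line, while a single JSON object yields one request. A rough stdlib sketch of that detection (the real CLI's logic may differ; `load_inputs` is a hypothetical helper):

```python
import json

def load_inputs(content: str, filename: str) -> list:
    """Return a list of input dicts; a single JSON object becomes a 1-element batch."""
    if filename.endswith(".jsonl"):
        # JSONL: one JSON object per non-empty line
        return [json.loads(line) for line in content.splitlines() if line.strip()]
    data = json.loads(content)
    # A JSON array triggers batch mode; a single object is wrapped
    return data if isinstance(data, list) else [data]

print(load_inputs('{"topic": "AI"}', "input.json"))
# → [{'topic': 'AI'}]
print(load_inputs('{"topic": "AI"}\n{"topic": "ML"}', "batch_inputs.jsonl"))
# → [{'topic': 'AI'}, {'topic': 'ML'}]
```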
Adapter and model are configured in the .prompt file frontmatter:
```
---
adapter: openai   # optional; auto-inferred from config.model name
config:
  model: gpt-4o
runtime:
  max_workers: 10       # Concurrent workers for batch processing
  output_dir: ./results # Optional directory for output files
  jsonl: true           # Output in JSONL format
input:
  schema:
    topic: string
---
Tell me about {{topic}}.
```
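The `runtime.max_workers` setting bounds how many requests are in flight at once during batch processing. One way to sketch that behavior with `asyncio` (an illustration only; `run_batch` and `fake_call` are hypothetical stand-ins, not the fork's implementation):

```python
import asyncio

async def run_batch(inputs, worker, max_workers: int = 10):
    """Run `worker` over each input with at most `max_workers` concurrent calls,
    mirroring the runtime.max_workers setting."""
    semaphore = asyncio.Semaphore(max_workers)

    async def bounded(item):
        async with semaphore:  # blocks when max_workers calls are already running
            return await worker(item)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(i) for i in inputs))

async def fake_call(item):  # stand-in for a real model API call
    await asyncio.sleep(0)
    return f"result for {item['topic']}"

results = asyncio.run(
    run_batch([{"topic": "AI"}, {"topic": "ML"}], fake_call, max_workers=2))
print(results)
# → ['result for AI', 'result for ML']
```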
For third-party compatible services (e.g. DeepSeek, Ollama):
```
---
adapter:
  name: openai
  base_url: https://api.deepseek.com
config:
  model: deepseek-chat
---
Tell me about {{topic}}.
```
### Quick Start (Python)
```python
import asyncio

from dotpromptz import Dotprompt
from dotpromptz.adapters import get_adapter
from dotpromptz.typing import DataArgument

# The .prompt source to execute (could also be read from a file)
source = """---
adapter: openai
config:
  model: gpt-4o
---
Tell me about {{topic}}.
"""

async def main():
    dp = Dotprompt()
    rendered = await dp.render(source, data=DataArgument(input={"topic": "AI"}))
    adapter = get_adapter("openai")
    response = await adapter.generate(rendered)
    print(response.text)

asyncio.run(main())
```
### Supported Adapters
| Adapter | Env Var |
|---|---|
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| Google Gemini | `GOOGLE_API_KEY` |

All adapters and their SDK dependencies (`openai`, `anthropic`, `google-genai`) are included as core dependencies; no extras needed.

All adapters support `base_url` for third-party compatible endpoints (e.g. DeepSeek, vLLM, Ollama). Configure it via the frontmatter `adapter.base_url` or env vars (`OPENAI_BASE_URL` / `ANTHROPIC_BASE_URL` / `GOOGLE_BASE_URL`).
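Under one plausible assumption about precedence (the frontmatter value wins over the env var, which wins over the SDK default), the resolution can be sketched as follows. `resolve_base_url` is a hypothetical helper; check USAGE.md for the fork's actual precedence rules.

```python
import os

# Env var names per adapter, as listed above
ENV_VARS = {
    "openai": "OPENAI_BASE_URL",
    "anthropic": "ANTHROPIC_BASE_URL",
    "google": "GOOGLE_BASE_URL",
}

def resolve_base_url(adapter_name: str, frontmatter_base_url=None):
    """Pick a base_url: frontmatter first, then the env var, then None (SDK default)."""
    if frontmatter_base_url:
        return frontmatter_base_url
    return os.environ.get(ENV_VARS.get(adapter_name, "")) or None

print(resolve_base_url("openai", "https://api.deepseek.com"))
# → https://api.deepseek.com
```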
## Image Generation (Google Gemini)
Dotprompt supports native image generation via Gemini's `generateContent` API with `response_modalities=["IMAGE"]`. To use it, set `output.format: image` and `output.save_path` in the frontmatter.
Text-to-image example (`draw_cat.prompt`):
```
---
adapter: google
config:
  model: gemini-2.0-flash-exp
output:
  format: image
  save_path: output/cat.png
input:
  schema:
    style: string
---
Draw a {{style}} cat sitting on a windowsill.
```

```shell
runprompt draw_cat.prompt input.json
# → Image saved to: output/cat.png
```
Image-to-image example (using the `{{media}}` helper for input):
```
---
adapter: google
config:
  model: gemini-2.0-flash-exp
output:
  format: image
  save_path: output/edited.png
input:
  schema:
    image_url: string
    instruction: string
---
{{media url=image_url}}
{{instruction}}
```

```shell
runprompt edit_image.prompt input.json
```
Notes:
- `output.save_path` is required when `format` is `image`. Omitting it raises a validation error at parse time.
- `save_path` is validated against path traversal attacks (e.g. `../../etc/passwd`); only paths within the current working directory are allowed.
- Currently only the Google adapter supports image generation. Only the first generated image is saved.
- The parent directory of `save_path` is created automatically if it does not exist.
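The traversal check and auto-created parent directory described in these notes can be sketched with `pathlib` as follows. This is an illustration of the technique; `validate_save_path` is a hypothetical helper, and the library's exact error type and message may differ.

```python
from pathlib import Path

def validate_save_path(save_path: str) -> Path:
    """Reject save_path values that escape the current working directory."""
    cwd = Path.cwd().resolve()
    target = (cwd / save_path).resolve()
    try:
        # relative_to raises ValueError if target is outside cwd
        target.relative_to(cwd)
    except ValueError:
        raise ValueError(f"save_path escapes the working directory: {save_path}")
    # Auto-create the parent directory, as the notes describe
    target.parent.mkdir(parents=True, exist_ok=True)
    return target

validate_save_path("output/cat.png")        # OK: stays inside cwd
try:
    validate_save_path("../../etc/passwd")  # rejected: traversal attempt
except ValueError as e:
    print(e)
```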
Detailed usage tutorial: see USAGE.md.
## File details

Details for the file `dotpromptz_py-1.2.0.tar.gz`.

File metadata:
- Download URL: dotpromptz_py-1.2.0.tar.gz
- Upload date:
- Size: 175.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `df38a6cd91795d184abe0e978bc0619ab4da2885f8e87008c179db84ca9eda90` |
| MD5 | `e69c198629c1e90d8d026584762beac8` |
| BLAKE2b-256 | `59a6a46c36186e810ee48ff1492512dacecb243d4c515d91242dff54bc77bd61` |
### Provenance

The following attestation bundles were made for `dotpromptz_py-1.2.0.tar.gz`:

Publisher: `publish-pypi.yml` on `my-three-kingdoms/dotpromptz`

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dotpromptz_py-1.2.0.tar.gz
- Subject digest: `df38a6cd91795d184abe0e978bc0619ab4da2885f8e87008c179db84ca9eda90`
- Sigstore transparency entry: 1004685738
- Sigstore integration time:
- Permalink: my-three-kingdoms/dotpromptz@5f2e6d93d56f1fef53b5eed34c52c12d06f9baef
- Branch / Tag: refs/heads/main
- Owner: https://github.com/my-three-kingdoms
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@5f2e6d93d56f1fef53b5eed34c52c12d06f9baef
- Trigger Event: workflow_dispatch
## File details

Details for the file `dotpromptz_py-1.2.0-py3-none-any.whl`.

File metadata:
- Download URL: dotpromptz_py-1.2.0-py3-none-any.whl
- Upload date:
- Size: 78.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9afe063eb0ea3feb5004851987ff4ccd2d186c96c4577da301a7c8e37edeb551` |
| MD5 | `098e50e212e93330dd31cc4889423a4e` |
| BLAKE2b-256 | `68d50c62f5babea5866d6c1a65bc67916d30dc6e58b24f5754b7ac917769803e` |
### Provenance

The following attestation bundles were made for `dotpromptz_py-1.2.0-py3-none-any.whl`:

Publisher: `publish-pypi.yml` on `my-three-kingdoms/dotpromptz`

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dotpromptz_py-1.2.0-py3-none-any.whl
- Subject digest: `9afe063eb0ea3feb5004851987ff4ccd2d186c96c4577da301a7c8e37edeb551`
- Sigstore transparency entry: 1004685746
- Sigstore integration time:
- Permalink: my-three-kingdoms/dotpromptz@5f2e6d93d56f1fef53b5eed34c52c12d06f9baef
- Branch / Tag: refs/heads/main
- Owner: https://github.com/my-three-kingdoms
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@5f2e6d93d56f1fef53b5eed34c52c12d06f9baef
- Trigger Event: workflow_dispatch