# llm-markdown

Turn Python functions into typed LLM calls, using docstrings as prompts. Write a docstring, add a type hint, done.
```python
from llm_markdown import prompt
from llm_markdown.providers import OpenAIProvider

provider = OpenAIProvider(api_key="sk-...", model="gpt-4o-mini")

@prompt(provider)
def summarize(text: str) -> str:
    """Summarize this text in 2 sentences: {text}"""

result = summarize("Long article text here...")
# "The article discusses... In conclusion, ..."
```
## How it works

Three rules:

- The docstring is the prompt. Use `{param}` to interpolate function arguments.
- The return type controls the output format. `-> str` gives plain text; `-> MyModel` gives validated structured output.
- `Image` parameters are attached as vision inputs automatically -- they don't appear in the docstring.
That's it. No configuration flags, no prompt templates, no output parsers. The function signature is the configuration.
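The core idea behind these rules can be sketched with nothing but the standard library. This is an illustrative toy, not the library's actual implementation; the fake `call_llm` lambda stands in for a real provider call:

```python
import inspect

def toy_prompt(call_llm):
    """Toy decorator: bind call arguments to the function signature,
    format them into the docstring, and send that as the prompt."""
    def decorator(fn):
        sig = inspect.signature(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            prompt_text = fn.__doc__.format(**bound.arguments)
            return call_llm(prompt_text)
        return wrapper
    return decorator

# A fake "LLM" that just echoes the prompt it received:
@toy_prompt(lambda p: f"LLM saw: {p}")
def summarize(text: str) -> str:
    """Summarize this text in 2 sentences: {text}"""

print(summarize("hello"))
# LLM saw: Summarize this text in 2 sentences: hello
```

The real decorator also handles return-type coercion, images, and streaming, but the signature-to-prompt binding above is the essence of rule one.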
## Installation

```shell
pip install llm-markdown[openai]
```

Other extras: `langfuse` (observability), `all` (everything), `test` (pytest suite).
## Structured output

Return a Pydantic model and the response is validated automatically:

```python
from pydantic import BaseModel

class ReviewAnalysis(BaseModel):
    sentiment: str
    rating: float
    key_points: list[str]

@prompt(provider)
def analyze_review(text: str) -> ReviewAnalysis:
    """Analyze this movie review:
    - Overall sentiment (positive/negative/neutral)
    - Rating on a scale of 1.0 to 5.0
    - Key points

    Review: {text}"""

result = analyze_review("A groundbreaking sci-fi film...")
result.sentiment   # "positive"
result.rating      # 4.5
result.key_points  # ["groundbreaking visual effects", ...]
```
The library generates a JSON schema from the Pydantic model and uses the provider's native structured output (e.g. OpenAI's `response_format`). If the provider doesn't support it, it falls back to JSON prompting automatically.
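The fallback path can be illustrated without any dependencies. A hedged sketch, assuming the fallback simply appends the schema to the prompt and extracts JSON from the reply (the library's real logic may differ):

```python
import json

def json_fallback_prompt(user_prompt: str, schema: dict) -> str:
    """Append the JSON schema to the prompt so a model without
    native structured output still replies in the right shape."""
    return (
        f"{user_prompt}\n\n"
        "Respond ONLY with JSON matching this schema:\n"
        f"{json.dumps(schema)}"
    )

def parse_response(raw: str) -> dict:
    """Extract the first JSON object from a reply that may wrap
    it in extra text or code fences."""
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in response")
    return json.loads(raw[start : end + 1])

reply = 'Sure! ```json\n{"sentiment": "positive", "rating": 4.5}\n```'
parse_response(reply)  # {'sentiment': 'positive', 'rating': 4.5}
```

With native structured output none of this is needed; the schema goes straight into the API call and the provider guarantees valid JSON.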
`List[...]` and `Dict[...]` work the same way:

```python
from typing import List

@prompt(provider)
def list_steps(task: str) -> List[str]:
    """List the steps to complete this task: {task}"""

list_steps("bake a cake")
# ["Preheat oven to 350F", "Mix dry ingredients", ...]
```
## Images

Image parameters are detected by type and attached to the API call as vision inputs. The docstring is the text part of the prompt:

```python
from llm_markdown import prompt, Image

@prompt(provider)
def answer_about_image(image: Image, question: str) -> str:
    """Answer this question about the image: {question}"""

answer_about_image(
    image=Image("https://example.com/chart.png"),
    question="What trend does this chart show?",
)
```

`Image` accepts URLs, base64 strings, or data URIs. Use `List[Image]` for multiple images.
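Detection by type annotation is straightforward to sketch with the standard library. An illustrative toy with a stub `Image` class, not the library's code:

```python
from typing import get_type_hints

class Image:
    """Stub standing in for llm_markdown's Image type."""
    def __init__(self, source: str):
        self.source = source

def split_image_args(fn, /, **kwargs):
    """Partition keyword arguments into vision inputs (annotated
    as Image) and plain text arguments for the docstring."""
    hints = get_type_hints(fn)
    images = {k: v for k, v in kwargs.items() if hints.get(k) is Image}
    text_args = {k: v for k, v in kwargs.items() if k not in images}
    return images, text_args

def answer_about_image(image: Image, question: str) -> str:
    """Answer this question about the image: {question}"""

imgs, texts = split_image_args(
    answer_about_image,
    image=Image("https://example.com/chart.png"),
    question="What trend does this chart show?",
)
list(imgs)   # ["image"]
list(texts)  # ["question"]
```

This is why image parameters never need `{image}` placeholders in the docstring: they are routed to the vision part of the request before the prompt text is formatted.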
## Streaming

```python
@prompt(provider, stream=True)
def tell_story(topic: str) -> str:
    """Tell a short story about {topic}."""

for chunk in tell_story("a robot learning to paint"):
    print(chunk, end="", flush=True)
```
## Async

Async functions work the same way:

```python
@prompt(provider)
async def analyze(text: str) -> str:
    """Analyze: {text}"""

result = await analyze("some text")
```
## Observability with Langfuse

Wrap any provider with `LangfuseWrapper` to log every call:

```python
from llm_markdown.providers import OpenAIProvider, LangfuseWrapper

provider = LangfuseWrapper(
    provider=OpenAIProvider(api_key="sk-..."),
    secret_key="sk-lf-...",
    public_key="pk-lf-...",
    host="https://cloud.langfuse.com",
)

@prompt(
    provider,
    langfuse_metadata={"category": "reviews", "use_case": "sentiment"},
)
def analyze(text: str) -> str:
    """Analyze: {text}"""
```
## Custom providers

Subclass `LLMProvider` to use any LLM backend:

```python
from llm_markdown.providers import LLMProvider

class MyProvider(LLMProvider):
    def complete(self, messages, **kwargs):
        ...  # return response string

    async def complete_async(self, messages, **kwargs):
        ...  # return response string

    # Optional -- enables native structured output.
    # Without this, the decorator falls back to JSON prompting.
    def complete_structured(self, messages, schema):
        ...  # return parsed dict
```
`OpenAIProvider` handles all OpenAI model families (GPT-4o, GPT-5, o1/o3/o4) and auto-detects the correct token parameter.
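One way such auto-detection can work (an illustrative guess, not necessarily this provider's exact logic): OpenAI's reasoning-era models take `max_completion_tokens` where older chat models take `max_tokens`, so the parameter name can be chosen from the model id prefix:

```python
def token_param(model: str) -> str:
    """Pick the token-limit parameter name for an OpenAI model.
    Reasoning-era models (o1/o3/o4, gpt-5) expect
    max_completion_tokens; older chat models expect max_tokens."""
    if model.startswith(("o1", "o3", "o4", "gpt-5")):
        return "max_completion_tokens"
    return "max_tokens"

token_param("gpt-4o-mini")  # "max_tokens"
token_param("o3-mini")      # "max_completion_tokens"
```

A custom provider targeting another backend can ignore this entirely and pass whatever parameters its API expects through `**kwargs`.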
## Testing

```shell
pytest                                       # unit tests (no API key)
OPENAI_API_KEY=sk-... pytest -m integration  # real API tests
```