# llm-markdown
Turn Python functions into typed LLM calls using docstrings as prompts.
Write a function, add a @prompt decorator, and the docstring becomes the LLM prompt. The return type annotation controls everything -- primitive types get plain text completion, Pydantic models get structured output with automatic JSON schema generation and validation.
## Installation

```bash
pip install llm-markdown[all]
```

Or pick only what you need:

```bash
pip install llm-markdown            # core only (pydantic + requests)
pip install llm-markdown[openai]    # + OpenAI provider
pip install llm-markdown[langfuse]  # + Langfuse observability
```

For local development:

```bash
pip install -e ".[all,test]"
```
## Quick start

```python
from llm_markdown import prompt
from llm_markdown.providers import OpenAIProvider

provider = OpenAIProvider(api_key="sk-...", model="gpt-4o-mini")

@prompt(provider)
def summarize(text: str) -> str:
    """Summarize this text in 2 sentences: {text}"""

result = summarize("Long article text here...")
print(result)  # A plain string summary
```
The return type drives the behavior:

- `-> str` uses plain text completion
- `-> MyPydanticModel` uses structured output with JSON schema enforcement
- No flags, no configuration beyond the type hint.
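To make that dispatch concrete, here is a minimal sketch of how a decorator can pick a mode from the return annotation. This is illustrative only, not llm-markdown's actual internals:

```python
from typing import List, get_type_hints

PRIMITIVES = (str, int, float, bool)

def dispatch_mode(fn):
    """Pick a completion mode from a function's return annotation:
    primitives -> plain text completion; anything else (Pydantic
    models, List[...], Dict[...]) -> structured output."""
    ret = get_type_hints(fn).get("return", str)
    return "plain" if ret in PRIMITIVES else "structured"

def summarize(text: str) -> str:
    """Summarize this text in 2 sentences: {text}"""

def list_steps(task: str) -> List[str]:
    """List the steps to complete this task: {task}"""
```

Here `dispatch_mode(summarize)` would pick plain completion and `dispatch_mode(list_steps)` structured output.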
## Structured output with Pydantic

Return a Pydantic model and the library handles JSON schema generation, structured output, and validation:

```python
from pydantic import BaseModel

class ReviewAnalysis(BaseModel):
    sentiment: str
    rating: float
    key_points: list[str]

@prompt(provider)
def analyze_review(text: str) -> ReviewAnalysis:
    """Analyze this movie review:
    - Overall sentiment (positive/negative/neutral)
    - Rating on a scale of 1.0 to 5.0
    - Key points from the review

    Review: {text}"""

result = analyze_review("A groundbreaking sci-fi film...")
print(result.sentiment)   # "positive"
print(result.rating)      # 4.5
print(result.key_points)  # ["groundbreaking visual effects", ...]
```
If the provider supports native structured output (like OpenAI's response_format), it's used automatically. If not, the library falls back to JSON prompting and parses the response -- no errors, no extra configuration.
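The JSON-prompting fallback can be sketched as follows. All names here are hypothetical stand-ins (the library's real fallback may differ in details such as retry and validation behavior):

```python
import json

def fallback_structured(complete, messages, schema):
    """Ask the model to emit JSON matching `schema` via a plain
    completion call, then parse the reply. Sketch of the fallback
    strategy described above."""
    instruction = (
        "Respond ONLY with a JSON object matching this schema:\n"
        + json.dumps(schema)
    )
    raw = complete(messages + [{"role": "system", "content": instruction}])
    # Tolerate models that wrap their JSON in ```json fences.
    raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(raw)

# A fake provider callable standing in for a real completion endpoint:
def fake_complete(messages):
    return '```json\n{"sentiment": "positive", "rating": 4.5}\n```'

result = fallback_structured(
    fake_complete,
    [{"role": "user", "content": "Analyze this review..."}],
    {"type": "object"},
)
# result == {"sentiment": "positive", "rating": 4.5}
```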
## Returning generic types

`List[...]` and `Dict[...]` also trigger structured output automatically:

```python
from typing import List

@prompt(provider)
def list_steps(task: str) -> List[str]:
    """List the steps to complete this task: {task}"""

steps = list_steps("bake a cake")
# ["Preheat oven", "Mix dry ingredients", ...]
```
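Schema generation for such generic types can be done with `typing` introspection. A simplified sketch (the library's Pydantic-based generation is richer than this):

```python
from typing import Dict, List, get_args, get_origin

JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_for(tp):
    """Build a minimal JSON schema for List[...] / Dict[str, ...] and
    primitive types by inspecting the annotation's origin and args."""
    origin = get_origin(tp)
    if origin is list:
        (item,) = get_args(tp)
        return {"type": "array", "items": schema_for(item)}
    if origin is dict:
        _, value = get_args(tp)
        return {"type": "object", "additionalProperties": schema_for(value)}
    return {"type": JSON_TYPES[tp]}

# schema_for(List[str]) == {"type": "array", "items": {"type": "string"}}
```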
## Multimodal (images)

Use the `Image` type for vision tasks:

```python
from llm_markdown import prompt, Image

@prompt(provider)
def describe(image: Image) -> str:
    """Describe this image in detail."""

result = describe(Image("https://example.com/photo.jpg"))
```

`Image` accepts URLs, base64 strings, or data URIs. Multiple images are supported via `List[Image]`.
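One way such normalization can work, assuming OpenAI-style message content parts (a sketch, not the library's code):

```python
def image_part(value: str, mime: str = "image/jpeg") -> dict:
    """Normalize an image reference (http(s) URL, data URI, or bare
    base64 string) into an OpenAI-style image content part."""
    if value.startswith(("http://", "https://", "data:")):
        url = value
    else:
        # Assume a bare base64 payload; wrap it as a data URI.
        url = f"data:{mime};base64,{value}"
    return {"type": "image_url", "image_url": {"url": url}}
```

URLs pass through untouched, while a bare base64 string is wrapped into a `data:` URI before being sent.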
Streaming
@prompt(provider, stream=True)
def tell_story(topic: str) -> str:
"""Tell a short story about {topic}."""
for chunk in tell_story("a robot learning to paint"):
print(chunk, end="", flush=True)
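The streaming contract can be exercised offline with a generator standing in for a provider's token stream (a hypothetical stand-in, not library code):

```python
def tell_story_stub(topic):
    """Stand-in for a stream=True decorated function: the caller gets
    an iterator of text chunks rather than one final string."""
    for chunk in (f"A story about {topic}: ", "it ", "begins..."):
        yield chunk

story = "".join(tell_story_stub("a robot learning to paint"))
```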
## Async support

All decorated functions can be async:

```python
@prompt(provider)
async def analyze(text: str) -> str:
    """Analyze: {text}"""

result = await analyze("some text")
```
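A decorator can support both sync and async functions by inspecting the wrapped callable. This sketch of that dispatch is hypothetical (the fake `LLM(...)` reply stands in for a real provider call):

```python
import asyncio
import functools
import inspect

def prompt_stub(fn):
    """Return an async wrapper for coroutine functions and a plain
    wrapper otherwise; the docstring is formatted with the bound
    arguments, mirroring docstring-as-prompt templating."""
    def render(*args, **kwargs):
        bound = inspect.signature(fn).bind(*args, **kwargs)
        return f"LLM({fn.__doc__.format(**bound.arguments)})"

    if inspect.iscoroutinefunction(fn):
        @functools.wraps(fn)
        async def awrapper(*args, **kwargs):
            return render(*args, **kwargs)
        return awrapper
    return functools.wraps(fn)(render)

@prompt_stub
async def analyze(text: str) -> str:
    """Analyze: {text}"""

result = asyncio.run(analyze("some text"))  # "LLM(Analyze: some text)"
```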
## Langfuse integration

Wrap any provider with `LangfuseWrapper` for automatic logging and cost tracking:

```python
from llm_markdown.providers import OpenAIProvider, LangfuseWrapper

provider = LangfuseWrapper(
    provider=OpenAIProvider(api_key="sk-..."),
    secret_key="sk-lf-...",
    public_key="pk-lf-...",
    host="https://cloud.langfuse.com",
)

@prompt(
    provider,
    langfuse_metadata={"category": "movie-reviews", "use_case": "sentiment-analysis"},
)
def analyze(text: str) -> str:
    """Analyze: {text}"""
```
## Provider interface

`OpenAIProvider` auto-detects the correct token parameter for all model families (GPT-4o, GPT-5, o1/o3/o4 series). To add a custom provider, subclass `LLMProvider`:

```python
from llm_markdown.providers import LLMProvider

class MyProvider(LLMProvider):
    def complete(self, messages, **kwargs):
        ...  # return response string

    async def complete_async(self, messages, **kwargs):
        ...  # return response string

    # Optional: override for native structured output support.
    # If not implemented, the decorator falls back to JSON prompting.
    def complete_structured(self, messages, schema):
        ...  # return parsed dict
```
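For offline unit tests, a deterministic stub with the same three-method shape can stand in for a real provider. This sketch is standalone so it runs without the library; a real custom provider would subclass `LLMProvider` as shown above:

```python
import asyncio

class EchoProvider:
    """Deterministic stub mirroring the provider interface: echoes the
    last user message instead of calling a model."""
    def complete(self, messages, **kwargs):
        return messages[-1]["content"]

    async def complete_async(self, messages, **kwargs):
        return self.complete(messages, **kwargs)

    def complete_structured(self, messages, schema):
        # Return a placeholder object keyed by the schema's top-level
        # properties, so structured-output callers get the right shape.
        return {name: None for name in schema.get("properties", {})}

p = EchoProvider()
reply = p.complete([{"role": "user", "content": "hello"}])  # "hello"
```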
## Testing

Run the unit tests (no API key needed):

```bash
pip install -e ".[test]"
pytest
```

Run integration tests against the real OpenAI API:

```bash
OPENAI_API_KEY=sk-... pytest -m integration
```
## Migration from v0.2.0

- The `reasoning_first` parameter has been removed. The decorator now automatically chooses between plain completion and structured output based on the return type annotation.
- Pydantic models, `List[...]`, and `Dict[...]` return types trigger structured output. Primitive types (`str`, `int`, `float`, `bool`) use plain completion.
- The `{"reasoning": "...", "answer": ...}` JSON envelope is gone. Structured output schemas now match the return type directly.
- Providers that don't implement `complete_structured()` now get a graceful fallback (JSON prompting via `complete()`) instead of `NotImplementedError`.
- `provider` is still a keyword-only argument in `@prompt(provider=...)`.