Turn Python functions into typed LLM calls using docstrings as prompts

llm-markdown

LLM calls as Python functions. Write a docstring, add a type hint, done.

from llm_markdown import prompt
from llm_markdown.providers import OpenAIProvider

provider = OpenAIProvider(api_key="sk-...", model="gpt-4o-mini")

@prompt(provider)
def summarize(text: str) -> str:
    """Summarize this text in 2 sentences: {text}"""

result = summarize("Long article text here...")
# "The article discusses... In conclusion, ..."

How it works

Three rules:

  1. The docstring is the prompt. Use {param} to interpolate function arguments.
  2. The return type controls the output format. -> str gives plain text. -> MyModel gives validated structured output.
  3. Image parameters are attached as vision inputs automatically -- they don't appear in the docstring.

That's it. No configuration flags, no prompt templates, no output parsers. The function signature is the configuration.
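Rule 1 can be pictured with plain `str.format` semantics. This is an illustration of the observable behavior, not the library's internal code:

```python
# The docstring acts as a template; {text} is filled in from the call's
# arguments, just like str.format() applied to the docstring.
template = "Summarize this text in 2 sentences: {text}"
prompt_text = template.format(text="Long article text here...")
print(prompt_text)
```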

Installation

pip install llm-markdown[openai]

Other extras: langfuse (observability), all (everything), test (pytest suite).
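For reference, the extras combine at install time like any pip extra (quoting the brackets keeps zsh happy):

```shell
pip install llm-markdown                # core library only
pip install "llm-markdown[openai]"      # with the OpenAI provider
pip install "llm-markdown[langfuse]"    # with Langfuse observability
pip install "llm-markdown[all]"         # everything
```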

Structured output

Return a Pydantic model and the response is validated automatically:

from pydantic import BaseModel

class ReviewAnalysis(BaseModel):
    sentiment: str
    rating: float
    key_points: list[str]

@prompt(provider)
def analyze_review(text: str) -> ReviewAnalysis:
    """Analyze this movie review:
    - Overall sentiment (positive/negative/neutral)
    - Rating on a scale of 1.0 to 5.0
    - Key points

    Review: {text}"""

result = analyze_review("A groundbreaking sci-fi film...")
result.sentiment    # "positive"
result.rating       # 4.5
result.key_points   # ["groundbreaking visual effects", ...]

The library generates a JSON schema from the Pydantic model and uses the provider's native structured output (e.g. OpenAI's response_format). If the provider doesn't support it, it falls back to JSON prompting automatically.
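The fallback path can be sketched with stdlib `json` (the exact prompt wording the library uses is not shown here; the reply string below is a made-up example):

```python
import json

# JSON-prompting fallback, in outline: the model is asked to reply with
# JSON matching the schema, and the reply is parsed, then validated
# against the Pydantic model.
raw_reply = '{"sentiment": "positive", "rating": 4.5, "key_points": ["visuals"]}'
parsed = json.loads(raw_reply)
print(parsed["sentiment"], parsed["rating"])
```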

List[...] and Dict[...] work the same way:

from typing import List

@prompt(provider)
def list_steps(task: str) -> List[str]:
    """List the steps to complete this task: {task}"""

list_steps("bake a cake")
# ["Preheat oven to 350F", "Mix dry ingredients", ...]

Images

Image parameters are detected by type and attached to the API call as vision inputs. The docstring is the text part of the prompt:

from llm_markdown import prompt, Image

@prompt(provider)
def answer_about_image(image: Image, question: str) -> str:
    """Answer this question about the image: {question}"""

answer_about_image(
    image=Image("https://example.com/chart.png"),
    question="What trend does this chart show?",
)

Image accepts URLs, base64 strings, or data URIs. Use List[Image] for multiple images.
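If you start from raw bytes, the stdlib can build a data URI to hand to `Image` (the bytes below are a placeholder, not a real image):

```python
import base64

# Encode raw image bytes as a data URI of the form Image() accepts.
png_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder: real PNG bytes go here
b64 = base64.b64encode(png_bytes).decode("ascii")
data_uri = f"data:image/png;base64,{b64}"
print(data_uri)
```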

Streaming

@prompt(provider, stream=True)
def tell_story(topic: str) -> str:
    """Tell a short story about {topic}."""

for chunk in tell_story("a robot learning to paint"):
    print(chunk, end="", flush=True)

Async

Async functions work the same way:

@prompt(provider)
async def analyze(text: str) -> str:
    """Analyze: {text}"""

result = await analyze("some text")
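Because decorated coroutines are ordinary awaitables, several calls can run concurrently with `asyncio.gather`. Plain coroutines stand in for the decorated functions in this sketch:

```python
import asyncio

# Stand-ins for async @prompt functions; real calls would hit the API.
async def analyze(text: str) -> str:
    await asyncio.sleep(0)  # simulate I/O
    return f"analysis of {text}"

async def main() -> list[str]:
    # gather preserves argument order in its result list.
    return await asyncio.gather(analyze("first"), analyze("second"))

results = asyncio.run(main())
print(results)
```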

Observability with Langfuse

Wrap any provider with LangfuseWrapper to log every call:

from llm_markdown.providers import OpenAIProvider, LangfuseWrapper

provider = LangfuseWrapper(
    provider=OpenAIProvider(api_key="sk-..."),
    secret_key="sk-lf-...",
    public_key="pk-lf-...",
    host="https://cloud.langfuse.com",
)

@prompt(
    provider,
    langfuse_metadata={"category": "reviews", "use_case": "sentiment"},
)
def analyze(text: str) -> str:
    """Analyze: {text}"""

Custom providers

Subclass LLMProvider to use any LLM backend:

from llm_markdown.providers import LLMProvider

class MyProvider(LLMProvider):
    def complete(self, messages, **kwargs):
        ...  # return response string

    async def complete_async(self, messages, **kwargs):
        ...  # return response string

    # Optional -- enables native structured output.
    # Without this, the decorator falls back to JSON prompting.
    def complete_structured(self, messages, schema):
        ...  # return parsed dict

OpenAIProvider handles all OpenAI model families (GPT-4o, GPT-5, o1/o3/o4) and auto-detects the correct token-limit parameter for each (max_tokens vs. max_completion_tokens).

Testing

pytest                                          # unit tests (no API key)
OPENAI_API_KEY=sk-... pytest -m integration     # real API tests
