A decorator that turns Python functions into LLM calls

smart_function / brainy_deco

A Python decorator that converts any function into an LLM call — powered by its docstring and type signature. Callable arguments are automatically exposed as tools the LLM can invoke during reasoning.

How it works

@smart_func("gemma4")
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

When you call translate("Hello!", "French"), the decorator:

  1. Inspects the function signature and docstring
  2. Detects any callable arguments and registers them as LLM-callable tools
  3. Assembles a structured prompt describing the task, arguments, and available tools
  4. Runs a conversation loop with the configured LLM backend:
    • If the LLM emits _llm_call = {"tool": "...", "args": [...]} → calls the tool, feeds result back
    • If the LLM emits _llm_return = <value> → extracts and returns the Python literal
  5. Returns the final result (parsed with ast.literal_eval, no arbitrary code execution)
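Steps 1 and 3 can be pictured with the standard inspect module. This is an illustrative sketch of the idea, not the library's actual prompt builder, and the prompt format shown is an assumption:

```python
import inspect

def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

# Step 1: inspect the function signature and docstring
sig = inspect.signature(translate)
doc = inspect.getdoc(translate)

# Step 3: assemble a structured prompt (illustrative format only)
prompt = (
    f"Task: {doc}\n"
    f"Signature: {translate.__name__}{sig}\n"
    f"Arguments: text='Hello!', target_language='French'"
)
print(prompt)
```

Everything the LLM needs to perform the task is recoverable from the function object itself, which is why the decorated function body can be left empty.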

Requirements

  • Python ≥ 3.10
  • httpx (pip install httpx)
  • A running Ollama instance (default) or an OpenAI-compatible API

Installation

pip install -e .          # editable install
# or
uv pip install -e .

Quick start

Basic usage — Ollama (default, local)

from brainy_deco import smart_func

@smart_func("gemma4")          # model name is the only required argument
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

print(translate("Hello, world!", "French"))
# → 'Bonjour, le monde !'

Sentiment analysis (returns a float)

@smart_func("gemma4")
def sentiment_score(text: str) -> float:
    """
    Analyse the sentiment of the text.
    Return a float between -1.0 (very negative) and 1.0 (very positive).
    """

print(sentiment_score("This is amazing!"))   # → 0.95

Structured extraction (returns a dict)

@smart_func("gemma4")
def extract_person(bio: str) -> dict:
    """
    Extract personal information from a biography.
    Return a dict with keys: name, age, occupation. Use None for unknown fields.
    """

print(extract_person("Marie Curie was a 66-year-old physicist."))
# → {'name': 'Marie Curie', 'age': 66, 'occupation': 'physicist'}

Callable parameter as a tool (LLM-driven callback)

If any argument passed at call time is callable, it is automatically exposed as a tool that the LLM may invoke zero or more times.

from typing import Callable

@smart_func("gemma4")
def recommend_action(ticker: str, get_price: Callable) -> str:
    """
    Fetch the current price of ticker using get_price, then recommend
    one of 'buy', 'hold', or 'sell' with a one-sentence reason.
    Return a string like: "hold – trading near fair value."
    """

def live_price(ticker: str) -> float:
    """Look up the real-time stock price for ticker."""
    ...   # your actual implementation

print(recommend_action("AAPL", live_price))
# LLM calls live_price("AAPL"), receives 175.5, then returns the recommendation.
# → 'hold – the stock is trading around its recent average price.'

tools= — tools declared at decoration time

Use tools= to inject callables that are always available to the LLM, independent of call-site arguments. Useful when the helpers are implementation details that callers shouldn't need to know about.

def _km_to_miles(km: float) -> float:
    """Convert kilometres to miles."""
    return km * 0.621371

def _miles_to_km(miles: float) -> float:
    """Convert miles to kilometres."""
    return miles / 0.621371

@smart_func(
    "gemma4",
    tools={
        "km_to_miles": _km_to_miles,
        "miles_to_km": _miles_to_km,
    },
)
def convert_units(value: float, from_unit: str, to_unit: str) -> float:
    """
    Convert value from from_unit to to_unit (supported: km, miles).
    Use the provided tools to perform the conversion.
    """

print(convert_units(100.0, "km", "miles"))   # → 62.1371

Tools from tools= and auto-detected callable arguments are merged into a single registry. On name collision, tools= entries take precedence.
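The merge rule amounts to a plain dict update with the decoration-time tools applied last. A minimal sketch (the helper name and internal structure are assumptions, not the library's actual code):

```python
def merge_tools(auto_detected: dict, declared: dict) -> dict:
    """Combine auto-detected callable arguments with decoration-time
    tools; entries from tools= win on name collision (sketch)."""
    registry = dict(auto_detected)   # callable arguments found at call time
    registry.update(declared)        # tools= entries override duplicates
    return registry

auto = {"get_price": lambda ticker: 0.0}
declared = {
    "get_price": lambda ticker: 175.5,           # collides with the auto tool
    "km_to_miles": lambda km: km * 0.621371,
}
registry = merge_tools(auto, declared)
print(registry["get_price"]("AAPL"))   # → 175.5 (the tools= version wins)
```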

OpenAI-compatible endpoint

@smart_func(
    "gpt-4o-mini",
    backend_type="openai",
    api_key="sk-...",
    base_url="https://api.openai.com/v1",   # optional, this is the default
)
def summarize(article: str, max_words: int = 100) -> str:
    """Summarize the article in at most max_words words."""

Multimodal inputs (Image / Audio / Video)

If you pass a Media object (or use the Image, Audio, Video helpers), the decorator automatically encodes the binary data and sends it to the LLM alongside the textual prompt.

from brainy_deco import smart_func, Image

@smart_func("gemma4:e2b")  # Use a multimodal/vision-capable model
def describe_image(img: Image) -> str:
    """Describe the content of the image in detail."""

print(describe_image(Image(path="photo.jpg")))

Custom backend instance

from brainy_deco import smart_func
from brainy_deco.backends import OpenAIBackend

backend = OpenAIBackend(
    model="gemma4",
    base_url="http://localhost:1234/v1",   # LM Studio, etc.
    temperature=0.2,
)

@smart_func(backend=backend)
def classify(text: str, categories: list) -> str:
    """Classify the text into one of the given categories."""

Context Memory (Multi-turn conversations & Cross-function sharing)

To enable the LLM to remember past interactions, pass context="name" as a keyword argument when calling the decorated function.

History is stored in memory and is global — any decorated function using the same context name can access the shared conversation history.

from brainy_deco import smart_func

@smart_func("gemma4")
def chat(message: str) -> str:
    """Respond to the user's message conversationally."""

@smart_func("gemma4")
def summarize() -> str:
    """Summarize the conversation so far."""

# Multi-turn continuity
chat("My name is Alice and I like cats.", context="session1")
print(chat("What is my name?", context="session1"))  # The LLM remembers Alice

# Cross-function sharing
print(summarize(context="session1"))  # Reads the same history

Managing Context History:

You can persist, load, or reset context using methods automatically attached to the decorated function:

# Save to JSON
chat.save_context("session1", "history.json")

# Clear the context memory
chat.reset_context("session1")

# Load from JSON
chat.load_context("session1", "history.json")

The number of stored history rounds is capped automatically. You can raise or lower the cap by passing max_history=50 (or any other limit) when decorating the function.
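A bounded history like this is commonly built on collections.deque with a maxlen. The sketch below is a plausible implementation, not the library's actual one, and the assumption that max_history counts individual messages is mine:

```python
from collections import deque

class ContextStore:
    """Keep at most max_history entries per context name (sketch)."""

    def __init__(self, max_history: int = 50):
        self._store: dict[str, deque] = {}
        self.max_history = max_history

    def append(self, name: str, role: str, content: str) -> None:
        # deque(maxlen=...) silently drops the oldest entry when full
        history = self._store.setdefault(name, deque(maxlen=self.max_history))
        history.append({"role": role, "content": content})

    def get(self, name: str) -> list:
        return list(self._store.get(name, []))

store = ContextStore(max_history=2)
for i in range(4):
    store.append("session1", "user", f"message {i}")
print(store.get("session1"))   # only the last 2 entries survive
```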

Decorator parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | "gemma4" | LLM model name |
| backend | LLMBackend | None | Pre-built backend (overrides all other connection params) |
| backend_type | str | "ollama" | "ollama" or "openai" |
| base_url | str | "http://localhost:11434" | API base URL |
| api_key | str | None | API key (OpenAI-compatible) |
| tools | dict[str, Callable] | None | Tools always available to the LLM, by name |
| debug | bool | False | Log prompt and raw LLM responses |
| **backend_kwargs | | | Forwarded to backend (e.g. temperature, timeout) |

LLM response protocol

The decorator uses a two-phase protocol:

Phase 1 — Tool call (zero or more times)

_llm_call = {"tool": "<tool_name>", "args": [<arg1>, <arg2>, ...]}

The callable is invoked with the given args and the result is fed back into the conversation as a new message. The loop continues until phase 2.

Phase 2 — Final return (exactly once)

_llm_return = <Python literal>

Extracted with ast.literal_eval (safe — no arbitrary code execution).
Supported types: str, int, float, bool, None, list, dict, tuple.

The loop is capped at 20 tool-call rounds to prevent runaway conversations.

Running the demo

# Make sure Ollama is running and the model is pulled
ollama pull gemma4
uv run examples/demo.py
uv run examples/context.py     # context memory & multi-turn example

Running tests

uv run pytest tests/ -v

Project layout

smart_function/
├── brainy_deco/
│   ├── __init__.py      # public API
│   ├── decorator.py     # @smart_func decorator + tool-calling loop
│   ├── prompt.py        # prompt builder (system + user messages)
│   ├── parser.py        # response parser (FinalResult / ToolCall)
│   ├── context.py       # context memory store for multi-turn runs
│   ├── media.py         # multimodal types (Media, Image, Audio, Video)
│   └── backends.py      # OllamaBackend, OpenAIBackend
├── examples/
│   ├── demo.py          # runnable examples
│   ├── context.py       # context memory examples
│   └── moe.py           # multimodal example
├── tests/
│   └── test_smart_function.py
└── pyproject.toml
