A decorator that turns Python functions into LLM calls

Project description

smart_function / brainy_deco

A Python decorator that converts any function into an LLM call — powered by its docstring and type signature. Callable arguments are automatically exposed as tools the LLM can invoke during reasoning.

How it works

@smart_func("gemma4")
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

When you call translate("Hello!", "French"), the decorator:

  1. Inspects the function signature and docstring
  2. Detects any callable arguments and registers them as LLM-callable tools
  3. Assembles a structured prompt describing the task, arguments, and available tools
  4. Runs a conversation loop with the configured LLM backend:
    • If the LLM emits _llm_call = {"tool": "...", "args": [...]} → calls the tool, feeds result back
    • If the LLM emits _llm_return = <value> → extracts and returns the Python literal
  5. Returns the final result (parsed with ast.literal_eval, no arbitrary code execution)
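Steps 1-3 amount to a signature inspection pass that splits data arguments from callable (tool) arguments and renders them into a prompt. A rough sketch, assuming hypothetical helper names (`describe_task` is illustrative, not the library's internal API):

```python
import inspect

def describe_task(func, args, kwargs):
    """Sketch of steps 1-3: inspect the function, split tool arguments
    from data arguments, and assemble a task prompt."""
    sig = inspect.signature(func)
    bound = sig.bind(*args, **kwargs)
    bound.apply_defaults()

    # Callable arguments become LLM-invocable tools; the rest are plain data.
    tools = {name: v for name, v in bound.arguments.items() if callable(v)}
    data = {name: v for name, v in bound.arguments.items() if not callable(v)}

    ret = getattr(sig.return_annotation, "__name__", str(sig.return_annotation))
    prompt = (
        f"Task: {inspect.getdoc(func)}\n"
        f"Arguments: {data!r}\n"
        f"Available tools: {list(tools)}\n"
        f"Expected return type: {ret}"
    )
    return prompt, tools
```

The real prompt builder lives in brainy_deco/prompt.py and will differ in detail; the point is that the docstring and annotations alone carry the task description.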

Requirements

  • Python ≥ 3.10
  • httpx (pip install httpx)
  • A running Ollama instance (default) or an OpenAI-compatible API

Installation

pip install -e .          # editable install
# or
uv pip install -e .

Quick start

Basic usage — Ollama (default, local)

from brainy_deco import smart_func

@smart_func("gemma4")          # model name; all other parameters are optional
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

print(translate("Hello, world!", "French"))
# → 'Bonjour, le monde !'

Sentiment analysis (returns a float)

@smart_func("gemma4")
def sentiment_score(text: str) -> float:
    """
    Analyse the sentiment of the text.
    Return a float between -1.0 (very negative) and 1.0 (very positive).
    """

print(sentiment_score("This is amazing!"))   # → 0.95

Structured extraction (returns a dict)

@smart_func("gemma4")
def extract_person(bio: str) -> dict:
    """
    Extract personal information from a biography.
    Return a dict with keys: name, age, occupation. Use None for unknown fields.
    """

print(extract_person("Marie Curie was a 66-year-old physicist."))
# → {'name': 'Marie Curie', 'age': 66, 'occupation': 'physicist'}

Callable parameter as a tool (LLM-driven callback)

If any argument passed at call time is callable, it is automatically exposed to the LLM as a tool the LLM may invoke zero or more times.

from typing import Callable

@smart_func("gemma4")
def recommend_action(ticker: str, get_price: Callable) -> str:
    """
    Fetch the current price of ticker using get_price, then recommend
    one of 'buy', 'hold', or 'sell' with a one-sentence reason.
    Return a string like: "hold – trading near fair value."
    """

def live_price(ticker: str) -> float:
    """Look up the real-time stock price for ticker."""
    ...   # your actual implementation

print(recommend_action("AAPL", live_price))
# LLM calls live_price("AAPL"), receives 175.5, then returns the recommendation.
# → 'hold – the stock is trading around its recent average price.'

tools= (dict): tools declared at decoration time

Use tools= to inject callables that are always available to the LLM, independent of call-site arguments. Useful when the helpers are implementation details that callers shouldn't need to know about.

def _km_to_miles(km: float) -> float:
    """Convert kilometres to miles."""
    return km * 0.621371

def _miles_to_km(miles: float) -> float:
    """Convert miles to kilometres."""
    return miles / 0.621371

@smart_func(
    "gemma4",
    tools={
        "km_to_miles": _km_to_miles,
        "miles_to_km": _miles_to_km,
    },
)
def convert_units(value: float, from_unit: str, to_unit: str) -> float:
    """
    Convert value from from_unit to to_unit (supported: km, miles).
    Use the provided tools to perform the conversion.
    """

print(convert_units(100.0, "km", "miles"))   # → 62.1371

Tools from tools= and auto-detected callable arguments are merged into a single registry. On name collision, tools= entries take precedence.

OpenAI-compatible endpoint

@smart_func(
    "gpt-4o-mini",
    backend_type="openai",
    api_key="sk-...",
    base_url="https://api.openai.com/v1",   # optional, this is the default
)
def summarize(article: str, max_words: int = 100) -> str:
    """Summarize the article in at most max_words words."""

Custom backend instance

from brainy_deco import smart_func
from brainy_deco.backends import OpenAIBackend

backend = OpenAIBackend(
    model="gemma4",
    base_url="http://localhost:1234/v1",   # LM Studio, etc.
    temperature=0.2,
)

@smart_func(backend=backend)
def classify(text: str, categories: list) -> str:
    """Classify the text into one of the given categories."""

Decorator parameters

Parameter          Type                  Default                    Description
model              str                   "gemma4"                   LLM model name
backend            LLMBackend            None                       Pre-built backend (overrides all other connection params)
backend_type       str                   "ollama"                   "ollama" or "openai"
base_url           str                   "http://localhost:11434"   API base URL
api_key            str                   None                       API key (OpenAI-compatible)
tools              dict[str, Callable]   None                       Tools always available to the LLM, by name
debug              bool                  False                      Log prompt and raw LLM responses
**backend_kwargs   -                     -                          Forwarded to backend (e.g. temperature, timeout)

LLM response protocol

The decorator uses a two-phase protocol:

Phase 1 — Tool call (zero or more times)

_llm_call = {"tool": "<tool_name>", "args": [<arg1>, <arg2>, ...]}

The callable is invoked with the given args and the result is fed back into the conversation as a new message. The loop continues until phase 2.

Phase 2 — Final return (exactly once)

_llm_return = <Python literal>

Extracted with ast.literal_eval (safe — no arbitrary code execution).
Supported types: str, int, float, bool, None, list, dict, tuple.

The loop is capped at 20 tool-call rounds to prevent runaway conversations.
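The two phases and the round cap can be sketched with a regex-based parser and a small driver loop. The real parser lives in brainy_deco/parser.py and may work differently; `parse_response` and `run_loop` here are illustrative names:

```python
import ast
import re

_CALL = re.compile(r"_llm_call\s*=\s*(\{.*\})", re.S)
_RETURN = re.compile(r"_llm_return\s*=\s*(.+)", re.S)

def parse_response(text: str):
    """Classify one LLM reply per the two-phase protocol.
    Returns ("call", {"tool": ..., "args": [...]}) or ("return", value)."""
    if m := _CALL.search(text):
        return "call", ast.literal_eval(m.group(1))
    if m := _RETURN.search(text):
        # literal_eval accepts only Python literals: no arbitrary code runs.
        return "return", ast.literal_eval(m.group(1).strip())
    raise ValueError("response matched neither protocol phase")

def run_loop(backend_call, tools, messages, max_rounds=20):
    """Drive the conversation: execute tool calls, feed results back,
    stop at the final return or after max_rounds tool-call rounds."""
    for _ in range(max_rounds):
        kind, payload = parse_response(backend_call(messages))
        if kind == "return":
            return payload
        result = tools[payload["tool"]](*payload["args"])
        messages = messages + [f"Tool {payload['tool']} returned: {result!r}"]
    raise RuntimeError(f"exceeded {max_rounds} tool-call rounds")
```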

Running the demo

# Make sure Ollama is running and the model is pulled
ollama pull gemma4
uv run examples/demo.py

Running tests

uv run pytest tests/ -v

Project layout

smart_function/
├── brainy_deco/
│   ├── __init__.py      # public API
│   ├── decorator.py     # @smart_func decorator + tool-calling loop
│   ├── prompt.py        # prompt builder (system + user messages)
│   ├── parser.py        # response parser (FinalResult / ToolCall)
│   └── backends.py      # OllamaBackend, OpenAIBackend
├── examples/
│   └── demo.py          # runnable examples (all 6 scenarios)
├── tests/
│   └── test_smart_function.py
└── pyproject.toml

Project details


Download files

Download the file for your platform.

Source Distribution

brainy_deco-0.1.2.tar.gz (38.7 kB view details)

Uploaded Source

Built Distribution

brainy_deco-0.1.2-py3-none-any.whl (15.5 kB view details)

Uploaded Python 3

File details

Details for the file brainy_deco-0.1.2.tar.gz.

File metadata

  • Download URL: brainy_deco-0.1.2.tar.gz
  • Size: 38.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.4

File hashes

Hashes for brainy_deco-0.1.2.tar.gz
Algorithm Hash digest
SHA256 a52fbef65840bb7927a0f90b5a4590486e1b77184425a75a99087d1dd311c8c7
MD5 8bbccf2139092b7b5d23b329bc89bc79
BLAKE2b-256 593491146b4cbafadead8a2cd724e54a19bbe651d5afd99c74bfc99d1f392230

File details

Details for the file brainy_deco-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: brainy_deco-0.1.2-py3-none-any.whl
  • Size: 15.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.4

File hashes

Hashes for brainy_deco-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 feaac16e2f5ae494cb81321e17b414ba552a6f413c3a080f0ab1156365f144ba
MD5 e62c98b0c77794678a907f2fedfdfbd5
BLAKE2b-256 b15a27c6b37e57e74e61baec23de8933bf5d326f625951b2967b0f2ea55ec839
