smart_function / brainy_deco
A Python decorator that converts any function into an LLM call — powered by its docstring and type signature. Callable arguments are automatically exposed as tools the LLM can invoke during reasoning.
How it works
@smart_func("gemma4")
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""
When you call translate("Hello!", "French"), the decorator:
- Inspects the function signature and docstring
- Detects any callable arguments and registers them as LLM-callable tools
- Assembles a structured prompt describing the task, arguments, and available tools
- Runs a conversation loop with the configured LLM backend:
  - If the LLM emits _llm_call = {"tool": "...", "args": [...]} → calls the tool and feeds the result back
  - If the LLM emits _llm_return = <value> → extracts and returns the Python literal
- Returns the final result (parsed with ast.literal_eval, no arbitrary code execution)
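The inspection step can be sketched with the standard library. The helper below is hypothetical (not the package's actual API); it only illustrates how a signature and docstring could be turned into a task description, with callable arguments split out as tools:

```python
import inspect

def describe_task(func, args, kwargs):
    """Hypothetical helper: build a task spec from a function's
    signature and docstring. Not the library's real implementation."""
    sig = inspect.signature(func)
    bound = sig.bind(*args, **kwargs)
    bound.apply_defaults()
    # Callable arguments become LLM tools; everything else is a task input.
    tools = {name: v for name, v in bound.arguments.items() if callable(v)}
    inputs = {name: v for name, v in bound.arguments.items() if not callable(v)}
    return {
        "task": inspect.getdoc(func),       # cleaned docstring
        "returns": sig.return_annotation,   # expected return type
        "inputs": inputs,
        "tools": list(tools),
    }

def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

spec = describe_task(translate, ("Hello!", "French"), {})
# spec["task"]   → 'Translate the given text into the target language.'
# spec["inputs"] → {'text': 'Hello!', 'target_language': 'French'}
```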
Requirements
- Python ≥ 3.10
- httpx (pip install httpx)
- A running Ollama instance (default) or an OpenAI-compatible API
Installation
pip install -e . # editable install
# or
uv pip install -e .
Quick start
Basic usage — Ollama (default, local)
from brainy_deco import smart_func

@smart_func("gemma4")  # model name is the only required argument
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

print(translate("Hello, world!", "French"))
# → 'Bonjour, le monde !'
Sentiment analysis (returns a float)
@smart_func("gemma4")
def sentiment_score(text: str) -> float:
    """
    Analyse the sentiment of the text.
    Return a float between -1.0 (very negative) and 1.0 (very positive).
    """

print(sentiment_score("This is amazing!"))  # → 0.95
Structured extraction (returns a dict)
@smart_func("gemma4")
def extract_person(bio: str) -> dict:
    """
    Extract personal information from a biography.
    Return a dict with keys: name, age, occupation. Use None for unknown fields.
    """

print(extract_person("Marie Curie was a 66-year-old physicist."))
# → {'name': 'Marie Curie', 'age': 66, 'occupation': 'physicist'}
Callable parameter as a tool (LLM-driven callback)
If any argument passed at call time is callable, it is automatically exposed to the LLM as a tool the LLM may invoke zero or more times.
from typing import Callable

@smart_func("gemma4")
def recommend_action(ticker: str, get_price: Callable) -> str:
    """
    Fetch the current price of ticker using get_price, then recommend
    one of 'buy', 'hold', or 'sell' with a one-sentence reason.
    Return a string like: "hold – trading near fair value."
    """

def live_price(ticker: str) -> float:
    """Look up the real-time stock price for ticker."""
    ...  # your actual implementation

print(recommend_action("AAPL", live_price))
# LLM calls live_price("AAPL"), receives 175.5, then returns the recommendation.
# → 'hold – the stock is trading around its recent average price.'
tools= dict — tools declared at decoration time
Use tools= to inject callables that are always available to the LLM,
independent of call-site arguments. Useful when the helpers are
implementation details that callers shouldn't need to know about.
def _km_to_miles(km: float) -> float:
    """Convert kilometres to miles."""
    return km * 0.621371

def _miles_to_km(miles: float) -> float:
    """Convert miles to kilometres."""
    return miles / 0.621371

@smart_func(
    "gemma4",
    tools={
        "km_to_miles": _km_to_miles,
        "miles_to_km": _miles_to_km,
    },
)
def convert_units(value: float, from_unit: str, to_unit: str) -> float:
    """
    Convert value from from_unit to to_unit (supported: km, miles).
    Use the provided tools to perform the conversion.
    """

print(convert_units(100.0, "km", "miles"))  # → 62.1371
Tools from tools= and auto-detected callable arguments are merged into a
single registry. On name collision, tools= entries take precedence.
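The precedence rule amounts to a dict merge in which decoration-time entries are applied last. A minimal sketch (the helper name is illustrative, not the library's code):

```python
def merge_tools(auto_detected: dict, declared: dict) -> dict:
    """Later entries win: tools= (declared) overrides auto-detected callables."""
    return {**auto_detected, **declared}

# A call-site callable and a tools= entry collide on the name "get_price".
auto = {"get_price": lambda ticker: 1.0}
declared = {"get_price": lambda ticker: 2.0,
            "km_to_miles": lambda km: km * 0.621371}

registry = merge_tools(auto, declared)
print(registry["get_price"]("AAPL"))  # → 2.0  (tools= wins)
```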
OpenAI-compatible endpoint
@smart_func(
    "gpt-4o-mini",
    backend_type="openai",
    api_key="sk-...",
    base_url="https://api.openai.com/v1",  # optional, this is the default
)
def summarize(article: str, max_words: int = 100) -> str:
    """Summarize the article in at most max_words words."""
Custom backend instance
from brainy_deco import smart_func
from brainy_deco.backends import OpenAIBackend

backend = OpenAIBackend(
    model="gemma4",
    base_url="http://localhost:1234/v1",  # LM Studio, etc.
    temperature=0.2,
)

@smart_func(backend=backend)
def classify(text: str, categories: list) -> str:
    """Classify the text into one of the given categories."""
Decorator parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | "gemma4" | LLM model name |
| backend | LLMBackend | None | Pre-built backend (overrides all other connection params) |
| backend_type | str | "ollama" | "ollama" or "openai" |
| base_url | str | "http://localhost:11434" | API base URL |
| api_key | str | None | API key (OpenAI-compatible) |
| tools | dict[str, Callable] | None | Tools always available to the LLM, by name |
| debug | bool | False | Log prompt and raw LLM responses |
| **backend_kwargs | | | Forwarded to backend (e.g. temperature, timeout) |
LLM response protocol
The decorator uses a two-phase protocol:
Phase 1 — Tool call (zero or more times)
_llm_call = {"tool": "<tool_name>", "args": [<arg1>, <arg2>, ...]}
The callable is invoked with the given args and the result is fed back into the conversation as a new message. The loop continues until phase 2.
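One dispatch round can be sketched like this; the helper is hypothetical (the real loop lives in decorator.py), but it shows the shape of the round trip:

```python
def run_tool_call(registry: dict, call: dict) -> str:
    """Hypothetical sketch: execute one _llm_call payload against the
    tool registry and return the message fed back to the LLM."""
    tool = registry[call["tool"]]
    result = tool(*call["args"])
    return f"Tool {call['tool']} returned: {result!r}"

registry = {"live_price": lambda ticker: 175.5}
msg = run_tool_call(registry, {"tool": "live_price", "args": ["AAPL"]})
print(msg)  # → Tool live_price returned: 175.5
```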
Phase 2 — Final return (exactly once)
_llm_return = <Python literal>
Extracted with ast.literal_eval (safe — no arbitrary code execution).
Supported types: str, int, float, bool, None, list, dict, tuple.
The loop is capped at 20 tool-call rounds to prevent runaway conversations.
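A parser for this protocol can be sketched as follows. The function and pattern names are illustrative assumptions, not the package's actual parser.py; the key point is that both payloads go through ast.literal_eval, never eval():

```python
import ast
import re

# Capture everything after the protocol markers (assumed line shapes).
_CALL = re.compile(r"_llm_call\s*=\s*(.+)", re.DOTALL)
_RETURN = re.compile(r"_llm_return\s*=\s*(.+)", re.DOTALL)

def parse_response(text: str):
    """Return ('call', payload) or ('return', value); literals only."""
    if m := _RETURN.search(text):
        return "return", ast.literal_eval(m.group(1).strip())
    if m := _CALL.search(text):
        return "call", ast.literal_eval(m.group(1).strip())
    raise ValueError("response matched neither protocol phase")

print(parse_response('_llm_call = {"tool": "live_price", "args": ["AAPL"]}'))
# → ('call', {'tool': 'live_price', 'args': ['AAPL']})
print(parse_response("_llm_return = 0.95"))
# → ('return', 0.95)
```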
Running the demo
# Make sure Ollama is running and the model is pulled
ollama pull gemma4
uv run examples/demo.py
Running tests
uv run pytest tests/ -v
Project layout
smart_function/
├── brainy_deco/
│ ├── __init__.py # public API
│ ├── decorator.py # @smart_func decorator + tool-calling loop
│ ├── prompt.py # prompt builder (system + user messages)
│ ├── parser.py # response parser (FinalResult / ToolCall)
│ └── backends.py # OllamaBackend, OpenAIBackend
├── examples/
│ └── demo.py # runnable examples (all 6 scenarios)
├── tests/
│ └── test_smart_function.py
└── pyproject.toml