# smart_function

A Python decorator that converts any function into an LLM call, powered by its docstring and type signature.
## How it works

```python
@smart_func("gemma4")
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""
```

When you call `translate("Hello!", "French")`, the decorator:

- Inspects the function signature and docstring
- Assembles a structured prompt describing the task and the actual arguments
- Calls the configured LLM backend
- Parses the reply, expecting an `_llm_return = <value>` assignment
- Returns the evaluated Python literal
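The prompt-assembly step above can be sketched roughly as follows. This is a minimal illustration built on `inspect`; the helper name `build_prompt` and the exact prompt wording are assumptions, not the library's actual implementation in `prompt.py`:

```python
import inspect

def build_prompt(func, args, kwargs):
    """Illustrative sketch: describe the task and the actual arguments for the LLM."""
    sig = inspect.signature(func)
    bound = sig.bind(*args, **kwargs)  # map positional/keyword args to parameter names
    bound.apply_defaults()
    lines = [
        f"Task: {inspect.getdoc(func)}",
        f"Function: {func.__name__}{sig}",
        "Arguments:",
    ]
    for name, value in bound.arguments.items():
        lines.append(f"  {name} = {value!r}")
    lines.append("Reply with a single line: _llm_return = <Python literal>")
    return "\n".join(lines)

def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

print(build_prompt(translate, ("Hello!",), {"target_language": "French"}))
```

The `sig.bind(...)` call is what lets the prompt name each argument, so the model sees `target_language = 'French'` rather than an anonymous positional value.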
## Requirements

- Python ≥ 3.10
- `httpx` (`pip install httpx`)
- A running Ollama instance (default) or an OpenAI-compatible API
Installation
pip install -e . # editable install
# or
uv pip install -e .
## Quick start

### Ollama (default, local)

```python
from smart_function import smart_func

@smart_func("gemma4")  # model name is the only required argument
def translate(text: str, target_language: str) -> str:
    """Translate the given text into the target language."""

print(translate("Hello, world!", "French"))
# → 'Bonjour, le monde !'
```
### Sentiment analysis (returns a float)

```python
@smart_func("gemma4")
def sentiment_score(text: str) -> float:
    """
    Analyse the sentiment of the text.
    Return a float between -1.0 (very negative) and 1.0 (very positive).
    """

print(sentiment_score("This is amazing!"))  # → 0.95
```
### Structured extraction (returns a dict)

```python
@smart_func("gemma4")
def extract_person(bio: str) -> dict:
    """
    Extract personal information from a biography.
    Return a dict with keys: name, age, occupation. Use None for unknown fields.
    """

print(extract_person("Marie Curie was a 66-year-old physicist."))
# → {'name': 'Marie Curie', 'age': 66, 'occupation': 'physicist'}
```
### OpenAI-compatible endpoint

```python
@smart_func(
    "gpt-4o-mini",
    backend_type="openai",
    api_key="sk-...",
    base_url="https://api.openai.com/v1",  # optional, this is the default
)
def summarize(article: str, max_words: int = 100) -> str:
    """Summarize the article in at most max_words words."""
```
### Custom backend instance

```python
from smart_function import smart_func
from smart_function.backends import OpenAIBackend

backend = OpenAIBackend(
    model="gemma4",
    base_url="http://localhost:1234/v1",  # LM Studio, etc.
    temperature=0.2,
)

@smart_func(backend=backend)
def classify(text: str, categories: list) -> str:
    """Classify the text into one of the given categories."""
```
## Decorator parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"llama3"` | LLM model name |
| `backend` | `LLMBackend` | `None` | Pre-built backend (overrides all other connection params) |
| `backend_type` | `str` | `"ollama"` | `"ollama"` or `"openai"` |
| `base_url` | `str` | `"http://localhost:11434"` | API base URL |
| `api_key` | `str` | `None` | API key (OpenAI-compatible) |
| `debug` | `bool` | `False` | Log prompt and raw response |
| `**backend_kwargs` | | | Forwarded to the backend (e.g. `temperature`, `timeout`) |
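The precedence rules in the table can be sketched as plain Python. This is an illustrative model only (the function name `resolve_backend` and the dict representation are assumptions, not the library's internals): a pre-built `backend` wins over everything else, otherwise `backend_type` picks the backend and `**backend_kwargs` is forwarded unchanged.

```python
def resolve_backend(model="llama3", backend=None, backend_type="ollama",
                    base_url=None, api_key=None, **backend_kwargs):
    """Sketch of parameter precedence; returns a config dict for illustration."""
    if backend is not None:
        return backend  # pre-built backend overrides all other connection params
    defaults = {
        "ollama": "http://localhost:11434",
        "openai": "https://api.openai.com/v1",
    }
    if backend_type not in defaults:
        raise ValueError(f"unknown backend_type: {backend_type!r}")
    return {
        "type": backend_type,
        "model": model,
        "base_url": base_url or defaults[backend_type],
        "api_key": api_key,
        **backend_kwargs,  # e.g. temperature, timeout, forwarded as-is
    }

print(resolve_backend("gemma4", temperature=0.2))
```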
## Return value protocol

The LLM is instructed to produce exactly one line:

```
_llm_return = <Python literal>
```

The value is extracted with `ast.literal_eval` (safe: literals only, no arbitrary code execution).
Supported types: `str`, `int`, `float`, `bool`, `None`, `list`, `dict`, `tuple`.
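A minimal sketch of parsing such a reply safely; the real parser lives in `parser.py` and likely handles more edge cases, so the helper name and error handling here are assumptions:

```python
import ast

def parse_reply(reply: str):
    """Find the `_llm_return = <literal>` line and evaluate the literal safely."""
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("_llm_return"):
            _, _, value = line.partition("=")
            return ast.literal_eval(value.strip())  # literals only, never exec/eval
    raise ValueError("no _llm_return line found in reply")

print(parse_reply("Sure! Here you go:\n_llm_return = {'name': 'Marie Curie', 'age': 66}"))
# → {'name': 'Marie Curie', 'age': 66}
```

Because `ast.literal_eval` rejects anything that is not a literal expression, a malicious reply like `_llm_return = __import__('os').system('rm -rf /')` raises `ValueError` instead of executing.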
## Running the demo

```shell
# make sure Ollama is running and gemma4 is pulled
ollama pull gemma4
uv run examples/demo.py
```
## Running tests

```shell
uv run pytest tests/ -v
```
## Project layout

```
smart_function/
├── smart_function/
│   ├── __init__.py             # public API
│   ├── decorator.py            # @smart_func decorator
│   ├── prompt.py               # prompt builder
│   ├── parser.py               # response parser
│   └── backends.py             # OllamaBackend, OpenAIBackend
├── examples/
│   └── demo.py
├── tests/
│   └── test_smart_function.py
└── pyproject.toml
```