# Nebo

A modern logging SDK for multi-modal data. Decorate your functions with `@nb.fn()` and call `nb.log()` to write logs; Nebo automatically infers a DAG from your call graph.

## Why Nebo?
Nebo offers function-level logging: it captures metrics, images, audio, and text at the granularity of individual functions, so you can monitor the inputs, outputs, and execution flow of your code. Global logs (logs not bound to a particular function) are also supported. This enables observability for applications such as:
- Agentic workflows with multimodal data
- DAG-structured data-processing pipelines
- ML training + inference
## Features
- Captured log types: text, metrics, images, audio, progress
- Automatically infers a DAG from your call graph
- CLI, MCP server, and agent skill for querying runs from AI agents
- MCP write tools so external agents can push metrics, images, audio, and text into a run
- Fully self-contained log file per run
- Rich terminal UI
- Mobile-first web UI
- Notebook embedding via `nb.show()` (Jupyter-renderable iframe of any slice of a run)
- One-command deploy to a Hugging Face Space (`nebo deploy`) with public/private read+write modes
Nebo is in active development and features will roll out according to its core principles.
## Installation

```bash
pip install nebo
```

The CLI entry point is `nebo`:

```bash
nebo --help
```
## Quick Start

```python
import nebo as nb

@nb.fn()
def load_data(path: str = "data.csv") -> list[dict]:
    """Load records from a file."""
    records = [{"id": i, "value": i * 0.5} for i in range(100)]
    nb.log(f"Loaded {len(records)} records from {path}")
    return records

@nb.fn()
def transform(records: list[dict]) -> list[dict]:
    """Normalize values."""
    out = []
    for r in nb.track(records, name="transforming"):
        out.append({**r, "value": r["value"] / 50.0})
    nb.log(f"Transformed {len(out)} records")
    nb.log_line("record_count", float(len(out)))
    return out

def run():
    """Main pipeline entry point."""
    records = load_data()
    result = transform(records)
    return result

if __name__ == "__main__":
    run()
```
Running this produces a Rich terminal display showing the DAG, node execution counts, logs, and progress bars. The DAG edges (`run -> load_data`, `load_data -> transform`) are inferred automatically from data flow -- no manual wiring required.
## Core Concepts

### `@nb.fn()` -- Register a function as a DAG node

Every function decorated with `@nb.fn()` becomes a node in the pipeline DAG. Edges are inferred from data flow: when a node's return value is passed as an argument to another node, an edge is created from the producer to the consumer.
```python
@nb.fn()
def load_data():
    return [1, 2, 3]

@nb.fn()
def transform(data):
    return [x * 2 for x in data]

def run():
    records = load_data()        # edge: run -> load_data (no data dependency)
    result = transform(records)  # edge: load_data -> transform (data flows from load_data)
    return result
```
When a child node receives no node-produced arguments, the edge falls back to the calling parent node.
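To make the inference rule concrete, here is a minimal, hypothetical sketch (not Nebo's actual implementation): a decorator remembers which node produced each return value by object identity, and draws a producer-to-consumer edge whenever such a value reappears as an argument.

```python
# Hypothetical sketch of data-flow edge inference. Caveat: keying on
# id() only works while the producer's return value is still alive.
import functools

_producers: dict[int, str] = {}        # id(value) -> name of producing node
edges: set[tuple[str, str]] = set()    # (producer, consumer) pairs

def fn(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Any argument that another node produced creates an edge.
        for arg in (*args, *kwargs.values()):
            if id(arg) in _producers:
                edges.add((_producers[id(arg)], func.__name__))
        result = func(*args, **kwargs)
        _producers[id(result)] = func.__name__  # remember the producer
        return result
    return wrapper

@fn
def load_data():
    return [1, 2, 3]

@fn
def transform(data):
    return [x * 2 for x in data]

transform(load_data())
print(edges)  # {('load_data', 'transform')}
```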
You can use it in several ways:

```python
@nb.fn                          # bare decorator
@nb.fn()                        # with parentheses
@nb.fn(depends_on=[other_fn])   # with explicit dependencies
@nb.fn(ui={"collapsed": True})  # with per-node UI hints
```
### Class Decoration

`@nb.fn()` can be applied to classes. All methods are wrapped with scope tracking, and the class name becomes a visual group in the DAG:

```python
@nb.fn()
class Agent:
    def think(self, query):
        nb.log(f"Thinking about: {query}")
        return {"plan": "respond"}

    def act(self, plan):
        nb.log(f"Acting on: {plan}")
        return "result"

agent = Agent()
agent.think("hello")
agent.act({"plan": "respond"})
```
Methods appear as `Agent.think` and `Agent.act` in the DAG, grouped under `Agent`.
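Wrapping every method of a class from a single decorator can be sketched in plain Python (this is our illustration, not Nebo's code): iterate over the class's functions and replace each with a wrapper that records the scoped `Class.method` name.

```python
# Hypothetical sketch of class decoration: record each call under its
# "Class.method" scope, mirroring how methods are grouped in the DAG.
import functools
import inspect

calls: list[str] = []

def fn(cls):
    for name, method in inspect.getmembers(cls, predicate=inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private/dunder methods

        @functools.wraps(method)
        def wrapper(self, *args, _m=method, _scope=f"{cls.__name__}.{name}", **kw):
            calls.append(_scope)       # record the scoped node name
            return _m(self, *args, **kw)

        setattr(cls, name, wrapper)
    return cls

@fn
class Agent:
    def think(self, query):
        return {"plan": "respond"}

    def act(self, plan):
        return "result"

a = Agent()
a.think("hello")
a.act({"plan": "respond"})
print(calls)  # ['Agent.think', 'Agent.act']
```

Note the default-argument trick (`_m=method`, `_scope=...`): it binds each wrapper to its own method at definition time instead of sharing the loop variable.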
### Automatic Materialization
Decorated functions appear in the DAG as soon as they execute for the first time — a call to nb.log(), nb.log_line(), etc. is not required. This keeps dependency chains intact when an intermediate function only orchestrates calls to other nodes without logging anything itself.
### `depends_on` -- Explicit dependency declaration
Some dependencies cannot be detected automatically (shared mutable state, class attributes, global variables). Use depends_on to declare these explicitly:
```python
@nb.fn()
def setup():
    """Initialize shared resources."""
    ...

@nb.fn(depends_on=[setup])
def process():
    """Uses resources initialized by setup."""
    ...
```
### `nb.log(message)` -- Text logging
Log a message to the current node. Messages appear in the terminal dashboard and are queryable via MCP tools.
```python
@nb.fn()
def train(data):
    nb.log(f"Training on {len(data)} samples")
    for epoch in range(10):
        loss = do_train(data)
        nb.log(f"Epoch {epoch}: loss={loss:.4f}")
```
### Typed metric helpers — `nb.log_line` / `log_bar` / `log_pie` / `log_scatter` / `log_histogram`
One function per chart type. The chart type locks on first emission per `(loggable, name)` pair — reusing a name with a different `log_*` function raises `ValueError`.

`log_line` is the only chart type that accumulates over time (every call appends another step). The other four are snapshots — re-emitting the same name overwrites the prior value, and they don't take `step` or `tags` kwargs.
```python
@nb.fn()
def train(model, data):
    # Line — accumulates; takes step / tags
    for epoch in range(100):
        loss = train_one_epoch(model, data)
        nb.log_line("loss", loss)               # scalar
        nb.log_line("lr", 3e-4, tags=["main"])  # tagged for UI filter

    # Snapshots — overwrite on re-emission, no step / tags
    nb.log_bar("counts", {"cat": 3, "dog": 5})                # {label: number}
    nb.log_pie("budget", {"prompt": 800, "completion": 200})  # {label: number}
    nb.log_scatter("embed_2d", {                              # {label: list[(x, y)]}
        "inliers": [(0.1, 0.2), (0.3, 0.4)],
        "outliers": [(2.0, -1.0)],
    })
    nb.log_histogram(                                         # {label: list[number]}
        "latencies",
        {"p50": [...], "p95": [...], "p99": [...]},
        colors=True,  # palette per label
    )
```
`log_scatter` and `log_histogram` accept `colors: bool = False`. With `colors=True` the UI distinguishes labels using the shared palette (in addition to per-label shapes for scatter); this is not recommended in comparison views, where the palette is reserved for run identity.
### `nb.log_cfg(cfg)` -- Configuration logging

Log configuration for the current node.

```python
@nb.fn()
def train(lr=0.001, epochs=50):
    nb.log_cfg({"lr": lr, "epochs": epochs})
    ...
```
### `nb.track(iterable, name=None, total=None)` -- Progress tracking

Wrap any iterable for tqdm-like progress tracking.

```python
@nb.fn()
def process(items):
    for item in nb.track(items, name="processing"):
        transform(item)
```
### `nb.log_image(image, *, name=None, step=None, points=None, boxes=None, circles=None, polygons=None, bitmask=None)` -- Image logging

Log images (PIL, NumPy arrays, or PyTorch tensors) for visual inspection, with optional geometric labels overlaid. Points are `[x, y]` (or a list of them); boxes are `[x1, y1, x2, y2]` in xyxy format; circles are `[x, y, r]`; polygons are `[[x, y], ...]`; bitmasks are 2D (HxW) or stacked (NxHxW). The UI's Settings pane > "Image labels" section exposes per-(loggable, image, key) visibility and opacity controls.
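The label shapes are easy to get wrong, so here is a small validation sketch of our own (the keyword names match the documented `log_image` signature; the checks themselves are not part of Nebo):

```python
# Hypothetical validator for the geometric label formats documented
# above. Labels built this way could be passed straight to nb.log_image.
def validate_labels(points=None, boxes=None, circles=None, polygons=None):
    if points is not None:
        assert all(len(p) == 2 for p in points), "points are [x, y]"
    if boxes is not None:
        assert all(len(b) == 4 for b in boxes), "boxes are [x1, y1, x2, y2]"
        # xyxy means top-left corner first, bottom-right corner second
        assert all(b[0] <= b[2] and b[1] <= b[3] for b in boxes), "xyxy order"
    if circles is not None:
        assert all(len(c) == 3 and c[2] >= 0 for c in circles), "circles are [x, y, r]"
    if polygons is not None:
        assert all(len(v) == 2 for poly in polygons for v in poly), \
            "polygons are [[x, y], ...]"
    return True

validate_labels(
    points=[[40, 40]],
    boxes=[[10, 20, 110, 220]],
    circles=[[64, 64, 12]],
    polygons=[[[0, 0], [50, 0], [25, 40]]],
)
```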
### `nb.log_audio(audio, sr=16000, name=None, step=None)` -- Audio logging
Log audio data for playback and analysis.
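For example, one second of a 440 Hz tone can be generated with the standard library alone (assuming `log_audio` accepts array-like samples in [-1, 1]; the `nb.log_audio` call below is shown commented as an illustration):

```python
# Build one second of audio at the default 16 kHz sample rate.
import math

sr = 16000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]  # A440
assert len(tone) == sr  # exactly one second of samples

# Hypothetical call, matching the documented signature:
# nb.log_audio(tone, sr=sr, name="a440")
```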
### `nb.md(description)` -- Workflow description

Set a workflow-level description (Markdown supported). Visible in MCP tools and the dashboard.

```python
nb.md("A pipeline that loads images, runs inference, and exports predictions.")
```
### `nb.ui()` -- Run-level UI defaults

Set default layout and display options for the web UI:

```python
nb.ui(layout="horizontal", view="dag", minimap=True, theme="dark")
```
### `nb.ask(question, options=None, timeout=None)` -- Human-in-the-loop

Pause the pipeline and ask the user a question via MCP or the terminal.

```python
@nb.fn()
def review(predictions):
    answer = nb.ask(
        "Model accuracy is 73%. Continue training?",
        options=["yes", "no", "retrain with more data"],
    )
    if answer == "no":
        return predictions
    ...
```
## CLI Reference

### Start the daemon server

```bash
nebo serve                                # foreground
nebo serve -d                             # background (daemon mode)
nebo serve --port 3000                    # custom port
nebo serve --no-store                     # disable .nebo file storage
nebo serve --store-dir /data              # write .nebo files into /data
nebo serve --api-token nb_…               # require a token on API requests
nebo serve --read public --write private  # default access modes when token is set
```
### Run a pipeline

```bash
nebo run my_pipeline.py
nebo run my_pipeline.py --name "experiment-1"
```
### Load a .nebo file

```bash
# local daemon
nebo load .nebo/2026-04-06_143000_run-1.nebo

# remote daemon (e.g. an HF Space) — events read locally, replayed via /events
nebo load run.nebo --url https://user-space.hf.space --api-token nb_…
```
### Deploy the daemon to a Hugging Face Space

```bash
pip install 'nebo[deploy]'
huggingface-cli login

# Public dashboard, private writes (defaults). Random token printed once.
nebo deploy --space-id <user>/nebo-test --from-source

# Fully private (read + write require token)
nebo deploy --space-id <user>/private-dash --read private --write private
```
After the Space builds, point the SDK at it:

```python
import nebo as nb

nb.init(url="https://<user>-nebo-test.hf.space", api_token="nb_…")
# or set NEBO_URL / NEBO_API_TOKEN in the environment.
```
### Check status, logs, errors

```bash
nebo status
nebo logs
nebo logs --run experiment-1 --node train --limit 50
nebo errors
nebo errors --run experiment-1
```
### Stop the daemon

```bash
nebo stop
```
### MCP integration

```bash
nebo mcp  # print Claude Code MCP config
```
## MCP Tools for AI Agents

Nebo exposes 21 MCP tools for querying, controlling, and writing data into pipelines from an AI agent (e.g. Claude). The daemon server must be running.
### Observation Tools

| Tool | Description |
|---|---|
| `nebo_get_graph` | Full DAG structure: nodes, edges, execution counts |
| `nebo_get_loggable_status` | Detailed status for one loggable: logs, metrics, errors, params |
| `nebo_get_logs` | Recent log entries, filterable by loggable and run |
| `nebo_get_metrics` | Metric time series for a loggable |
| `nebo_get_errors` | All errors with full tracebacks and node context |
| `nebo_get_description` | Workflow description and all node docstrings |
### Action Tools

| Tool | Description |
|---|---|
| `nebo_run_pipeline` | Start a pipeline script; returns a run ID |
| `nebo_stop_pipeline` | Stop a running pipeline by run ID |
| `nebo_restart_pipeline` | Stop and re-run a pipeline with the same args |
| `nebo_get_run_status` | Status of a specific run (running/completed/crashed) plus `metrics_index` for one-call metric discovery |
| `nebo_get_run_history` | List all runs with outcomes and timestamps |
| `nebo_get_source_code` | Read a pipeline source file |
| `nebo_write_source_code` | Write or patch a pipeline source file |
| `nebo_ask_user` | Send a question to the user via the terminal |
| `nebo_wait_for_event` | Block until a pipeline event occurs or the timeout elapses |
| `nebo_load_file` | Load a .nebo file into the daemon for viewing and Q&A |
| `nebo_chat` | Ask a question about a run (delegates to the Claude Code CLI) |
### Write Tools

These mirror the SDK's `nb.log_*` helpers so an external agent can push data into a run without owning the SDK process. Each accepts a single entry or a list. URL-based media is fetched server-side and persisted alongside the run.

| Tool | Description |
|---|---|
| `nebo_log_metric` | Push metric points (line / bar / pie / scatter / histogram) |
| `nebo_log_image` | Push images by `url` or `data` (base64) |
| `nebo_log_audio` | Push audio recordings by `url` or `data`, with optional `sr` |
| `nebo_log_text` | Push text log entries (defaults to the global loggable) |
## .nebo File Format

Runs are persisted as `.nebo` binary files using MessagePack serialization. Each file contains a header (magic, version, metadata) followed by append-only event entries. Use `nebo load` to replay a file into the daemon.
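The general shape of such a file can be illustrated with a stdlib-only sketch. Everything below is hypothetical: the real format uses MessagePack and its own header fields, whereas this stand-in uses JSON payloads with length prefixes purely to show the magic/version/append-only-events layout.

```python
# Hypothetical append-only event file: magic string, version byte, then
# length-prefixed event records (JSON here as a stand-in for MessagePack).
import io
import json
import struct

MAGIC = b"NEBO"

def write_events(buf, events, version=1):
    buf.write(MAGIC + struct.pack("<B", version))
    for ev in events:  # append-only: one record per event
        payload = json.dumps(ev).encode()
        buf.write(struct.pack("<I", len(payload)) + payload)

def read_events(buf):
    assert buf.read(4) == MAGIC, "not a nebo-style file"
    version = struct.unpack("<B", buf.read(1))[0]
    events = []
    while (header := buf.read(4)):  # read until EOF
        (n,) = struct.unpack("<I", header)
        events.append(json.loads(buf.read(n)))
    return version, events

buf = io.BytesIO()
write_events(buf, [{"type": "log", "msg": "hello"}, {"type": "metric", "loss": 0.1}])
buf.seek(0)
print(read_events(buf))
```

An append-only record stream like this is what makes replay (`nebo load`) straightforward: the reader just walks the records in order and re-emits them.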
Architecture
graph LR
A[Your Python Pipeline] --> B[Nebo SDK<br>@fn, log, track, ...]
B --> C[Daemon Server<br>FastAPI, port 7861]
B --> D[Terminal Dashboard<br>Rich]
C --> E[CLI<br>nebo]
C --> F[MCP Tools<br>Claude]
C --> G[Web UI]
Two execution modes:

- **Local mode** (default): in-process only; no daemon needed.
- **Server mode**: events stream to a persistent daemon via HTTP. Use `nebo serve` to start the daemon, then `nebo run` to execute pipelines.
The daemon can run on your laptop, in CI, or on a Hugging Face Space (`nebo deploy`). The same SDK code works against any of them — set `NEBO_URL` and `NEBO_API_TOKEN` to point at the target. When the daemon enforces auth, every API request must carry the token via the `X-Nebo-Token` header (HTTP) or the `?token=…` query param (browsers / WebSocket).
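The two token transports can be sketched with the standard library. The header and query-param names come from the docs above; the host, endpoint path, and token value are hypothetical placeholders.

```python
# Attaching the auth token on both transports described above.
import urllib.parse
import urllib.request

token = "nb_example"  # hypothetical token

# HTTP clients: X-Nebo-Token header ("/api/runs" is a placeholder path)
req = urllib.request.Request(
    "https://user-nebo-test.hf.space/api/runs",
    headers={"X-Nebo-Token": token},
)
# urllib normalizes header capitalization internally
assert req.get_header("X-nebo-token") == token

# Browsers / WebSockets: ?token=… query parameter
ws_url = "wss://user-nebo-test.hf.space/ws?" + urllib.parse.urlencode({"token": token})
print(ws_url)
```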
## API Reference

Module: `nebo`

| Function | Signature | Description |
|---|---|---|
| `fn` | `@fn()`, `@fn(depends_on=[...])`, `@fn(ui={...})` | Register a function/class as a DAG node |
| `log` | `log(message: str)` | Log a text message |
| `log_line` | `log_line(name, value, *, step=None, tags=None)` | Log a scalar line-chart datapoint |
| `log_bar` | `log_bar(name, value)` | Bar-chart snapshot (`{label: number}`); overwrites |
| `log_pie` | `log_pie(name, value)` | Pie-chart snapshot (`{label: number}`); overwrites |
| `log_scatter` | `log_scatter(name, value, *, colors=False)` | Labeled scatter snapshot (`{label: list[(x, y)]}`); overwrites |
| `log_histogram` | `log_histogram(name, value, *, colors=False)` | Labeled histogram snapshot (`{label: list[number]}`); overwrites |
| `log_cfg` | `log_cfg(cfg: dict)` | Log node configuration |
| `log_image` | `log_image(image, *, name=None, step=None, points=None, boxes=None, circles=None, polygons=None, bitmask=None)` | Log an image (optionally with geometric labels) |
| `log_audio` | `log_audio(audio, sr=16000, name=None, step=None)` | Log audio data |
| `track` | `track(iterable, name=None, total=None)` | Progress tracking |
| `md` | `md(description: str)` | Set workflow description |
| `ui` | `ui(layout, view, collapsed, minimap, theme)` | Set run-level UI defaults |
| `init` | `init(port, host, mode, terminal, dag_strategy, flush_interval, store, url=None, api_token=None)` | Manual initialization; pass `url` + `api_token` (or set `NEBO_URL`/`NEBO_API_TOKEN` env vars) to target a remote daemon |
| `ask` | `ask(question, options=None, timeout=None)` | Human-in-the-loop prompt |
| `show` | `show(*, run=None, node=None, metric=None, image=None, audio=None, logs=False, dag=False, width="100%", height=600)` | Jupyter-renderable iframe of a slice of a run |
| `get_state` | `get_state() -> SessionState` | Access the global state singleton |