Nebo

Nebo is a modern logging SDK for multi-modal data. Decorate your functions with @nb.fn() and call nb.log() to write logs; nebo automatically infers a DAG from your call graph.

Why Nebo?

Nebo provides function-level logging: it captures metrics, images, audio, and text at the granularity of individual functions, so you can monitor the inputs, outputs, and execution flow of your code. Global logs (logs not bound to a particular function) are also supported. This enables observability for applications such as:

  • Agentic workflows with multimodal data
  • DAG-structured data-processing pipelines
  • ML training + inference

Features

  • Captured log types: text, metrics, images, audio, progress
  • Automatically infers a DAG from your call graph
  • CLI, MCP server, and agent skill so AI agents can query runs
  • MCP write tools so external agents can push metrics, images, audio, and text into a run
  • Fully self-contained log file per run
  • Rich terminal UI
  • Mobile-first web UI
  • Notebook embedding via nb.show() (Jupyter-renderable iframe of any slice of a run)
  • One-command deploy to a Hugging Face Space (nebo deploy) with public/private read+write modes

Nebo is in active development; features will roll out in line with its core principles.

Installation

pip install nebo

The CLI entry point is nebo:

nebo --help

Quick Start

import nebo as nb

@nb.fn()
def load_data(path: str = "data.csv") -> list[dict]:
    """Load records from a file."""
    records = [{"id": i, "value": i * 0.5} for i in range(100)]
    nb.log(f"Loaded {len(records)} records from {path}")
    return records

@nb.fn()
def transform(records: list[dict]) -> list[dict]:
    """Normalize values."""
    out = []
    for r in nb.track(records, name="transforming"):
        out.append({**r, "value": r["value"] / 50.0})
    nb.log(f"Transformed {len(out)} records")
    nb.log_line("record_count", float(len(out)))
    return out

def run():
    """Main pipeline entry point."""
    records = load_data()
    result = transform(records)
    return result

if __name__ == "__main__":
    run()

Running this produces a Rich terminal display showing the DAG, node execution counts, logs, and progress bars. The DAG edges (run -> load_data, load_data -> transform) are inferred automatically from data flow -- no manual wiring required.

Core Concepts

@nb.fn() -- Register a function as a DAG node

Every function decorated with @nb.fn() becomes a node in the pipeline DAG. Edges are inferred from data flow: when a node's return value is passed as an argument to another node, an edge is created from the producer to the consumer.

@nb.fn()
def load_data():
    return [1, 2, 3]

@nb.fn()
def transform(data):
    return [x * 2 for x in data]

def run():
    records = load_data()        # edge: run -> load_data (no data dependency)
    result = transform(records)  # edge: load_data -> transform (data flows from load_data)
    return result

When a child node receives no node-produced arguments, the edge falls back to the calling parent node.
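The data-flow inference can be pictured with a small self-contained sketch (an illustrative model, not Nebo's actual implementation): tag each node's return value with the node that produced it, then record an edge whenever a tagged value shows up as an argument to another node.

```python
import functools

EDGES = set()          # (producer, consumer) pairs
_PROVENANCE = {}       # id(value) -> name of the node that produced it

def fn(func):
    """Toy stand-in for @nb.fn(): infers edges from data flow."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Any argument that was produced by another node creates an edge.
        for value in list(args) + list(kwargs.values()):
            producer = _PROVENANCE.get(id(value))
            if producer is not None:
                EDGES.add((producer, func.__name__))
        result = func(*args, **kwargs)
        _PROVENANCE[id(result)] = func.__name__  # remember who made this value
        return result
    return wrapper

@fn
def load_data():
    return [1, 2, 3]

@fn
def transform(data):
    return [x * 2 for x in data]

transform(load_data())
print(EDGES)  # {('load_data', 'transform')}
```

Tracking provenance by object identity is the simplification here; it is enough to show why passing `load_data()`'s result into `transform` yields the `load_data -> transform` edge rather than `run -> transform`.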

You can use it in several ways:

@nb.fn              # bare decorator
@nb.fn()            # with parentheses
@nb.fn(depends_on=[other_fn])  # with explicit dependencies
@nb.fn(ui={"collapsed": True})  # with per-node UI hints

Class Decoration

@nb.fn() can be applied to classes. All methods are wrapped with scope tracking, and the class name becomes a visual group in the DAG:

@nb.fn()
class Agent:
    def think(self, query):
        nb.log(f"Thinking about: {query}")
        return {"plan": "respond"}

    def act(self, plan):
        nb.log(f"Acting on: {plan}")
        return "result"

agent = Agent()
agent.think("hello")
agent.act({"plan": "respond"})

Methods appear as Agent.think and Agent.act in the DAG, grouped under Agent.

Automatic Materialization

Decorated functions appear in the DAG as soon as they execute for the first time — a call to nb.log(), nb.log_line(), etc. is not required. This keeps dependency chains intact when an intermediate function only orchestrates calls to other nodes without logging anything itself.

depends_on -- Explicit dependency declaration

Some dependencies cannot be detected automatically (shared mutable state, class attributes, global variables). Use depends_on to declare these explicitly:

@nb.fn()
def setup():
    """Initialize shared resources."""
    ...

@nb.fn(depends_on=[setup])
def process():
    """Uses resources initialized by setup."""
    ...

nb.log(message) -- Text logging

Log a message to the current node. Messages appear in the terminal dashboard and are queryable via MCP tools.

@nb.fn()
def train(data):
    nb.log(f"Training on {len(data)} samples")
    for epoch in range(10):
        loss = do_train(data)
        nb.log(f"Epoch {epoch}: loss={loss:.4f}")

Typed metric helpers -- nb.log_line / log_bar / log_pie / log_scatter / log_histogram

One function per chart type. The chart type locks on first emission per (loggable, name) pair — reusing a name with a different log_* function raises ValueError.

log_line and log_scatter accumulate over time — every call appends another emission with an auto-incrementing step. log_bar / log_pie / log_histogram are snapshots — re-emitting the same name overwrites the prior value, and they don't take step or tags kwargs.

@nb.fn()
def train(model, data):
    # Line — accumulates; takes step / tags
    for epoch in range(100):
        loss = train_one_epoch(model, data)
        nb.log_line("loss", loss)                                  # scalar
        nb.log_line("lr", 3e-4, tags=["main"])                     # tagged for UI filter

    # Scatter — accumulates too; one or more {label: [(x, y), ...]} per call
    for i, (point, cluster) in enumerate(detections):
        nb.log_scatter("embed_2d", {cluster: [point]})             # step auto-advances

    # Snapshots — overwrite on re-emission, no step / tags
    nb.log_bar("counts", {"cat": 3, "dog": 5})                     # {label: number}
    nb.log_pie("budget", {"prompt": 800, "completion": 200})       # {label: number}
    nb.log_histogram(                                              # {label: list[number]}
        "latencies",
        {"p50": [...], "p95": [...], "p99": [...]},
        colors=True,                                               # palette per label
    )

log_scatter and log_histogram accept colors: bool = False. With colors=True the UI distinguishes labels using the shared palette (in addition to per-label shapes for scatter); not recommended in comparison views, where the palette is reserved for run identity.

Clicking any datapoint on a line or scatter chart in the web UI sets a global step filter — the timeline scrubber switches to Step mode, the active step is marked on every line/scatter chart, and the per-node logs/images/audio panels filter to entries whose step matches. Click the same point again or double-click the scrubber to clear.

nb.log_cfg(cfg) -- Configuration logging

Log configuration for the current node.

@nb.fn()
def train(lr=0.001, epochs=50):
    nb.log_cfg({"lr": lr, "epochs": epochs})
    ...

nb.track(iterable, name=None, total=None) -- Progress tracking

Wrap any iterable for tqdm-like progress tracking.

@nb.fn()
def process(items):
    for item in nb.track(items, name="processing"):
        transform(item)
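Conceptually, `nb.track` behaves like a generator that yields items while reporting progress. A simplified stand-in (not the real implementation, which renders Rich progress bars) looks like this:

```python
def track(iterable, name=None, total=None):
    """Simplified stand-in for nb.track: yield items, reporting progress."""
    if total is None:
        try:
            total = len(iterable)
        except TypeError:
            total = None  # unknown length, e.g. a generator
    label = name or "progress"
    for i, item in enumerate(iterable, start=1):
        suffix = f"/{total}" if total is not None else ""
        print(f"{label}: {i}{suffix}")
        yield item

doubled = [x * 2 for x in track([1, 2, 3], name="doubling")]
print(doubled)  # [2, 4, 6]
```

Because it yields lazily, the wrapped iterable streams through unchanged, which is why `nb.track` can drop into any existing `for` loop.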

nb.log_image(image, *, name=None, step=None, points=None, boxes=None, circles=None, polygons=None, bitmask=None) -- Image logging

Log images (PIL, NumPy arrays, or PyTorch tensors) for visual inspection, with optional geometric labels overlaid. Points are [x, y] (or a list of them); boxes are [x1, y1, x2, y2] in xyxy format; circles are [x, y, r]; polygons are [[x, y], ...]; bitmasks are 2D (HxW) or stacked (NxHxW). The UI's Settings pane > "Image labels" section exposes per-(loggable, image, key) visibility and opacity controls.
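The expected label shapes can be made concrete with a few illustrative validators (these helpers are not part of the SDK; they just encode the documented formats):

```python
# Shape checks matching the documented geometric-label formats.
def is_point(v):                     # [x, y]
    return isinstance(v, (list, tuple)) and len(v) == 2

def is_box(v):                       # [x1, y1, x2, y2], xyxy convention
    return (isinstance(v, (list, tuple)) and len(v) == 4
            and v[0] <= v[2] and v[1] <= v[3])

def is_circle(v):                    # [x, y, r]
    return isinstance(v, (list, tuple)) and len(v) == 3 and v[2] >= 0

def is_polygon(v):                   # [[x, y], ...], at least 3 vertices
    return (isinstance(v, (list, tuple)) and len(v) >= 3
            and all(is_point(p) for p in v))

assert is_point([10, 20])
assert is_box([0, 0, 64, 48])
assert is_circle([32, 32, 5])
assert is_polygon([[0, 0], [10, 0], [5, 8]])
```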

nb.log_audio(audio, sr=16000, name=None, step=None) -- Audio logging

Log audio data for playback and analysis.

nb.md(description) -- Workflow description

Set a workflow-level description (Markdown supported). Visible in MCP tools and the dashboard.

nb.md("A pipeline that loads images, runs inference, and exports predictions.")

nb.ui() -- Run-level UI defaults

Set default layout and display options for the web UI:

nb.ui(layout="horizontal", view="dag", minimap=True, theme="dark")

nb.ask(question, options=None, timeout=None) -- Human-in-the-loop

Pause the pipeline and ask the user a question via MCP or the terminal.

@nb.fn()
def review(predictions):
    answer = nb.ask(
        "Model accuracy is 73%. Continue training?",
        options=["yes", "no", "retrain with more data"]
    )
    if answer == "no":
        return predictions
    ...

CLI Reference

Start the daemon server

nebo serve                              # foreground
nebo serve -d                           # background (daemon mode)
nebo serve --port 3000                  # custom port
nebo serve --no-store                   # disable .nebo file storage
nebo serve --store-dir /data            # write .nebo files into /data
nebo serve --api-token nb_…             # require a token on API requests
nebo serve --read public --write private  # default access modes when token is set

Run a pipeline

nebo run my_pipeline.py
nebo run my_pipeline.py --name "experiment-1"

Load a .nebo file

# local daemon
nebo load .nebo/2026-04-06_143000_run-1.nebo

# remote daemon (e.g. an HF Space) — events read locally, replayed via /events
nebo load run.nebo --url https://user-space.hf.space --api-token nb_…

Deploy the daemon to a Hugging Face Space

pip install 'nebo[deploy]'
huggingface-cli login

# Public dashboard, private writes (defaults). Random token printed once.
nebo deploy --space-id <user>/nebo-test --from-source

# Fully private (read + write require token)
nebo deploy --space-id <user>/private-dash --read private --write private

After the Space builds, point the SDK at it:

import nebo as nb
nb.init(url="https://<user>-nebo-test.hf.space", api_token="nb_…")
# or set NEBO_URL / NEBO_API_TOKEN in the environment.

Check status, logs, errors

nebo status
nebo logs
nebo logs --run experiment-1 --node train --limit 50
nebo errors
nebo errors --run experiment-1

Stop the daemon

nebo stop

MCP integration

nebo mcp   # print Claude Code MCP config

MCP Tools for AI Agents

Nebo exposes 21 MCP tools for querying, controlling, and writing data into pipelines from an AI agent (e.g., Claude). The daemon server must be running.

Observation Tools

Tool Description
nebo_get_graph Full DAG structure: nodes, edges, execution counts
nebo_get_loggable_status Detailed status for one loggable: logs, metrics, errors, params
nebo_get_logs Recent log entries, filterable by loggable and run
nebo_get_metrics Metric time series for a loggable
nebo_get_errors All errors with full tracebacks and node context
nebo_get_description Workflow description and all node docstrings

Action Tools

Tool Description
nebo_run_pipeline Start a pipeline script, returns a run ID
nebo_stop_pipeline Stop a running pipeline by run ID
nebo_restart_pipeline Stop and re-run a pipeline with same args
nebo_get_run_status Status of a specific run (running/completed/crashed) plus metrics_index for one-call metric discovery
nebo_get_run_history List all runs with outcomes and timestamps
nebo_get_source_code Read a pipeline source file
nebo_write_source_code Write or patch a pipeline source file
nebo_ask_user Send a question to the user via the terminal
nebo_wait_for_event Block until a pipeline event occurs or timeout elapses
nebo_load_file Load a .nebo file into the daemon for viewing and Q&A
nebo_chat Ask a question about a run (delegates to Claude Code CLI)

Write Tools

These mirror the SDK's nb.log_* helpers so an external agent can push data into a run without owning the SDK process. Each accepts a single entry or a list. URL-based media is fetched server-side and persisted alongside the run.

Tool Description
nebo_log_metric Push metric points (line / bar / pie / scatter / histogram)
nebo_log_image Push images by url or data (base64)
nebo_log_audio Push audio recordings by url or data, with optional sr
nebo_log_text Push text log entries (defaults to the global loggable)

.nebo File Format

Runs are persisted as .nebo binary files using MessagePack serialization. Each file contains a header (magic, version, metadata) followed by append-only event entries. Use nebo load to replay a file into the daemon.
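The layout (a header followed by append-only event entries) can be sketched with a simplified stand-in. This sketch uses length-prefixed JSON in place of MessagePack, and the magic bytes and header fields are placeholders, since the actual encoding is not documented here:

```python
import io
import json
import struct

MAGIC = b"DEMO"  # placeholder; the real .nebo magic bytes differ

def write_run(buf, header, events):
    """Append-only writer: header first, then each event in order."""
    buf.write(MAGIC)
    for obj in [header, *events]:
        payload = json.dumps(obj).encode()
        buf.write(struct.pack(">I", len(payload)))  # 4-byte length prefix
        buf.write(payload)

def read_run(buf):
    """Replay a run: first entry is the header, the rest are events."""
    assert buf.read(4) == MAGIC
    entries = []
    while (prefix := buf.read(4)):
        (size,) = struct.unpack(">I", prefix)
        entries.append(json.loads(buf.read(size)))
    return entries[0], entries[1:]

buf = io.BytesIO()
write_run(buf, {"version": 1, "run": "demo"}, [{"event": "log", "msg": "hi"}])
buf.seek(0)
header, events = read_run(buf)
print(header, events)
```

The append-only design is what makes `nebo load` a straightforward replay: the reader walks the entries in write order and re-emits each event.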

Architecture

graph LR
    A[Your Python Pipeline] --> B[Nebo SDK<br>@fn, log, track, ...]
    B --> C[Daemon Server<br>FastAPI, port 7861]
    B --> D[Terminal Dashboard<br>Rich]
    C --> E[CLI<br>nebo]
    C --> F[MCP Tools<br>Claude]
    C --> G[Web UI]

Two execution modes:

  • Local mode (default): In-process only. No daemon needed.
  • Server mode: Events stream to a persistent daemon via HTTP. Use nebo serve to start the daemon, then nebo run to execute pipelines.

The daemon can run on your laptop, in CI, or on a Hugging Face Space (nebo deploy). The same SDK code works against any of them — set NEBO_URL and NEBO_API_TOKEN to point at the target. When the daemon enforces auth, every API request must carry the token via the X-Nebo-Token header (HTTP) or the ?token=… query param (browsers / WebSocket).

API Reference

Module: nebo

Function Signature Description
fn @fn(), @fn(depends_on=[...]), @fn(ui={...}) Register a function/class as a DAG node
log log(message: str) Log a text message
log_line log_line(name, value, *, step=None, tags=None) Log a scalar line-chart datapoint
log_bar log_bar(name, value) Bar-chart snapshot ({label: number}); overwrites
log_pie log_pie(name, value) Pie-chart snapshot ({label: number}); overwrites
log_scatter log_scatter(name, value, *, step=None, tags=None, colors=False) Labeled scatter ({label: list[(x, y)]}); accumulates, step auto-increments
log_histogram log_histogram(name, value, *, colors=False) Labeled histogram snapshot ({label: list[number]}); overwrites
log_cfg log_cfg(cfg: dict) Log node configuration
log_image log_image(image, *, name=None, step=None, points=None, boxes=None, circles=None, polygons=None, bitmask=None) Log an image (optionally with geometric labels)
log_audio log_audio(audio, sr=16000, name=None, step=None) Log audio data
track track(iterable, name=None, total=None) Progress tracking
md md(description: str) Set workflow description
ui ui(layout, view, collapsed, minimap, theme) Set run-level UI defaults
init init(port, host, mode, terminal, dag_strategy, flush_interval, store, url=None, api_token=None) Manual initialization. Pass url+api_token (or set NEBO_URL/NEBO_API_TOKEN env vars) to target a remote daemon
ask ask(question, options=None, timeout=None) Human-in-the-loop prompt
show show(*, run=None, node=None, metric=None, image=None, audio=None, logs=False, dag=False, width="100%", height=600) Jupyter-renderable iframe of a slice of a run
get_state get_state() -> SessionState Access the global state singleton
