Execution graph debugger — see what your agent / pipeline actually does.

flow-xray

See what your agent actually does.
One decorator, one HTML file — a visual execution graph instead of logs.


from flow_xray import trace

@trace
def call_llm(prompt):
    return openai.chat(prompt)

@trace
def agent(query):
    plan = call_llm(f"plan: {query}")
    return call_llm(f"answer based on: {plan}")

result = trace.run(agent, "weather in Tokyo?")
result.to_html("trace.html")

[Screenshot: flow-xray trace viewer]

Open trace.html — you get a DAG of every traced step with inputs, outputs, latency, tokens, cost, and errors. Click a node to inspect. No server, no account, no log viewer — one local file.

Install

pip install flow-xray

Usage

Decorator + trace.run

from flow_xray import trace

@trace
def step_a(x):
    return x + 1

@trace
def pipeline(x):
    return step_a(x) * 2

result = trace.run(pipeline, 5)
result.to_html("pipeline.html")

CLI

flow-xray run my_agent.py --html trace.html

The script must use @trace on the functions you want captured. The CLI provides the session; just call your functions normally.

Async support

@trace works with async def out of the box — no extra config:

from flow_xray import trace
import asyncio

@trace
async def call_api(query):
    await asyncio.sleep(0.1)  # simulate async I/O
    return {"answer": query}

@trace
async def agent(query):
    result = await call_api(query)
    return result["answer"]

result = trace.run(lambda: asyncio.run(agent("hello")))
result.to_html("async_trace.html")

Token / cost tracking

Token usage and estimated cost are auto-extracted from OpenAI response objects, or you can set them manually:

@trace
def call_llm(prompt):
    resp = openai.chat.completions.create(...)
    trace.meta(model=resp.model,
               prompt_tokens=resp.usage.prompt_tokens,
               completion_tokens=resp.usage.completion_tokens)
    return resp.choices[0].message.content

What you see

  • Nodes = function calls (name + latency + tokens)
  • Edges = caller → callee
  • Green = OK, Red = error, Yellow = slow (>1s)
  • Header = total nodes, latency, tokens, estimated cost
  • Click a node → side panel shows inputs, output, error, timing, model, tokens, cost

Why this exists

Langfuse, Helicone, and LangSmith give you timelines and logs.

But when your agent pipeline branches, retries, or chains six tools, you don't need another table. You need a graph.

flow-xray is not an agent framework. It's the layer below them — like Chrome DevTools is to browsers.

Compatibility

  • Python 3.10, 3.11, 3.12, 3.13, 3.14 — tested
  • Sync and async functions — both supported
  • Any Python code — not limited to LLM calls; works with any function you decorate
  • Frameworks — works alongside LangGraph, CrewAI, OpenAI SDK, or plain Python

How it works

@trace wraps functions (sync and async). When called inside a trace.run() session (or flow-xray run CLI), it records:

  • function name
  • bound arguments
  • return value or exception
  • wall-clock latency
  • token usage and estimated cost (auto or manual)
  • parent/child relationships (call stack → DAG)

result.to_html() embeds the trace as JSON in a self-contained HTML page that renders via WASM Graphviz (CDN, works offline after first load).
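The self-contained-page idea can be sketched like this (the function name `render_trace_page` and the markup are hypothetical, not flow-xray's actual output): serialize the trace to JSON, inline it in a `<script type="application/json">` tag, and let a renderer script in the same page read it.

```python
import json

# Hypothetical sketch of the embed-JSON-in-HTML idea behind to_html().
# Everything here (names, markup) is illustrative, not the library's output.
def render_trace_page(trace_data: dict) -> str:
    payload = json.dumps(trace_data)
    return (
        "<!doctype html><html><head><meta charset='utf-8'>"
        "<title>trace</title></head><body>"
        f"<script id='trace-data' type='application/json'>{payload}</script>"
        "<div id='graph'></div>"
        "<!-- a renderer (e.g. WASM Graphviz) reads #trace-data and draws the DAG -->"
        "</body></html>"
    )


page = render_trace_page({"nodes": [{"name": "agent", "latency_s": 0.4}]})
```

Because the data rides inside the page itself, the file needs no server and no backend; only the renderer script is fetched, which matches the "works offline after first load" behavior described above.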

Also included

A scalar autodiff core (a micrograd-style Value graph with DOT/JSON export and a stepping debugger) is also included. It is available via the flow-xray dot CLI and from flow_xray import Value. See examples/ and plan.md.
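For readers unfamiliar with the micrograd pattern, here is a minimal, self-contained sketch of what a scalar Value graph does (illustrative only; flow-xray's actual Value API may differ): each operation records its parent nodes, and backward() walks the graph in reverse topological order accumulating gradients.

```python
# Minimal micrograd-style scalar autodiff sketch. Illustrative only;
# the library's real Value class may have a different API.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological sort, then propagate gradients from output to leaves
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()


x = Value(3.0)
y = x * x + x      # y = x^2 + x = 12; dy/dx = 2x + 1 = 7 at x = 3
y.backward()
```

The same parent-tracking that enables backpropagation is what makes DOT/JSON export and a stepping debugger natural: the computation is already a graph.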

License

MIT
