Execution graph debugger — see what your agent / pipeline actually does.
One decorator, one HTML file — a visual execution graph instead of logs.
```python
from flow_xray import trace

@trace
def call_llm(prompt):
    return openai.chat(prompt)

@trace
def agent(query):
    plan = call_llm(f"plan: {query}")
    return call_llm(f"answer based on: {plan}")

result = trace.run(agent, "weather in Tokyo?")
result.to_html("trace.html")
```
Open trace.html — you get a DAG of every traced step with inputs, outputs, latency, tokens, cost, and errors. Click a node to inspect. No server, no account, no log viewer — one local file.
Install
```shell
pip install flow-xray
```
Usage
Decorator + trace.run
```python
from flow_xray import trace

@trace
def step_a(x):
    return x + 1

@trace
def pipeline(x):
    return step_a(x) * 2

result = trace.run(pipeline, 5)
result.to_html("pipeline.html")
```
CLI
```shell
flow-xray run my_agent.py --html trace.html
```
The script must use @trace on the functions you want captured. The CLI provides the session; just call your functions normally.
Token / cost tracking
Token usage and estimated cost are auto-extracted from OpenAI response objects, or you can set them manually:
```python
@trace
def call_llm(prompt):
    resp = openai.chat.completions.create(...)
    trace.meta(
        model=resp.model,
        prompt_tokens=resp.usage.prompt_tokens,
        completion_tokens=resp.usage.completion_tokens,
    )
    return resp.choices[0].message.content
```
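Cost estimation from token counts is just a per-model price lookup. A minimal sketch of the idea (the prices and the `estimate_cost` helper are illustrative, not flow-xray's actual pricing table or API):

```python
# Illustrative per-1K-token prices in USD: (prompt, completion).
# These are NOT flow-xray's real numbers -- just an example of the technique.
PRICE_PER_1K = {"gpt-4o-mini": (0.00015, 0.0006)}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimated USD cost for one call; 0.0 for unknown models."""
    p_in, p_out = PRICE_PER_1K.get(model, (0.0, 0.0))
    return prompt_tokens / 1000 * p_in + completion_tokens / 1000 * p_out

print(estimate_cost("gpt-4o-mini", 2000, 500))  # ≈ 0.0006 USD
```

Manually supplied `trace.meta(...)` values feed into the same per-node and header totals as the auto-extracted ones.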
What you see
- Nodes = function calls (name + latency + tokens)
- Edges = caller → callee
- Green = OK, Red = error, Yellow = slow (>1s)
- Header = total nodes, latency, tokens, estimated cost
- Click a node → side panel shows inputs, output, error, timing, model, tokens, cost
Why this exists
Langfuse, Helicone, and LangSmith give you timelines and logs.
But when your agent pipeline branches, retries, or chains six tools, you don't need another table. You need a graph.
flow-xray is not an agent framework. It's the layer below them, the way Chrome DevTools sits below the browser.
How it works
@trace wraps functions. When a wrapped function is called inside a trace.run() session (or under the flow-xray run CLI), it records:
- function name
- bound arguments
- return value or exception
- wall-clock latency
- token usage and estimated cost (auto or manual)
- parent/child relationships (call stack → DAG)
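The mechanism can be sketched with a toy decorator that tracks the current caller in a context variable (a simplified illustration of the technique, not flow-xray's internals — `run`, the node dicts, and `_current` are made up here):

```python
import contextvars
import functools
import time

# The node currently executing; children attach themselves to it.
_current = contextvars.ContextVar("current_node", default=None)

def trace(fn):
    """Toy @trace: records each call as a node under its caller."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        parent = _current.get()
        node = {"name": fn.__name__, "children": [], "error": None}
        if parent is not None:
            parent["children"].append(node)  # call stack -> DAG edge
        token = _current.set(node)
        start = time.perf_counter()
        try:
            node["output"] = fn(*args, **kwargs)
            return node["output"]
        except Exception as e:
            node["error"] = repr(e)
            raise
        finally:
            node["latency"] = time.perf_counter() - start
            _current.reset(token)
    return wrapper

def run(fn, *args):
    """Toy trace.run: opens a root session and returns the call tree."""
    root = {"name": "<root>", "children": []}
    token = _current.set(root)
    try:
        fn(*args)
    finally:
        _current.reset(token)
    return root

@trace
def step_a(x):
    return x + 1

@trace
def pipeline(x):
    return step_a(x) * 2

tree = run(pipeline, 5)
# tree["children"][0] is pipeline; its child is step_a
```

The context variable is what turns a plain call stack into parent/child edges: each wrapper reads the current node as its parent, then installs itself as the current node for the duration of the call.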
result.to_html() embeds the trace as JSON in a self-contained HTML page that renders the graph via WASM Graphviz (loaded from a CDN; works offline after the first load).
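The self-contained-page trick amounts to serializing the trace into a `<script type="application/json">` tag that a client-side renderer reads. A minimal sketch of that pattern (simplified; the actual page and its renderer differ):

```python
import json

def to_html(trace_data):
    """Embed trace data as JSON inside a single HTML file.

    A script on the page can later read the #trace-data element,
    json.loads() its text, and draw the DAG entirely client-side.
    """
    payload = json.dumps(trace_data)
    return (
        "<!doctype html><html><body>"
        f'<script id="trace-data" type="application/json">{payload}</script>'
        '<div id="graph"></div>'
        "<!-- a renderer script would parse #trace-data and draw the DAG -->"
        "</body></html>"
    )

html = to_html({"nodes": [{"name": "agent", "latency": 0.42}]})
```

Because the data travels inside the file, the page needs no server and no upload: opening it from disk is enough.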
Also included
A scalar autodiff core (a micrograd-style `Value` graph with DOT/JSON export and a stepping debugger) ships in the same package, via the `flow-xray dot` CLI and `from flow_xray import Value`. See `examples/` and `plan.md`.
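A micrograd-style `Value` records the expression graph as it computes forward, then backpropagates gradients through it. A minimal sketch of the idea (not the package's actual implementation, and trimmed to `+` and `*` only):

```python
class Value:
    """Scalar that remembers which values produced it, for backprop."""

    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # set by the op that created this node

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
y = a * b + a
y.backward()
# dy/da = b + 1 = 4.0, dy/db = a = 2.0
```

The same graph that drives backprop is what a DOT/JSON export can walk, which is how a `Value` graph and an execution trace end up sharing one visualizer.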
License
MIT