Tracing and graph extraction for agents

AgentGlass

A local, step-through visual debugger for LangGraph + Google Gemini agents.

AgentGlass renders your agent's execution as an interactive graph — live, in your browser, with zero cloud dependency. Step forward and backward through every node execution, inspect exact input/output state at each step, and see inside nodes: the LLM call, its token counts, model name, and every tool call it made. One with block. No login. No data leaves your machine.

AgentGlass screenshot


Features

  • Step-through debugger — IDE-style ◀ ▶ controls to walk through the execution one node at a time. Click any dot on the timeline to jump to any step.
  • Live graph — nodes highlight blue (active), green (done), red (errored) as the run progresses. Edges trace the path taken.
  • Click-to-inspect — click any node to see the exact state that went in and the update that came out.
  • Nested sub-runs — expand inside agent or tools to see the LLM call, token counts, model name, and tool calls — like LangSmith's waterfall, but local.
  • Gemini-aware formatting — message arrays render as a chat transcript with role badges, function-call/response pairing, and finish-reason warnings.
  • Zero egress — runs entirely on localhost. Nothing is sent anywhere.

Installation

AgentGlass uses uv for dependency management.

# Add to your project
uv add agentglass

# Or install globally
uv tool install agentglass

From source:

git clone https://github.com/yourname/agentglass
cd agentglass
uv sync
uv pip install -e .

Dependencies (installed automatically): fastapi, uvicorn, langchain-core, langgraph, websockets.


Quickstart

Wrap your compiled graph in trace(). That's the entire integration.

from agentglass import trace

compiled = your_graph.compile()

with trace(compiled, port=8765):
    result = compiled.invoke({"input": "..."})

A browser tab opens at http://localhost:8765. Step through the run, click nodes, inspect state. Press Ctrl-C when done.


Examples

Mock agent (no API key needed)

Good for exploring the UI without any credentials. Simulates a Gemini-style tool-calling loop.

uv run examples/mock_agent.py
# examples/mock_agent.py
from agentglass import trace
from langgraph.graph import StateGraph, END
# ... build a graph with planner → tool_executor → planner loop

with trace(compiled, port=8765):
    compiled.invoke(initial_state)

The mock agent runs a planner → tool_executor → planner loop twice, so you can see the visit-count badges and step through all 5 executions.
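That control flow can be sketched in pure Python (an assumed shape of the mock agent, no LangGraph required): the planner runs, hands off to the tool executor, and loops back until it decides it is done. With two tool calls, the planner runs three times and the tool executor twice, giving the five executions mentioned above.

```python
# Simplified simulation of the planner -> tool_executor -> planner loop.
# tool_budget stands in for the planner's "should I call another tool?"
# decision; names here are illustrative, not the example's actual code.

def run_mock_loop(tool_budget=2):
    visits = []                  # ordered record of node executions
    remaining = tool_budget
    while True:
        visits.append("planner")
        if remaining == 0:       # planner decides it is finished
            break
        visits.append("tool_executor")
        remaining -= 1
    return visits

print(run_mock_loop())
# ['planner', 'tool_executor', 'planner', 'tool_executor', 'planner']
```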


Real Gemini agent

export GOOGLE_API_KEY=your-key
uv run examples/gemini_react_agent.py
# examples/gemini_react_agent.py
import os
from agentglass import trace
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode

model = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0)
# ... define tools, nodes, edges ...

with trace(compiled, port=8765):
    result = compiled.invoke({
        "messages": [HumanMessage(content="What's the weather in Tokyo and London?")]
    })

Clicking the agent node at any step shows the nested LLM call — model name (gemini-2.5-flash), token counts (↑406 ↓333 = 739), and the tool calls it requested. Clicking tools shows each tool that ran and what it returned.
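The arithmetic behind a badge like "↑406 ↓333 = 739" is just input plus output tokens, summed per LLM call. A minimal sketch, using the numbers quoted above (the dict keys mirror LangChain's `usage_metadata` naming, but this is illustration, not live API output):

```python
# Each nested LLM call carries its own token usage; the node badge is
# the per-call total (and could be summed across calls the same way).

calls = [{"input_tokens": 406, "output_tokens": 333}]

def total_tokens(calls):
    return sum(c["input_tokens"] + c["output_tokens"] for c in calls)

print(total_tokens(calls))  # 739
```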


Existing project (drop-in)

If you have an existing LangGraph agent, no changes to your node functions are required:

from agentglass import trace

# your existing code — untouched
app = graph.compile()

# wrap it
with trace(app, port=8765):
    for step in app.stream(state, stream_mode="values"):
        print_messages(step.get("messages", []))

How it works

your agent                AgentGlass               browser
──────────                ──────────               ───────
graph.compile()  ──────►  extract graph structure
                          attach callback tracer
compiled.invoke() ──────► on_chain_start  ────────► WebSocket event
                          on_chat_model_start        → node lights up
                          on_llm_end                 → LLM child recorded
                          on_tool_start/end           → tool child recorded
                          on_chain_end  ──────────►  WebSocket event
                                                     → node turns green
                                                     → step available
user clicks node  ◄────────────────────────────────  REST /api/events

The tracer uses a node stack to associate sub-runs (LLM calls, tool executions, condition functions) with their parent node — this works correctly even on LangGraph 1.x, which reuses the same run_id across an entire invocation.
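The node-stack idea can be sketched as follows. This is a simplified illustration, not the actual AgentGlass implementation: each node start pushes a name, each sub-run (LLM or tool callback) is attributed to whatever node is on top of the stack, and each node end pops. Because attribution uses the stack rather than run IDs, reused run_ids don't matter.

```python
# Minimal node-stack tracer sketch: sub-runs are attached to the node
# currently on top of the stack, independent of run_id.

class StackTracer:
    def __init__(self):
        self.stack = []    # names of nodes currently executing
        self.events = []   # flat event log (what would go over the WebSocket)

    def on_chain_start(self, name):
        self.stack.append(name)
        self.events.append(("node_start", name))

    def on_llm_end(self, model, tokens):
        # Attribute the LLM call to the innermost active node.
        parent = self.stack[-1] if self.stack else None
        self.events.append(("llm_child", parent, model, tokens))

    def on_chain_end(self, name):
        self.stack.pop()
        self.events.append(("node_end", name))

tracer = StackTracer()
tracer.on_chain_start("agent")
tracer.on_llm_end("gemini-2.5-flash", 739)  # recorded as a child of "agent"
tracer.on_chain_end("agent")
```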


UI walkthrough

Control                      Action
◀ / ▶ buttons                Step forward / backward
⏮ / ⏭ buttons                Jump to first / last step
← / → keys                   Step forward / backward
Home / End                   Jump to first / last step
Click a timeline dot         Jump to that step
Click a node in the graph    Jump to that node's most recent execution
Click a sub-run row          Expand input / output / tool args

Roadmap

These are not in the current release but are planned or under consideration:

  • State diff view — between consecutive visits to the same node, highlight exactly which keys changed
  • Time-travel — pause at a node, edit state, resume
  • Multi-run comparison — load two runs side by side and see where they diverged
  • Persistence — SQLite backend so runs survive server restarts; a history page
  • Cost / latency overlay — token cost and wall-clock time badges on each node
  • Export — bundle a run into a self-contained HTML file for sharing
  • Framework expansion — CrewAI, raw LangChain Runnable graphs
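As a hint of what the planned state-diff view involves (hypothetical sketch, not shipped code): comparing the state at two consecutive visits to a node reduces to a key-wise dict comparison.

```python
# Report which top-level state keys changed between two node visits,
# including keys that were added or removed.

def state_diff(before, after):
    changed = {k for k in before.keys() | after.keys()
               if before.get(k) != after.get(k)}
    return sorted(changed)

prev = {"messages": ["hi"], "steps": 1}
curr = {"messages": ["hi", "tool result"], "steps": 2, "done": False}
print(state_diff(prev, curr))  # ['done', 'messages', 'steps']
```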

Contributing

Contributions are welcome! AgentGlass is designed with clear separation of concerns — most changes touch only one component.

# Quick setup
git clone https://github.com/subhranil2605/agentglass
cd agentglass
uv sync
uv run pytest tests/ -v

See CONTRIBUTING.md for:

  • Development setup and architecture overview
  • Code style and testing guidelines
  • How to report bugs and propose features
  • Pull request process

License

MIT — see LICENSE.
