Agent observability: trace any AI agent and store runs in MongoDB (and more)
AGENT SNOOP 🔍
Lightweight agent observability for any AI agent framework. Capture every step, tool call, and LLM invocation — store them in your own MongoDB or on liten.tech.
What does it do?
agent_snoop sits alongside your agent and records:
| What | Details |
|---|---|
| Steps | Every LLM call, tool invocation, and agent action |
| Inputs / outputs | What went in and what came out at every step |
| Tool calls | Name, arguments, return value |
| Token usage | Per-step and aggregated for the full run |
| Timing | Start time, end time, and duration at every level |
| Full trajectory | Ordered list of all steps for the entire invocation |
Everything is stored as a single document per invocation, making it easy to query, visualise, and debug.
Installation
```shell
pip install "agent-snoop[mongo,langgraph]"
```
Quick start — 3 minutes
Step 1 — Pick your storage
You have two options. Choose one (or both):
Option A — Store traces in your own MongoDB (full data ownership)
If you already have a MongoDB instance (local, Atlas, or any hosted provider):
```shell
export MONGODB_URI="mongodb+srv://user:password@cluster.example.mongodb.net/"
```
agent_snoop will automatically create an agentsnoop_db database and a traces collection. Your data never leaves your infrastructure.
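Since traces are plain MongoDB documents, any driver or shell can query them. A sketch of two query documents you might build (field names follow the trace schema shown under "What gets stored"; the agent name and filter values are illustrative):

```python
# Filter: failed runs for one agent. Pass to `db.traces.find(...)`
# with any MongoDB driver (e.g. pymongo).
failed_filter = {"agent_name": "my-agent", "status": "error"}

# Aggregation: average duration and total tokens per agent,
# slowest agents first. Pass to `db.traces.aggregate(...)`.
per_agent_stats = [
    {"$group": {
        "_id": "$agent_name",
        "avg_duration_ms": {"$avg": "$duration_ms"},
        "total_tokens": {"$sum": "$total_token_usage.total_tokens"},
    }},
    {"$sort": {"avg_duration_ms": -1}},
]
```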
Option B — Store traces on liten.tech (zero infra, instant dashboard)
- Sign up at liten.tech/signup
- Go to Dashboard → Settings → API Keys and create a new key
- Export it:
```shell
export AGENTSNOOP_API_KEY="as_your_key_here"
```
If you set both, `AGENTSNOOP_API_KEY` takes priority.
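That precedence can be pictured as a small resolution function (an illustrative sketch of the documented behaviour, not agent_snoop's actual code):

```python
import os

def resolve_storage(env=os.environ):
    """Sketch of the documented precedence: AGENTSNOOP_API_KEY wins
    over MONGODB_URI when both are set."""
    if env.get("AGENTSNOOP_API_KEY"):
        return "liten.tech"
    if env.get("MONGODB_URI"):
        return "mongodb"
    raise RuntimeError("Set MONGODB_URI or AGENTSNOOP_API_KEY before agent_snoop.init()")
```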
Step 2 — View your traces on liten.tech
Go to liten.tech/dashboard and connect your storage:
- If you used Option A (your MongoDB): go to Settings → Connect Database and paste the same URI. liten.tech will read traces directly from your database — your data stays where it is.
- If you used Option B (API key): your traces are already there. Just sign in.
Step 3 — Add two lines to your agent code
```python
import agent_snoop
from agent_snoop.integrations.langgraph import AgentSnoopCallbackHandler
from langchain_core.messages import HumanMessage

# Reads MONGODB_URI or AGENTSNOOP_API_KEY automatically
tracer = agent_snoop.init(agent_name="my-agent", framework="langgraph")

query = "Your question here"
handler = AgentSnoopCallbackHandler(
    handle=tracer.trace(input=query)
)

result = await graph.ainvoke(
    {"messages": [HumanMessage(content=query)]},
    config={"callbacks": [handler]},
)
handler.on_chain_end_final(result)
```
That's it. Open liten.tech/dashboard to see your traces.
Integration styles
Callback-based (recommended for LangGraph)
Captures every node, tool call, and LLM invocation in real time:
```python
import agent_snoop
from agent_snoop.integrations.langgraph import AgentSnoopCallbackHandler
from langchain_core.messages import HumanMessage

tracer = agent_snoop.init(agent_name="my-agent", framework="langgraph")

query = "What caused the 2008 financial crisis?"
handler = AgentSnoopCallbackHandler(handle=tracer.trace(input=query, tags=["prod"]))

result = await graph.ainvoke(
    {"messages": [HumanMessage(content=query)]},
    config={"callbacks": [handler]},
)
handler.on_chain_end_final(result)
```
Post-run (zero agent changes)
Run your graph exactly as before, then hand the output to agent_snoop:
```python
from agent_snoop.integrations.langgraph import parse_langgraph_output

# `tracer`, `graph`, `query`, and HumanMessage as in the examples above
result = await graph.ainvoke({"messages": [HumanMessage(content=query)]})

trace = parse_langgraph_output(result, input=query, agent_name="my-agent")
tracer.log_trace(trace)
```
Manual context manager (full control)
```python
with tracer.trace(input=query, tags=["prod"]) as t:
    result = my_agent.run(query)
    t.set_output(result)
    t.set_metadata(user_id="u123")
```
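If you're curious what the handle does, the lifecycle can be sketched as a plain context manager (a hypothetical stand-in for illustration only — `trace_run` and its `record` dict are not part of agent_snoop):

```python
import time
from contextlib import contextmanager

@contextmanager
def trace_run(input):
    """Sketch of a trace lifecycle: record input, output, metadata,
    duration, and whether the wrapped code raised."""
    record = {"input": input, "status": "success", "metadata": {}}

    class Handle:
        def __init__(self):
            self.record = record
        def set_output(self, output):
            record["output"] = output
        def set_metadata(self, **kwargs):
            record["metadata"].update(kwargs)

    start = time.monotonic()
    try:
        yield Handle()
    except Exception:
        record["status"] = "error"
        raise
    finally:
        record["duration_ms"] = int((time.monotonic() - start) * 1000)
        # A real tracer would persist `record` to MongoDB or liten.tech here.
```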
What gets stored
Each trace is a single MongoDB document:
```json
{
  "_id": "550e8400-e29b-...",
  "agent_name": "my-research-agent",
  "framework": "langgraph",
  "input": "What caused the 2008 financial crisis?",
  "output": "The 2008 financial crisis was caused by...",
  "status": "success",
  "started_at": "2024-01-15T10:23:00Z",
  "ended_at": "2024-01-15T10:23:04Z",
  "duration_ms": 4021,
  "total_token_usage": { "prompt_tokens": 812, "completion_tokens": 234, "total_tokens": 1046 },
  "tags": ["prod"],
  "steps": [
    {
      "step_index": 0,
      "step_type": "llm_call",
      "node_name": "researcher",
      "duration_ms": 1823,
      "token_usage": { "prompt_tokens": 412, "completion_tokens": 134, "total_tokens": 546 },
      "tool_calls": [
        {
          "tool_name": "web_search",
          "tool_input": { "query": "2008 financial crisis causes" },
          "tool_output": "...",
          "duration_ms": 341
        }
      ]
    }
  ]
}
```
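Because the whole run is one document, ad-hoc analysis is a plain dict traversal. For example (pure Python over the schema above; both helpers are illustrative, not part of the library):

```python
def tool_time_ms(trace: dict) -> int:
    """Total milliseconds spent inside tool calls, across all steps."""
    return sum(
        call["duration_ms"]
        for step in trace.get("steps", [])
        for call in step.get("tool_calls", [])
    )

def step_tokens(trace: dict) -> list:
    """(node_name, total_tokens) per step, for spotting expensive nodes."""
    return [
        (step["node_name"], step["token_usage"]["total_tokens"])
        for step in trace.get("steps", [])
    ]
```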
Framework support
| Framework | Status |
|---|---|
| LangGraph | ✅ Supported (callback + post-run) |
| AutoGen | 🔜 Coming soon |
| CrewAI | 🔜 Coming soon |
License
MIT
File details
Details for the file agent_snoop-0.1.0.tar.gz.
- Download URL: agent_snoop-0.1.0.tar.gz
- Size: 23.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `130f6d2df9a3ff05d0325fb0ad558785c5a8a242e44402858dc3f4c2a48ae1f5` |
| MD5 | `6c1a9a96773fb2d9a01e6d05c6c7bb51` |
| BLAKE2b-256 | `ee7a4883199d8f30644aba5871dcb19045d691cc35814795cd8f3bf07e18c724` |
File details
Details for the file agent_snoop-0.1.0-py3-none-any.whl.
- Download URL: agent_snoop-0.1.0-py3-none-any.whl
- Size: 19.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f80a5c36e46809da4a28dacb334bff5507efd5745f8d8d1ee32254a8df3e5fb3` |
| MD5 | `26d05a84a84eb1e774c32a1778bb2a66` |
| BLAKE2b-256 | `466e0f3cd5b53ea24fc217eee7be4db99015abc06ca5d72d3e68e83a0188cf7c` |