# ai-trace

Zero-dependency local AI agent decision tracer.

Records every step an AI agent takes — what it saw, what it decided, and why. JSON + Markdown output. No network calls. No cloud. Entirely local.
Part of the AI Agent Infrastructure Stack:

- `ai-cost-guard` — hard budget caps before the LLM call
- `ai-injection-guard` — prompt injection scanner
- `ai-trace` — local decision tracer ← you are here
## Install

```shell
pip install ai-decision-tracer
```

No dependencies. Pure Python stdlib.
## Quickstart

```python
from ai_trace import Tracer

tracer = Tracer("trading_agent", meta={"model": "claude-haiku-4-5"})

with tracer.step("market_scan", symbol="BTCUSDT") as step:
    signal = analyze(market_data)
    step.log(signal=signal, confidence=0.87)

with tracer.step("decision", signal=signal) as step:
    action = decide(signal)
    step.log(action=action, reason="SuperTrend bullish + volume spike")

# Save full trace
tracer.save()           # → traces/trading_agent_20240301_143022.json
tracer.save_markdown()  # → traces/trading_agent_20240301_143022.md
```
## Why this exists

My trading bot made a bad trade at 3AM. Lost money. I had logs — thousands of lines of `print()` spam — but I couldn't answer the basic question: what did the agent see at the moment it decided to enter that position?

Was it a bad signal? A stale data feed? A prompt injection that slipped past the scanner? Without a structured decision trace, postmortems are guesswork.

ai-trace records every step an AI agent takes — what it saw, what it decided, why, and how long it took. JSONL auto-save (survives crashes), Markdown reports (human-readable), and a CLI for quick inspection. No cloud. No external service. Everything stays local.
## Features

| Feature | Details |
|---|---|
| Zero dependencies | Pure Python 3.8+ stdlib |
| Context manager | `with tracer.step("name", **ctx) as step:` |
| Auto-save | Appends each step to JSONL as it completes |
| Atomic writes | JSON/Markdown via temp file + rename — no partial output |
| CLI viewer | `ai-trace view`, `ai-trace tail`, `ai-trace stats` |
| Error capture | Full traceback captured on exception, step marked as error |
| Metadata | Attach model name, version, run ID to the session |
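The atomic-write guarantee can be pictured with a small stdlib sketch (a hypothetical helper of my own, not the library's actual code): write to a temp file in the target directory, flush to disk, then move it into place with `os.replace()`, which is atomic on both POSIX and Windows.

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON to `path` so readers never observe a partial file."""
    # Temp file must live in the same directory so the rename
    # stays on one filesystem (cross-device renames are not atomic).
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic swap into the final name
    except BaseException:
        os.unlink(tmp_path)  # don't leave temp debris behind on failure
        raise
```

A crash before `os.replace()` leaves the old file untouched; a crash after leaves the new file complete. There is no window where a reader sees half-written output.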
## API

### Tracer

```python
Tracer(
    agent: str,       # agent name — used in filenames
    trace_dir: str,   # where to write files (default: "traces")
    auto_save: bool,  # append to JSONL after each step (default: True)
    meta: dict,       # session-level metadata (model, version, etc.)
)
```
### step(name, **context)

```python
with tracer.step("classify", input_text=text[:50]) as step:
    result = model.classify(text)
    step.log(label=result.label, confidence=result.score)
```

Or manually:

```python
step = tracer.step("scan")
step.start()
step.log(markets_scanned=142)
step.finish()  # or step.fail(reason="timeout")
```
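The context-manager behavior (timing, log capture, error marking) takes only a few lines of plain Python. This is an illustrative `Step` class of my own, not ai-trace's actual implementation; the field names mirror the JSONL example later in this README:

```python
import time
import traceback

class Step:
    """Sketch of a traced step: times itself, collects timestamped
    logs, and records a traceback if the body raises."""

    def __init__(self, name, **context):
        self.name = name
        self.context = context
        self.logs = []
        self.outcome = None
        self.duration_ms = None
        self.error = None

    def log(self, **fields):
        self.logs.append({"_t": time.time(), **fields})

    def __enter__(self):
        self._t0 = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.duration_ms = (time.perf_counter() - self._t0) * 1000
        if exc is None:
            self.outcome = "ok"
        else:
            self.outcome = "error"
            self.error = "".join(traceback.format_exception(exc_type, exc, tb))
        return False  # record the error, but re-raise rather than swallow it
```

Returning `False` from `__exit__` is the key design choice: the step is marked as an error and the traceback is captured, but the exception still propagates so the caller's own error handling runs.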
### Save

```python
tracer.save()           # → JSON (all steps + metadata)
tracer.save_markdown()  # → human-readable Markdown summary
tracer.summary()        # → dict: steps, ok, errors, avg_duration_ms
```
## CLI

```shell
# List all trace sessions
ai-trace list

# View a specific session
ai-trace view trading_agent_20240301_143022.jsonl

# Live tail the latest trace
ai-trace tail -n 20

# Stats across all sessions
ai-trace stats
```

Custom directory:

```shell
ai-trace --dir /var/log/agent/traces list
```
## Output formats

### JSONL (auto-saved, one line per step)

```json
{"name": "market_scan", "context": {"symbol": "BTCUSDT"}, "outcome": "ok", "duration_ms": 142.3, "logs": [{"_t": 1709300422.1, "signal": 0.87}]}
```
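One-record-per-line JSONL is trivial to post-process with the stdlib. A sketch of the kind of aggregation `ai-trace stats` could do (my own helper, assuming the `outcome` and `duration_ms` field names from the record above):

```python
import json

def jsonl_stats(lines):
    """Aggregate step counts and average duration from JSONL trace lines."""
    steps = [json.loads(line) for line in lines if line.strip()]
    ok = sum(1 for s in steps if s.get("outcome") == "ok")
    durations = [s["duration_ms"] for s in steps if "duration_ms" in s]
    avg = sum(durations) / len(durations) if durations else 0.0
    return {
        "steps": len(steps),
        "ok": ok,
        "errors": len(steps) - ok,
        "avg_duration_ms": round(avg, 1),
    }
```

Because each line is independent, a trace truncated by a crash is still parseable up to the last completed step.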
### JSON (full session snapshot)

```json
{
  "agent": "trading_agent",
  "session_id": "20240301_143022",
  "meta": {"model": "claude-haiku-4-5"},
  "steps": [...]
}
```
### Markdown (human-readable)

```markdown
# Trace: trading_agent — 20240301_143022

## Summary

| Steps | OK | Errors | Avg duration |
|---|---|---|---|
| 4 | 3 | 1 | 127.4 ms |

## Steps

### 1. ✅ `market_scan` (142.3 ms)

**Context:**
- `symbol`: 'BTCUSDT'

**Logs:**
- `14:30:22.100Z` — `signal=0.87`
```
## Use with other stack libraries

```python
from ai_cost_guard import CostGuard
from ai_injection_guard import PromptScanner
from ai_trace import Tracer

guard = CostGuard(weekly_budget_usd=5.00)
scanner = PromptScanner(threshold="MEDIUM")
tracer = Tracer("agent", meta={"model": "claude-haiku-4-5"})

@guard.protect(model="anthropic/claude-haiku-4-5-20251001")
@scanner.protect(arg_name="prompt")
def call_llm(prompt):
    with tracer.step("llm_call", prompt_len=len(prompt)) as step:
        response = client.messages.create(...)
        step.log(tokens=response.usage.input_tokens)
        return response
```
## License

MIT