The interactive debugger for LLM agents. Pause, inspect, and fork any agent mid-run.


    _                     _        _
   | |                   | |      | |
   | |     ___ _ __  ___  | |_ ___| |
   | |    / _ \ '_ \/ __| | __/ __| |
   | |___|  __/ | | \__ \ | |_\__ \_|
   |______\___|_| |_|___/  \__|___(_)

The interactive debugger for AI agents.



Pause a live agent, edit its messages, fork a new run, compare results. No restarts.

(Demo recording in progress — see docs/demo-storyboard.md)


The Problem

Your LLM agent fails at step 4. You suspect the issue is at step 2. To test a fix, you edit the code, restart the agent, wait through steps 1–3 again, and check. Ten hypotheses = ten restarts; for a 30-second agent, that's five minutes spent just waiting on re-runs.

Every observability tool (LangSmith, Langfuse, Phoenix, AgentOps) shows you what happened after the fact. None of them let you pause the agent mid-run, edit its state, and fork a new execution from that exact point.

agent-lens does.

Install and Use (5 lines)

pip install agentlens-tracer

import agent_lens
from openai import OpenAI

agent_lens.install()          # auto-patch OpenAI + Anthropic
agent_lens.dashboard.start()  # open dashboard at localhost:7878

client = OpenAI()

@agent_lens.trace
def my_agent(query: str) -> str:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}]
    ).choices[0].message.content

my_agent("What are the key features of Python 3.12?")

Dashboard opens automatically. All LLM calls are traced.

What You Get

  • Zero infrastructure — SQLite database at ~/.agent-lens/runs.db. No Docker. No cloud account. No API keys for the tool itself.
  • Real-time dashboard — Span tree, flame graph timeline, message inspector. Updates live via SSE as your agent runs.
  • Any framework — OpenAI, Anthropic, LangChain via callback handler, or any Python function via @trace.
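The zero-infrastructure claim comes down to a decorator writing spans into a local SQLite file. As an illustration of the general pattern (not agent-lens internals; the names here are hypothetical), a tracer of this shape needs nothing beyond the standard library:

```python
import functools
import json
import sqlite3
import time

# Illustrative sketch only -- not agent-lens internals. Shows how a
# decorator-based tracer can persist spans to a local SQLite database
# with zero extra infrastructure.
db = sqlite3.connect(":memory:")  # agent-lens uses ~/.agent-lens/runs.db
db.execute(
    "CREATE TABLE IF NOT EXISTS spans "
    "(name TEXT, started REAL, duration_ms REAL, result TEXT)"
)

def trace(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        started = time.time()
        result = fn(*args, **kwargs)
        duration_ms = (time.time() - started) * 1000
        # Record one span per call: name, timing, serialized result.
        db.execute(
            "INSERT INTO spans VALUES (?, ?, ?, ?)",
            (fn.__name__, started, duration_ms, json.dumps(result)),
        )
        db.commit()
        return result
    return wrapper

@trace
def add(a, b):
    return a + b

add(2, 3)
rows = db.execute("SELECT name, result FROM spans").fetchall()
print(rows)  # [('add', '5')]
```

Because the store is a single SQLite file, a dashboard process can read it concurrently while the agent writes, which is what makes the no-Docker, no-cloud setup possible.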

Pause and Fork — The Killer Feature

[Agent running] → click Pause → agent blocks at next LLM call
                                ↓
                          [Edit messages in dashboard]
                                ↓
                          click Fork → new run diverges from this point
                                ↓
                          click Resume → original continues
                                ↓
         [Two runs, different paths, side by side in dashboard]

No restarts. No re-running preceding steps. Change one variable, see what diverges.

Via the API:

from agent_lens.control import ControlPlane

cp = ControlPlane.get_instance()
cp.pause(run_id)

new_run_id = cp.fork(
    run_id=run_id,
    span_id=span_id,
    edited_messages=[
        {"role": "user", "content": "Different question"}
    ]
)

cp.resume(run_id)
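Conceptually, "agent blocks at next LLM call" can be implemented as a gate the tracer checks immediately before each call. A minimal sketch of that pattern (not agent-lens internals; the PauseGate class is hypothetical):

```python
import threading
import time

# Illustrative sketch only -- not agent-lens internals. A paused run is
# modeled as a gate the tracer waits on right before each LLM call:
# clearing the gate blocks the agent at its next call; setting it resumes.
class PauseGate:
    def __init__(self):
        self._resume = threading.Event()
        self._resume.set()  # running by default

    def pause(self):
        self._resume.clear()

    def resume(self):
        self._resume.set()

    def checkpoint(self):
        # Called by the tracer right before each LLM call.
        self._resume.wait()

gate = PauseGate()
calls = []

def agent():
    for step in range(3):
        gate.checkpoint()   # blocks here while paused
        calls.append(step)  # stand-in for an LLM call

t = threading.Thread(target=agent)
gate.pause()
t.start()
time.sleep(0.1)
assert calls == []          # agent is blocked before its first call
gate.resume()
t.join()
assert calls == [0, 1, 2]
```

While the agent is blocked at the gate, its messages can be copied, edited, and used to seed a forked run, which is exactly the window the dashboard's Pause/Fork buttons exploit.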

Comparison

Feature                  agent-lens   Langfuse              LangSmith
Local-first (no cloud)   Yes          Partial (self-host)   No
Pause live agent         Yes          No                    No
Fork from any point      Yes          No                    No
Real-time dashboard      Yes          No                    No
Multi-framework          Yes          Yes                   Partial
Data stays on machine    Yes          No                    No
Zero-infra setup         Yes          No                    No
Secret redaction         Yes          Partial               Partial

Compatibility

  • Python 3.10, 3.11, 3.12
  • OpenAI SDK ≥ 1.0
  • Anthropic SDK ≥ 0.20
  • LangChain ≥ 0.1 (optional)
  • macOS, Linux, Windows

FAQ

Does it work without OpenAI or Anthropic? Yes. Use @agent_lens.trace on any Python function. The SDK integrations are optional.

Does my data leave my machine? No. All data is stored in ~/.agent-lens/runs.db. No telemetry, no callbacks, no network egress.

Is it production-safe? It's designed for development and debugging. The overhead is < 5ms per traced call. The dashboard server binds to 127.0.0.1 only — it's not exposed to the network.
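The < 5ms figure is easy to sanity-check on your own machine by timing a traced no-op in a loop. A rough micro-benchmark template (the `trace` decorator below is a stand-in, not agent-lens itself):

```python
import time

# Rough template for measuring per-call tracing overhead yourself.
# `trace` is a pass-through stand-in; swap it for @agent_lens.trace
# to measure the real thing.
def trace(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        _span_ms = (time.perf_counter() - start) * 1000  # would be recorded
        return result
    return wrapper

@trace
def noop():
    pass

n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    noop()
per_call_ms = (time.perf_counter() - t0) * 1000 / n
print(f"{per_call_ms:.4f} ms per traced call")
```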

What happens when I restart the dashboard? Traces persist in SQLite. Reload the dashboard and your previous runs are still there.

Can I share a trace with a colleague? Yes: agent-lens export <run_id> --output trace.html generates a self-contained HTML file you can email or share via any file-sharing tool.
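The idea behind a self-contained export is simple: embed the run data directly inside a single HTML file, so the recipient needs nothing but a browser. A minimal sketch (not the actual export implementation; `export_html` and the run shape shown are hypothetical):

```python
import html
import json

# Illustrative sketch only -- not the agent-lens export command.
# Serializes a run to JSON and embeds it in one standalone HTML file.
def export_html(run: dict, path: str) -> None:
    payload = html.escape(json.dumps(run, indent=2))
    doc = (
        "<!doctype html><html><head><meta charset='utf-8'>"
        f"<title>Trace {html.escape(run['run_id'])}</title></head>"
        f"<body><h1>Trace {html.escape(run['run_id'])}</h1>"
        f"<pre>{payload}</pre></body></html>"
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(doc)

run = {"run_id": "demo-123", "spans": [{"name": "my_agent", "duration_ms": 812}]}
export_html(run, "trace.html")
```

Because everything is inlined, the file can be emailed or dropped into any file-sharing tool with no server on the receiving end.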

Contributing

See CONTRIBUTING.md. All contributions welcome.

License

MIT — see LICENSE.
