
๐Ÿ” Drishti (เคฆเฅƒเคทเฅเคŸเคฟ)

See what your agent thinks.



Drishti automatically captures, visualizes, and exports traces of AI agent execution. Add one decorator — see every LLM call with tokens, cost, and latency. Zero code changes to your agent logic.

from drishti import trace
import openai

client = openai.OpenAI()

@trace(name="my-agent")
def run_agent(query):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
๐Ÿ” Drishti Trace โ€” my-agent
โ”œโ”€โ”€ โœ… [1] openai/gpt-4o-mini   312 tokens  $0.0001  124ms
โ””โ”€โ”€ โœ… [2] openai/gpt-4o        891 tokens  $0.0089  387ms

โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Summary โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚ Total Tokens  1203              โ”‚
โ”‚ Total Cost    $0.0090 USD       โ”‚
โ”‚ Wall Time     511ms             โ”‚
โ”‚ LLM Calls     2                 โ”‚
โ”‚ Status        SUCCESS           โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ

✨ Features

  • 🔌 Zero-config auto-detection — OpenAI, Anthropic, Groq, Ollama intercepted automatically
  • 🌳 Rich terminal tree — every LLM call with tokens, cost, and latency at a glance
  • 💾 JSON export — full traces saved to .drishti/traces/ for sharing, diffing, and replaying
  • 🖥️ CLI tool — drishti version, drishti list, drishti view, drishti diff, drishti stats, drishti export, drishti replay, drishti clear
  • 💰 Cost tracking — real-time pricing for 15+ models across 4 providers
  • 🛡️ Budget guard — warn when cost exceeds a threshold
  • ⚡ Async support — works with async def functions out of the box
  • 🔒 Thread-safe — correct isolation for concurrent agents via thread-local + ContextVar
  • 🪶 Zero overhead — pure passthrough when no @trace context is active
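The concurrency guarantee rests on Python's standard contextvars module. A minimal sketch of that isolation pattern (an illustration of the mechanism, not Drishti's actual internals) looks like:

```python
import asyncio
import contextvars

# Hypothetical "active trace" slot; each asyncio Task gets its own copy.
_active_trace = contextvars.ContextVar("active_trace", default=None)

async def traced_task(name: str) -> str:
    _active_trace.set(name)   # visible only within this task's context
    await asyncio.sleep(0)    # yield so the tasks interleave
    return _active_trace.get()  # still this task's own value

async def main() -> list[str]:
    # Three concurrent "agents", each seeing only its own trace name
    return await asyncio.gather(*(traced_task(f"agent-{i}") for i in range(3)))

results = asyncio.run(main())
```

Because asyncio copies the context into every Task, the `set()` in one task never leaks into another, which is why concurrent traced agents stay isolated.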

🚀 Quickstart

Install

pip install drishti-ai[openai]        # OpenAI support
# or
pip install drishti-ai[anthropic]     # Anthropic support
# or
pip install drishti-ai[all]           # All providers

Trace Your Agent

from drishti import trace
import openai

client = openai.OpenAI()

@trace(name="research-agent")
def research_agent(query: str) -> str:
    # Step 1: Generate search queries
    plan = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Generate 3 search queries for the topic."},
            {"role": "user", "content": query},
        ],
    )
    queries = plan.choices[0].message.content

    # Step 2: Synthesize answer with a stronger model
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Synthesize a comprehensive answer."},
            {"role": "user", "content": f"Topic: {query}\nQueries: {queries}"},
        ],
    )
    return answer.choices[0].message.content

result = research_agent("What is quantum computing?")

That's it. Drishti automatically:

  1. Intercepts both LLM calls
  2. Captures tokens, cost, latency, and full I/O
  3. Renders a rich terminal tree
  4. Exports the trace to .drishti/traces/ as JSON

📦 Installation

pip install drishti-ai              # Core only (no provider SDKs)
pip install drishti-ai[openai]      # + OpenAI SDK
pip install drishti-ai[anthropic]   # + Anthropic SDK
pip install drishti-ai[groq]        # + Groq SDK
pip install drishti-ai[ollama]      # + Ollama SDK
pip install drishti-ai[all]         # All providers

Requirements: Python 3.10+


🎯 Supported Providers

Provider   SDK Method Patched                      Priced Models
OpenAI     chat.completions.create (sync + async)  gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o3-mini
Anthropic  messages.create (sync + async)          claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, claude-sonnet-4
Groq       chat.completions.create (sync + async)  llama-3.3-70b, llama-3.1-8b, mixtral-8x7b
Ollama     chat() (sync + async)                   All local models — always $0.00

Provider not installed? Drishti keeps running and prints a one-time actionable warning with the install extra.

Unknown model? Cost defaults to $0.00. Trace still works perfectly.
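The unknown-model fallback can be pictured as a plain dictionary lookup with a zero default. The price table below is illustrative only, not Drishti's real pricing data:

```python
# Hypothetical per-1M-token prices in USD (illustrative, not Drishti's table).
PRICES = {
    "gpt-4o-mini": {"prompt": 0.15, "completion": 0.60},
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated USD cost; unknown models fall back to $0.00."""
    price = PRICES.get(model)
    if price is None:
        return 0.0  # unknown model: the trace still records, cost is just zero
    return (prompt_tokens * price["prompt"]
            + completion_tokens * price["completion"]) / 1_000_000

known = estimate_cost("gpt-4o-mini", 45, 267)
unknown = estimate_cost("some-local-model", 45, 267)
```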


๐Ÿ–ฅ๏ธ CLI

# Print installed version
drishti version

# List all saved traces
drishti list

# Replay a trace in the terminal
drishti view <file>          # by file path
drishti view <id-prefix>     # by trace ID prefix
drishti view <file> --full   # show full prompt/completion payloads

# Compare two traces
drishti diff <trace-a> <trace-b>

# Aggregate stats
drishti stats

# Export a trace as CSV
drishti export <trace> --format csv

# Replay the same LLM requests and compare deltas
drishti replay <trace>

# Delete all saved traces
drishti clear

Example output of drishti list:

📋 Saved Traces

  20260416_153042_research_agent.json  research-agent  success  1203 tokens  $0.0090
  20260416_152801_claude_agent.json    claude-agent    error    0 tokens     $0.0000

⚡ Async Support

Drishti auto-detects async functions and Just Works™:

from drishti import trace
import openai

client = openai.AsyncOpenAI()

@trace(name="async-agent")
async def async_agent(query: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content

โš™๏ธ Configuration

Per-call configuration

@trace(
    name="my-agent",      # Custom trace name (default: function name)
    budget_usd=0.05,      # Per-trace budget in USD
    on_exceed="warn",     # "warn" (default) or "abort"
    display=True,         # Print tree to terminal (default: True)
    export=True,          # Save JSON to disk (default: True)
)
def my_agent():
    ...

Config file

Create .drishti/config.toml in your project root:

[drishti]
display = true              # Print trace tree to terminal
export = true               # Save traces to disk
default_export_dir = ".drishti/traces"  # Preferred export dir (traces_dir still works)
budget_usd = 0.10           # Budget threshold
on_exceed = "warn"          # "warn" or "abort"
quiet = false               # Suppress terminal tree output
auto_open_on_error = false  # Auto-open trace output on errors
max_preview_chars = 220     # Prompt/completion preview truncation length
estimate_stream_tokens = true  # Use optional tiktoken estimation for streams

Decorator usage patterns

# Bare decorator — name defaults to function name
@trace
def my_agent():
    ...

# With custom name
@trace(name="research-agent")
def my_agent():
    ...

# Budget guard
@trace(budget_usd=0.05)
def expensive_agent():
    ...

# Hard abort once budget is exceeded mid-run
@trace(budget_usd=0.05, on_exceed="abort")
def budget_guarded_agent():
    ...

๐Ÿ›ก๏ธ Error Handling

Drishti follows one golden rule: never change the behavior of your agent code.

Scenario                     Drishti Behavior
Provider SDK not installed   Skipped silently, no crash
LLM call raises exception    Span recorded with status=ERROR, exception re-raised
Token usage missing          Defaults to 0, no crash
Unknown model                Cost defaults to $0.00
JSON export fails            Warning printed, agent continues
Display fails                Warning printed, agent continues
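The "record the span, then re-raise unchanged" behavior can be sketched generically. This is the pattern, not Drishti's source code, and `spans` stands in for a trace's span list:

```python
import functools

spans: list[dict] = []  # stand-in for a trace's recorded spans

def record_errors(fn):
    """Record a span per call; on failure, mark it ERROR and re-raise unchanged."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "status": "success"}
        try:
            return fn(*args, **kwargs)
        except Exception:
            span["status"] = "ERROR"
            raise  # agent behavior unchanged: the caller still sees the exception
        finally:
            spans.append(span)  # the span is recorded either way
    return wrapper

@record_errors
def flaky_call():
    raise RuntimeError("provider timeout")

try:
    flaky_call()
except RuntimeError:
    pass  # the original exception surfaces exactly as it would untraced
```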

📄 JSON Export Format

Traces are saved to .drishti/traces/ as JSON:

{
  "trace_id": "a1b2c3d4-...",
  "name": "research-agent",
  "started_at": "2026-04-16T15:30:42.123456+00:00",
  "ended_at": "2026-04-16T15:30:42.634567+00:00",
  "status": "success",
  "summary": {
    "total_tokens": 1203,
    "total_cost_usd": 0.009,
    "total_latency_ms": 511.0,
    "span_count": 2
  },
  "spans": [
    {
      "span_id": "...",
      "step": 1,
      "name": "openai/gpt-4o-mini",
      "provider": "openai",
      "model": "gpt-4o-mini",
      "tokens": { "prompt": 45, "completion": 267, "total": 312 },
      "cost_usd": 0.0001,
      "latency_ms": 124.0,
      "status": "success"
    }
  ]
}
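Because exports are plain JSON, saved traces are easy to post-process with nothing but the standard library. A small sketch that cross-checks a trace's summary against its spans, using an embedded sample in the schema shown above (abridged, illustrative values):

```python
import json

# An abridged sample trace following the documented schema.
raw = """
{
  "name": "research-agent",
  "status": "success",
  "summary": {"total_tokens": 1203, "total_cost_usd": 0.009, "span_count": 2},
  "spans": [
    {"step": 1, "model": "gpt-4o-mini", "tokens": {"total": 312}, "cost_usd": 0.0001},
    {"step": 2, "model": "gpt-4o", "tokens": {"total": 891}, "cost_usd": 0.0089}
  ]
}
"""

trace_data = json.loads(raw)
# Recompute the totals from the individual spans
token_sum = sum(span["tokens"]["total"] for span in trace_data["spans"])
cost_sum = round(sum(span["cost_usd"] for span in trace_data["spans"]), 4)
```

The same approach works on real files under .drishti/traces/, e.g. for custom reporting or CI cost budgets.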

🧪 Development

# Clone and setup
git clone https://github.com/aarambh-darshan/drishti.git
cd drishti
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,all]"

# Run tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=drishti --cov-report=term-missing

# Lint
ruff check drishti/ tests/
ruff format --check drishti/ tests/

๐Ÿ—บ๏ธ Roadmap

Version Focus Status
v0.1.0 Core Foundation โœ… Released
v0.2.2 Developer Experience + Replay + Concurrency + New Providers โœ… Released
v0.3.0 Web Dashboard (drishti serve) ๐Ÿ“‹ Planned
v0.4.0 Smart Features (prompt analysis, cost optimization) ๐Ÿ”ฎ Future
v0.5.0 Framework Integrations (LangChain, LlamaIndex) ๐Ÿ”ฎ Future
v1.0.0 Production-Stable Release ๐Ÿ”ฎ Future

See ROADMAP.md for the full feature plan.


๐Ÿ“ Architecture

See ARCHITECTURE.md for the complete system design, including:

  • System architecture diagram
  • Data flow walkthrough
  • Provider interception strategy
  • Thread safety / async support design
  • Error handling philosophy

๐Ÿค Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.


📄 License

MIT — see LICENSE.


Drishti (เคฆเฅƒเคทเฅเคŸเคฟ) โ€” See what your agent thinks.
