# Drishti (दृष्टि)

*See what your agent thinks.*

Drishti automatically captures, visualizes, and exports traces of AI agent execution. Add one decorator and see every LLM call with tokens, cost, and latency. Zero code changes to your agent logic.
```python
from drishti import trace

@trace(name="my-agent")
def run_agent(query):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
```
```text
Drishti Trace – my-agent
├── ✓ [1] openai/gpt-4o-mini   312 tokens   $0.0001   124ms
└── ✓ [2] openai/gpt-4o        891 tokens   $0.0089   387ms
╭──────────── Summary ────────────╮
│ Total Tokens   1203             │
│ Total Cost     $0.0090 USD      │
│ Wall Time      511ms            │
│ LLM Calls      2                │
│ Status         SUCCESS          │
╰─────────────────────────────────╯
```
## Features

- Zero-config auto-detection – OpenAI, Anthropic, Groq, Ollama intercepted automatically
- Rich terminal tree – every LLM call with tokens, cost, and latency at a glance
- JSON export – full traces saved to `.drishti/traces/` for sharing, diffing, and replaying
- CLI tool – `drishti version`, `drishti list`, `drishti view`, `drishti diff`, `drishti stats`, `drishti export`, `drishti replay`, `drishti clear`
- Cost tracking – real-time pricing for 15+ models across 4 providers
- Budget guard – warn when cost exceeds a threshold
- Async support – works with `async def` functions out of the box
- Thread-safe – correct isolation for concurrent agents via thread-local + ContextVar
- Zero overhead – pure passthrough when no `@trace` context is active
## Quickstart

### Install

```bash
pip install drishti-ai[openai]     # OpenAI support
# or
pip install drishti-ai[anthropic]  # Anthropic support
# or
pip install drishti-ai[all]        # All providers
```
### Trace Your Agent

```python
from drishti import trace
import openai

client = openai.OpenAI()

@trace(name="research-agent")
def research_agent(query: str) -> str:
    # Step 1: Generate search queries
    plan = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Generate 3 search queries for the topic."},
            {"role": "user", "content": query},
        ],
    )
    queries = plan.choices[0].message.content

    # Step 2: Synthesize answer with a stronger model
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Synthesize a comprehensive answer."},
            {"role": "user", "content": f"Topic: {query}\nQueries: {queries}"},
        ],
    )
    return answer.choices[0].message.content

result = research_agent("What is quantum computing?")
```
That's it. Drishti automatically:

- Intercepts both LLM calls
- Captures tokens, cost, latency, and full I/O
- Renders a rich terminal tree
- Exports the trace to `.drishti/traces/` as JSON
## Installation

```bash
pip install drishti-ai             # Core only (no provider SDKs)
pip install drishti-ai[openai]     # + OpenAI SDK
pip install drishti-ai[anthropic]  # + Anthropic SDK
pip install drishti-ai[groq]       # + Groq SDK
pip install drishti-ai[ollama]     # + Ollama SDK
pip install drishti-ai[all]        # All providers
```

Requirements: Python 3.10+
## Supported Providers

| Provider | SDK Method Patched | Models with Built-in Pricing |
|---|---|---|
| OpenAI | `chat.completions.create` (sync + async) | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o3-mini |
| Anthropic | `messages.create` (sync + async) | claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, claude-sonnet-4 |
| Groq | `chat.completions.create` (sync + async) | llama-3.3-70b, llama-3.1-8b, mixtral-8x7b |
| Ollama | `chat()` (sync + async) | All local models – always $0.00 |

Provider not installed? Drishti keeps running and prints a one-time actionable warning with the install extra.
Unknown model? Cost defaults to $0.00, and the trace still works.
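Under the hood, patching an SDK method amounts to replacing it with a wrapper that measures the call and records a span. The sketch below is illustrative only: `FakeCompletions`, `PRICING`, and `instrument` are made-up stand-ins for the real SDK object and Drishti's internals, and the price is an example figure, not Drishti's actual pricing table.

```python
import time
import functools

class FakeCompletions:
    """Stand-in for a provider SDK endpoint such as chat.completions."""
    def create(self, model: str, messages: list) -> dict:
        return {"model": model, "usage": {"total_tokens": 312}}

# Illustrative $/1K-token rate; unknown models fall back to $0.00.
PRICING = {"gpt-4o-mini": 0.00015}

def instrument(completions, spans: list) -> None:
    original = completions.create

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        response = original(*args, **kwargs)   # call behavior unchanged
        latency_ms = (time.perf_counter() - start) * 1000
        tokens = response["usage"]["total_tokens"]
        rate = PRICING.get(kwargs.get("model", ""), 0.0)
        spans.append({
            "model": kwargs.get("model"),
            "tokens": tokens,
            "cost_usd": tokens / 1000 * rate,
            "latency_ms": latency_ms,
        })
        return response

    completions.create = wrapper  # patch in place

spans = []
client = FakeCompletions()
instrument(client, spans)
client.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hi"}])
print(spans[0]["tokens"], round(spans[0]["cost_usd"], 7))
# prints: 312 4.68e-05
```

Because the wrapper forwards arguments and the return value untouched, the caller cannot observe the instrumentation, which is how a tracer can promise zero changes to agent behavior.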
## CLI

```bash
# Print installed version
drishti version

# List all saved traces
drishti list

# Replay a trace in the terminal
drishti view <file>         # by file path
drishti view <id-prefix>    # by trace ID prefix
drishti view <file> --full  # show full prompt/completion payloads

# Compare two traces
drishti diff <trace-a> <trace-b>

# Aggregate stats
drishti stats

# Export a trace as CSV
drishti export <trace> --format csv

# Replay the same LLM requests and compare deltas
drishti replay <trace>

# Delete all saved traces
drishti clear
```
Example output of `drishti list`:

```text
Saved Traces
20260416_153042_research_agent.json   research-agent   success   1203 tokens   $0.0090
20260416_152801_claude_agent.json     claude-agent     error        0 tokens   $0.0000
```
## Async Support

Drishti auto-detects async functions and Just Works™:

```python
from drishti import trace
import openai

client = openai.AsyncOpenAI()

@trace(name="async-agent")
async def async_agent(query: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
```
## Configuration

### Per-call configuration

```python
@trace(
    name="my-agent",     # Custom trace name (default: function name)
    budget_usd=0.05,     # Per-trace budget in USD
    on_exceed="warn",    # "warn" (default) or "abort"
    display=True,        # Print tree to terminal (default: True)
    export=True,         # Save JSON to disk (default: True)
)
def my_agent():
    ...
```
### Config file

Create `.drishti/config.toml` in your project root:

```toml
[drishti]
display = true                          # Print trace tree to terminal
export = true                           # Save traces to disk
default_export_dir = ".drishti/traces"  # Preferred export dir (traces_dir still works)
budget_usd = 0.10                       # Budget threshold
on_exceed = "warn"                      # "warn" or "abort"
quiet = false                           # Suppress terminal tree output
auto_open_on_error = false              # Auto-open trace output on errors
max_preview_chars = 220                 # Prompt/completion preview truncation length
estimate_stream_tokens = true           # Use optional tiktoken estimation for streams
```
### Decorator usage patterns

```python
# Bare decorator – name defaults to function name
@trace
def my_agent():
    ...

# With custom name
@trace(name="research-agent")
def my_agent():
    ...

# Budget guard
@trace(budget_usd=0.05)
def expensive_agent():
    ...

# Hard abort once budget is exceeded mid-run
@trace(budget_usd=0.05, on_exceed="abort")
def budget_guarded_agent():
    ...
```
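The budget guard can be pictured as a running-cost check after each LLM call. This stdlib-only sketch shows the warn-versus-abort semantics; everything in it, including the `BudgetExceeded` exception name and the `check_budget` helper, is an assumption for illustration, not Drishti's API.

```python
import warnings

class BudgetExceeded(RuntimeError):
    """Hypothetical exception name; Drishti's actual class may differ."""

def check_budget(spent_usd: float, budget_usd: float, on_exceed: str = "warn") -> None:
    """Called with the trace's running total after each LLM call (sketch)."""
    if spent_usd <= budget_usd:
        return                      # under budget: nothing happens
    msg = f"budget ${budget_usd:.2f} exceeded: spent ${spent_usd:.4f}"
    if on_exceed == "abort":
        raise BudgetExceeded(msg)   # hard stop mid-run
    warnings.warn(msg)              # default: warn and keep going

check_budget(0.03, 0.05)            # under budget: no-op
check_budget(0.07, 0.05)            # over budget: emits a warning
try:
    check_budget(0.07, 0.05, on_exceed="abort")
except BudgetExceeded:
    print("aborted")
# prints: aborted
```

The key design point is that `"warn"` never interrupts the agent, while `"abort"` raises as soon as the running total crosses the threshold.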
## Error Handling
Drishti follows one golden rule: never change the behavior of your agent code.
| Scenario | Drishti Behavior |
|---|---|
| Provider SDK not installed | Skipped silently, no crash |
| LLM call raises exception | Span recorded with status=ERROR, exception re-raised |
| Token usage missing | Defaults to 0, no crash |
| Unknown model | Cost defaults to $0.00 |
| JSON export fails | Warning printed, agent continues |
| Display fails | Warning printed, agent continues |
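The "span recorded, exception re-raised" row can be sketched as a try/except/finally around the wrapped call. This is an illustrative stand-in for how any tracer can record failures without swallowing them, not Drishti's actual code; `traced_call` is a made-up helper name.

```python
import time

def traced_call(fn, spans: list, *args, **kwargs):
    """Record a span whether the call succeeds or raises, then re-raise."""
    span = {"status": "success", "latency_ms": 0.0}
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    except Exception:
        span["status"] = "error"    # span is still recorded on failure
        raise                       # agent sees the original exception
    finally:
        span["latency_ms"] = (time.perf_counter() - start) * 1000
        spans.append(span)

spans = []
try:
    traced_call(lambda: 1 / 0, spans)
except ZeroDivisionError:
    pass                            # the exception reached the caller intact
print(spans[0]["status"])
# prints: error
```

The bare `raise` preserves the original traceback, and the `finally` block guarantees the span lands in the trace either way, which is exactly the golden rule in the table above.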
## JSON Export Format

Traces are saved to `.drishti/traces/` as JSON:

```json
{
  "trace_id": "a1b2c3d4-...",
  "name": "research-agent",
  "started_at": "2026-04-16T15:30:42.123456+00:00",
  "ended_at": "2026-04-16T15:30:42.634567+00:00",
  "status": "success",
  "summary": {
    "total_tokens": 1203,
    "total_cost_usd": 0.009,
    "total_latency_ms": 511.0,
    "span_count": 2
  },
  "spans": [
    {
      "span_id": "...",
      "step": 1,
      "name": "openai/gpt-4o-mini",
      "provider": "openai",
      "model": "gpt-4o-mini",
      "tokens": { "prompt": 45, "completion": 267, "total": 312 },
      "cost_usd": 0.0001,
      "latency_ms": 124.0,
      "status": "success"
    }
  ]
}
```
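Because traces are plain JSON, they are easy to post-process with the standard library. The sketch below recomputes the summary totals from the spans of a trace shaped like the document above (abbreviated to two spans, with per-span figures taken from the example trace earlier in this README).

```python
import json

# Abbreviated trace document in the export format shown above.
raw = """
{
  "name": "research-agent",
  "status": "success",
  "summary": {"total_tokens": 1203, "total_cost_usd": 0.009},
  "spans": [
    {"step": 1, "model": "gpt-4o-mini", "tokens": {"total": 312}, "cost_usd": 0.0001},
    {"step": 2, "model": "gpt-4o", "tokens": {"total": 891}, "cost_usd": 0.0089}
  ]
}
"""

trace = json.loads(raw)

# Recompute the summary from the individual spans.
total_tokens = sum(s["tokens"]["total"] for s in trace["spans"])
total_cost = sum(s["cost_usd"] for s in trace["spans"])

assert total_tokens == trace["summary"]["total_tokens"]  # spans agree with summary
print(total_tokens, round(total_cost, 4))
# prints: 1203 0.009
```

The same pattern extends to diffing two trace files or aggregating cost across a directory of traces, which is roughly what `drishti diff` and `drishti stats` expose as commands.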
## Development

```bash
# Clone and setup
git clone https://github.com/aarambh-darshan/drishti.git
cd drishti
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,all]"

# Run tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=drishti --cov-report=term-missing

# Lint
ruff check drishti/ tests/
ruff format --check drishti/ tests/
```
## Roadmap

| Version | Focus | Status |
|---|---|---|
| v0.1.0 | Core Foundation | Released |
| v0.2.2 | Developer Experience + Replay + Concurrency + New Providers | Released |
| v0.3.0 | Web Dashboard (`drishti serve`) | Planned |
| v0.4.0 | Smart Features (prompt analysis, cost optimization) | Future |
| v0.5.0 | Framework Integrations (LangChain, LlamaIndex) | Future |
| v1.0.0 | Production-Stable Release | Future |

See ROADMAP.md for the full feature plan.
## Architecture

See ARCHITECTURE.md for the complete system design, including:

- System architecture diagram
- Data flow walkthrough
- Provider interception strategy
- Thread safety / async support design
- Error handling philosophy
## Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

## License

MIT – see LICENSE.

*Drishti (दृष्टि) – See what your agent thinks.*