# Anjor

Observability and reliability platform for agentic AI systems.
AI agents fail silently. A tool times out, a schema drifts, the context window fills up — and you find out from a user complaint, not a dashboard.
Anjor fixes that. It intercepts your agent's HTTP traffic at the protocol layer and gives you full visibility into every LLM call and tool use — latency, token usage, context window growth, schema drift, prompt changes — without changing a single line of your agent code. Beyond passive logging, it surfaces actionable intelligence: failure pattern clustering, token optimization suggestions, and per-tool quality grades (A–F).
One-line install. No cloud. No account required.
## Install

```bash
pip install anjor
```
## Quickstart

1. Start the collector and dashboard (one command, one port):

   ```bash
   anjor start
   # Anjor collector  http://localhost:7843/health
   # Anjor dashboard  http://localhost:7843/ui/
   # Database         anjor.db
   ```
2. Add one line to your agent:

   ```python
   import anjor

   anjor.patch()  # that's it — httpx is now instrumented

   import anthropic

   client = anthropic.Anthropic()
   # make tool calls as normal — they're captured automatically
   ```

   Open http://localhost:7843/ui/ in your browser to see the dashboard.
3. Query the API directly:

   ```bash
   curl http://localhost:7843/health
   curl http://localhost:7843/tools
   curl http://localhost:7843/intelligence/failures
   curl http://localhost:7843/intelligence/quality/tools
   ```
No API key? Use respx to replay a mock response — see the quickstart guide.
## What it captures
| Signal | Phase | Details |
|---|---|---|
| Tool calls | 1 | Name, status (success/failure), failure type |
| Schema fingerprints | 1 | SHA-256 structural hash of tool input/output shape |
| Schema drift | 1 | Field-level diff against the baseline for each tool |
| Latency | 1 | Per-call and aggregated (p50/p95/p99) |
| LLM calls | 2 | Model, latency, finish reason — Anthropic, OpenAI, and Gemini |
| Token usage | 2 | Input + output + cache_read tokens per call |
| Context window | 2 | Tokens used vs model limit, utilisation %, per-trace growth rate |
| Context hogs | 2 | Per-tool average output size, % of context consumed |
| System prompt drift | 2 | SHA-256 per agent — alerts when prompt changes between calls |
| Trace context | 1–2 | Trace ID, session ID, agent ID — consistent across LLM + tool events |
| Failure patterns | 3 | Clustered failure analysis with natural-language descriptions and fix suggestions |
| Token optimization | 3 | Tools consuming >5% of context window, estimated token waste and cost savings |
| Quality scores | 3 | Per-tool reliability/schema-stability/latency-consistency grade (A–F) |
| Run quality | 3 | Per-trace context efficiency, failure recovery, tool diversity grade (A–F) |
| Multi-agent spans | 4 | W3C-compatible parent/child span linking across agent boundaries |
| Trace graphs | 4 | DAG reconstruction with topological order and cycle detection |
| Cross-agent attribution | 4 | Token usage and failure rate broken down per agent in a trace |
| Provider breakdown | 5 | LLM dashboard shows Anthropic / OpenAI / Google per model |
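The schema fingerprints in the table hash a payload's structure rather than its data, so two calls with the same shape collide and a type change shows up as drift. A minimal sketch of the idea (Anjor's actual canonicalization is internal; this `shape` function is illustrative):

```python
import hashlib
import json

def shape(value):
    """Reduce a JSON value to its structural shape: keys and types, no data."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        # Sketch: assume homogeneous lists and sample the first element.
        return [shape(value[0])] if value else []
    return type(value).__name__

def fingerprint(payload) -> str:
    """SHA-256 over the canonical shape, like the table's schema fingerprint."""
    canonical = json.dumps(shape(payload), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

a = fingerprint({"city": "Paris", "temp": 21.5})
b = fingerprint({"city": "Oslo", "temp": -3.0})    # same shape, different data
c = fingerprint({"city": "Oslo", "temp": "-3.0"})  # temp drifted to a string
print(a == b, a == c)  # True False
```

Diffing the shapes behind two different fingerprints yields the field-level drift report described above.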
## Configuration

Via environment variables:

```bash
ANJOR_DB_PATH=./my_project.db python my_agent.py
ANJOR_BATCH_SIZE=1 ANJOR_BATCH_INTERVAL_MS=100 python my_agent.py
ANJOR_LOG_LEVEL=DEBUG python my_agent.py
```
Via `.anjor.toml` in your project root:

```toml
db_path = "my_project.db"
batch_size = 10
batch_interval_ms = 200
log_level = "DEBUG"
```
Via code:

```python
import anjor
from anjor.core.config import AnjorConfig

anjor.patch(config=AnjorConfig(db_path="my_project.db", batch_size=1))
```
## Supported providers

| Provider | SDK | Intercepted endpoint |
|---|---|---|
| Anthropic | `anthropic` | `api.anthropic.com/v1/messages` |
| OpenAI | `openai` | `api.openai.com/v1/chat/completions` |
| Google Gemini | `google-generativeai` | `generativelanguage.googleapis.com/.../generateContent` |
All three providers are auto-detected — no config required. Anjor reads the URL and routes to the right parser.
## What is NOT in v0.5

- `requests` library not intercepted (all three SDKs use `httpx` by default)
- No cloud sync, authentication, or team management
- Intelligence suggestions are heuristic — no LLM-powered explanations yet
- Streaming responses are not parsed (only non-streaming calls are captured)
## Releasing a new version

Tag the commit and push — the publish workflow runs CI first, then uploads to PyPI automatically:

```bash
git tag v0.5.0
git push origin v0.5.0
```
## Development

No Node/npm required — the dashboard is bundled static HTML served by the collector.

```bash
git clone https://github.com/anjor-labs/anjor.git
cd anjor
pip install -e ".[dev]"

pytest --cov=anjor --cov-fail-under=95 -q   # ≥95% coverage enforced
ruff check anjor/ tests/                    # zero lint errors
mypy anjor/                                 # strict type checking

anjor start                                 # collector + dashboard on :7843
```
See CONTRIBUTING.md for full guidelines.
## Contributing & Contact

- Bug reports / feature requests — open an issue
- Questions / ideas — start a discussion
## License

MIT © Anjor Labs