# nirixa

AI Observability & Cost Intelligence — track token costs, latency, and hallucination risk for every LLM call, with zero friction.
```bash
pip install nirixa
```
## Quick Start

```python
from nirixa import NirixaClient
import openai

nirixa = NirixaClient(api_key="nirixa-your-key")

# Wrap any LLM call — response is completely unchanged
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
)

print(response.choices[0].message.content)
```
## Three Ways to Integrate

### 1. `wrap()` — Transparent client proxy (recommended)

Wrap a provider client once and use it exactly like the original. Model, provider, and prompt are auto-extracted from every call — no duplication.
```python
from nirixa import NirixaClient
from openai import OpenAI

nirixa = NirixaClient(api_key="nirixa-your-key")
openai = OpenAI()

ai = nirixa.wrap(openai, feature="/api/chat", user=user_id)

# Use ai exactly like openai — tracking is automatic
response = ai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
Works with any provider:

```python
import anthropic

claude = nirixa.wrap(anthropic.Anthropic(), feature="/api/analyze")

response = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this..."}]
)
```
### 2. `track()` — Explicit per-call wrapping
```python
prompt = "Summarize this document..."

response = nirixa.track(
    feature="/api/summarize",
    user="user-123",
    prompt=prompt,  # optional: improves hallucination scoring
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
)
```
`model` and `provider` are auto-detected from the response — no need to pass them.
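Auto-detection of this kind typically duck-types on the response shape. The function below is an assumed sketch based on the "Auto-detected via" column of the Supported Providers table, not nirixa's actual detection code:

```python
# Hypothetical sketch: infer the provider from which fields the raw
# response carries (the markers listed in the Supported Providers table).

def detect_provider(response: dict) -> str:
    if "choices" in response and "usage" in response:
        return "openai"        # also matches OpenAI-compatible providers
    if "content" in response and "usage" in response:
        return "anthropic"
    if "usage_metadata" in response:
        return "gemini"
    if "prompt_eval_count" in response:
        return "ollama"
    if "ResponseMetadata" in response:
        return "bedrock"
    return "unknown"
```

Shape-based detection is what lets `track()` stay provider-agnostic: the lambda can call any SDK and the response itself identifies where it came from.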
### 3. Auto-patch — Zero code changes

Patch provider SDKs globally at app startup. Every call is tracked without touching existing code.
```python
from nirixa import NirixaClient
from nirixa.middleware import patch_openai, patch_all

nirixa = NirixaClient(api_key="nirixa-your-key")

# Patch a specific provider
patch_openai(nirixa, feature="/api/chat")

# Or patch every installed provider at once
patch_all(nirixa)
# [nirixa] Patched 4 providers: OpenAI, Anthropic, Groq, Gemini
```
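Global patching of this kind generally works by replacing a method on the provider's class, so every instance — existing or future — routes through a tracking wrapper. A minimal sketch of the technique (the `Completions` class and `patch` helper are hypothetical stand-ins, not nirixa's internals):

```python
# Illustrative monkey-patching sketch: swap a class method for a wrapper
# that records each call, without touching any calling code.

calls = []

class Completions:                      # stand-in for a provider SDK class
    def create(self, **kwargs):
        return {"echo": kwargs}

def patch(cls, recorder):
    original = cls.create               # keep a reference to the real method
    def tracked(self, **kwargs):
        result = original(self, **kwargs)
        recorder.append(kwargs)         # record after the call succeeds
        return result
    cls.create = tracked                # swap in the wrapper globally

patch(Completions, calls)
client = Completions()                  # created *after* patching — still tracked
client.create(model="gpt-4o-mini")
```

Because the patch lives on the class rather than an instance, it only needs to run once at startup — the same property that lets `patch_all()` cover every installed provider in one call.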
## Module-level API

Skip `NirixaClient()` and use the module-level singleton:
```python
import nirixa

nirixa.init(api_key="nirixa-your-key")

# track
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(...)
)

# wrap
ai = nirixa.wrap(openai_client, feature="/api/chat")

# flush before script exit
nirixa.get_client().flush()
```
## Supported Providers

| Provider | Auto-detected via | Patch function |
|---|---|---|
| OpenAI | `choices` + `usage` | `patch_openai` |
| Anthropic | `content` + `usage` | `patch_anthropic` |
| Groq | OpenAI-compatible shape | `patch_groq` |
| Google Gemini | `usage_metadata` | `patch_gemini` |
| Mistral | OpenAI-compatible shape | `patch_mistral` |
| Together AI | OpenAI-compatible shape | `patch_together` |
| Ollama | `prompt_eval_count` | `patch_ollama` |
| AWS Bedrock | `ResponseMetadata` | — |
## Configuration

```python
nirixa = NirixaClient(
    api_key="nirixa-your-key",     # Required
    host="https://api.nirixa.in",  # Default
    score_hallucinations=True,     # Hallucination risk scoring (LOW/MEDIUM/HIGH)
    async_ingest=True,             # Non-blocking — zero added latency
    debug=False,                   # Log each tracked call to console
)
```
## What Gets Tracked

| Metric | Description |
|---|---|
| Token cost | Per-call USD cost by feature and model |
| Latency | p50 / p95 / p99 response times |
| Hallucination risk | LOW / MEDIUM / HIGH heuristic scoring |
| Prompt drift | Output variance over time |
| Error rate | Failed calls by feature |
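For intuition on the latency metric: p50/p95/p99 summaries can be computed from raw per-call durations with the standard library alone. This is a generic sketch of the calculation, not how nirixa necessarily implements it:

```python
# Compute p50/p95/p99 from a list of per-call latencies (milliseconds).
import statistics

def latency_percentiles(durations_ms):
    # quantiles(n=100) returns the 1st..99th percentile cut points
    q = statistics.quantiles(durations_ms, n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

p = latency_percentiles(list(range(1, 101)))  # 1..100 ms, uniform
```

p99 is worth watching separately from p50: a healthy median can hide a long tail of slow LLM calls that dominates user-perceived latency.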
## `flush()` — Before script exit

In scripts or short-lived processes, call `flush()` to ensure all async ingests complete:
```python
nirixa = NirixaClient(api_key="nirixa-your-key")

# ... your code ...

nirixa.flush()
```
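Why flushing matters: with `async_ingest=True`, events sit in a background queue rather than blocking the caller, so a process that exits immediately can drop unsent events. The sketch below shows the generic queue-plus-worker pattern that makes `flush()` necessary (the `AsyncIngester` class is a hypothetical stand-in, not nirixa's implementation):

```python
# Generic async-ingest pattern: track() enqueues and returns immediately;
# a daemon worker drains the queue; flush() blocks until it is empty.
import queue
import threading

class AsyncIngester:
    def __init__(self):
        self._q = queue.Queue()
        self.sent = []
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            event = self._q.get()
            self.sent.append(event)   # stand-in for an HTTP POST to the API
            self._q.task_done()

    def track(self, event):
        self._q.put(event)            # non-blocking — zero added latency

    def flush(self):
        self._q.join()                # wait until every queued event is ingested

ingester = AsyncIngester()
for i in range(100):
    ingester.track({"call": i})
ingester.flush()
```

Long-running servers rarely need an explicit flush — the worker keeps up between requests — but short-lived scripts and CLIs exit before the queue drains unless one is made.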
## Install with provider extras

```bash
pip install "nirixa[openai]"
pip install "nirixa[anthropic]"
pip install "nirixa[gemini]"
pip install "nirixa[all]"  # installs every supported provider
```
## Links

- Dashboard: nirixa.in
- Docs: nirixa.in/docs
- JS/TS SDK: `npm install nirixa`
- Email: nirixaai@gmail.com
निरीक्षा — Observe everything.