
nirixa

AI Observability & Cost Intelligence — track token costs, latency, and hallucination risk for every LLM call, with zero friction.

pip install nirixa

Quick Start

from nirixa import NirixaClient
import openai

nirixa = NirixaClient(api_key="nirixa-your-key")

# Wrap any LLM call — response is completely unchanged
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
)

print(response.choices[0].message.content)

Three Ways to Integrate

1. wrap() — Transparent client proxy (recommended)

Wrap a provider client once and use it exactly like the original. Model, provider, and prompt are auto-extracted from every call — no duplication.

from nirixa import NirixaClient
from openai import OpenAI

nirixa = NirixaClient(api_key="nirixa-your-key")
openai  = OpenAI()

ai = nirixa.wrap(openai, feature="/api/chat", user=user_id)

# Use ai exactly like openai — tracking is automatic
response = ai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

Works with any provider:

import anthropic

claude = nirixa.wrap(anthropic.Anthropic(), feature="/api/analyze")
response = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this..."}]
)
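A transparent proxy like wrap() can be built in plain Python with `__getattr__` delegation. The sketch below is illustrative only (TrackingProxy, FakeClient, and on_call are hypothetical names, not nirixa internals): it forwards every attribute to the wrapped client, times any callable, and returns the response unchanged.

```python
import time

class TrackingProxy:
    """Illustrative transparent proxy: delegates every attribute lookup
    to the wrapped client and times any callable it hands back."""

    def __init__(self, client, on_call):
        self._client = client
        self._on_call = on_call

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if callable(attr):
            def timed(*args, **kwargs):
                start = time.perf_counter()
                result = attr(*args, **kwargs)      # response passes through untouched
                self._on_call(name, time.perf_counter() - start)
                return result
            return timed
        # A real implementation would wrap non-callable sub-clients
        # (like .chat.completions) recursively.
        return attr

class FakeClient:
    def complete(self, prompt):
        return prompt.upper()

calls = []
ai = TrackingProxy(FakeClient(), on_call=lambda name, dt: calls.append(name))
print(ai.complete("hello"))  # HELLO — caller sees the original return value
print(calls)                 # ['complete']
```

The key design point is that the caller's code does not change: the proxy quacks like the original client, which is why wrapped and unwrapped clients are interchangeable.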

2. track() — Explicit per-call wrapping

prompt = "Summarize this document..."
response = nirixa.track(
    feature="/api/summarize",
    user="user-123",
    prompt=prompt,   # optional: improves hallucination scoring
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
)

model and provider are auto-detected from the response — no need to pass them.

3. Auto-patch — Zero code changes

Patch provider SDKs globally at app startup. Every call is tracked without touching existing code.

from nirixa import NirixaClient
from nirixa.middleware import patch_openai, patch_all

nirixa = NirixaClient(api_key="nirixa-your-key")

# Patch a specific provider
patch_openai(nirixa, feature="/api/chat")

# Or patch every installed provider at once
patch_all(nirixa)
# [nirixa] Patched 4 providers: OpenAI, Anthropic, Groq, Gemini
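Global auto-patching of this kind generally works by replacing the provider method on the SDK class itself, so every existing and future instance picks up the wrapper. A minimal illustration with a stand-in SDK (FakeSDKClient and patch_fake_sdk are hypothetical, not nirixa code):

```python
class FakeSDKClient:
    """Stand-in for a provider SDK client."""
    def create(self, prompt):
        return f"echo:{prompt}"

events = []

def patch_fake_sdk(log):
    """Replace the class method with a tracked wrapper, keeping a
    reference to the original so behavior is unchanged."""
    original = FakeSDKClient.create
    def tracked(self, prompt):
        result = original(self, prompt)
        log.append({"prompt": prompt, "result": result})
        return result
    FakeSDKClient.create = tracked

patch_fake_sdk(events)

# Existing call sites are untouched; tracking happens automatically.
client = FakeSDKClient()
client.create("hi")
print(events)  # [{'prompt': 'hi', 'result': 'echo:hi'}]
```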

Module-level API

Skip NirixaClient() and use the module-level singleton:

import nirixa

nirixa.init(api_key="nirixa-your-key")

# track
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(...)
)

# wrap
ai = nirixa.wrap(openai_client, feature="/api/chat")

# flush before script exit
nirixa.get_client().flush()

Supported Providers

Provider       | Auto-detected via       | Patch function
OpenAI         | choices + usage         | patch_openai
Anthropic      | content + usage         | patch_anthropic
Groq           | OpenAI-compatible shape | patch_groq
Google Gemini  | usage_metadata          | patch_gemini
Mistral        | OpenAI-compatible shape | patch_mistral
Together AI    | OpenAI-compatible shape | patch_together
Ollama         | prompt_eval_count       | patch_ollama
AWS Bedrock    | ResponseMetadata        |
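The "Auto-detected via" column suggests detection works by inspecting the shape of the provider's response. A minimal sketch of that heuristic, assuming plain-dict responses (real SDK responses are typed objects, and `detect_provider` is a hypothetical helper, not nirixa's API):

```python
def detect_provider(response: dict) -> str:
    """Guess the provider from which fields the response carries,
    mirroring the table above."""
    if "usage_metadata" in response:
        return "gemini"
    if "prompt_eval_count" in response:
        return "ollama"
    if "ResponseMetadata" in response:
        return "bedrock"
    if "choices" in response and "usage" in response:
        return "openai-compatible"   # OpenAI, Groq, Mistral, Together
    if "content" in response and "usage" in response:
        return "anthropic"
    return "unknown"

print(detect_provider({"choices": [], "usage": {}}))  # openai-compatible
print(detect_provider({"content": [], "usage": {}}))  # anthropic
print(detect_provider({"usage_metadata": {}}))        # gemini
```

Note that the distinctive fields (usage_metadata, prompt_eval_count, ResponseMetadata) must be checked before the shared OpenAI-compatible shape, or they would be misclassified.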

Configuration

nirixa = NirixaClient(
    api_key="nirixa-your-key",          # Required
    host="https://api.nirixa.in",       # Default
    score_hallucinations=True,  # Hallucination risk scoring (LOW/MEDIUM/HIGH)
    async_ingest=True,          # Non-blocking — zero added latency
    debug=False,                # Log each tracked call to console
)

What Gets Tracked

Metric             | Description
Token cost         | Per-call USD cost by feature and model
Latency            | p50 / p95 / p99 response times
Hallucination risk | LOW / MEDIUM / HIGH heuristic scoring
Prompt drift       | Output variance over time
Error rate         | Failed calls by feature
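The p50 / p95 / p99 latency figures can be computed from recorded samples with the nearest-rank method. This is a generic sketch of that calculation, not nirixa's internal code:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value at position ceil(p/100 * n)
    in the sorted sample list."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 95, 310, 88, 102, 97, 1500, 110, 105, 99]
for p in (50, 95, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
# p50: 102 ms — the single 1500 ms outlier dominates p95 and p99
```

This is why p95/p99 matter for LLM calls: the median can look healthy while tail latency is an order of magnitude worse.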

flush() — Before script exit

In scripts or short-lived processes, call flush() before exit to ensure every asynchronously queued event has been delivered:

nirixa = NirixaClient(api_key="nirixa-your-key")
# ... your code ...
nirixa.flush()
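A non-blocking ingest pipeline with a blocking flush() is typically built from a queue and a background worker thread. The sketch below illustrates that general pattern (AsyncIngester is a hypothetical class, not part of nirixa):

```python
import queue
import threading

class AsyncIngester:
    """Illustrative async-ingest pattern: events go onto a queue and a
    daemon worker sends them, so the calling thread never blocks."""

    def __init__(self):
        self._queue = queue.Queue()
        self.sent = []
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            event = self._queue.get()
            self.sent.append(event)   # real code would POST to the API here
            self._queue.task_done()

    def ingest(self, event):
        self._queue.put(event)        # returns immediately: zero added latency

    def flush(self):
        self._queue.join()            # block until every queued event is sent

ingester = AsyncIngester()
for i in range(3):
    ingester.ingest({"call": i})
ingester.flush()
print(len(ingester.sent))  # 3
```

Without the final flush(), a short-lived script could exit while events are still sitting in the queue, which is exactly the failure mode the section above warns about.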

Install with provider extras

pip install "nirixa[openai]"
pip install "nirixa[anthropic]"
pip install "nirixa[gemini]"
pip install "nirixa[all]"   # installs every supported provider

निरीक्षा (nirīkṣā, Sanskrit for "observation") — Observe everything.
