Prysm AI — Observability SDK for LLM applications. One-line integration for OpenAI, Anthropic, and any OpenAI-compatible provider.

Prysm AI — Python SDK

Observability for LLM applications. One line of code.

Prysm wraps your existing OpenAI client and routes every request through the Prysm proxy, capturing latency, token usage, cost, errors, and full request/response payloads — with zero changes to your application logic.



Installation

pip install prysm

Quick Start

import openai
from prysm import monitor

# Your existing OpenAI client
client = openai.OpenAI(api_key="sk-...")

# Wrap it with Prysm — that's it
monitored = monitor(client, prysm_key="sk-prysm-...")

# Every call is now tracked
response = monitored.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)

print(response.choices[0].message.content)

Your dashboard at app.prysmai.io now shows the request with full metrics: latency, tokens, cost, model, and the complete request/response.


How It Works

The SDK creates a new OpenAI client that points at the Prysm proxy rather than directly at the OpenAI API. The proxy:

  1. Authenticates the request using your sk-prysm-* API key
  2. Forwards the request to OpenAI (or any configured provider) using your project's stored credentials
  3. Captures the full request, response, timing, token counts, and cost
  4. Returns the response to your application — unchanged

Your application code stays exactly the same. The only difference is the client instance.

Your App  →  Prysm Proxy  →  OpenAI API
              ↓
         Metrics stored
         (latency, tokens,
          cost, errors)
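The wrapping step itself can be sketched in a few lines. This is a hypothetical re-implementation, not the SDK's actual code: `FakeClient` stands in for `openai.OpenAI` so the sketch runs anywhere, and `monitor_sketch` mirrors the documented behavior of rebuilding a client of the same type, pointed at the proxy.

```python
from dataclasses import dataclass

@dataclass
class FakeClient:
    # Stand-in for openai.OpenAI so this sketch runs without the openai package.
    api_key: str
    base_url: str = "https://api.openai.com/v1"

def monitor_sketch(client, prysm_key, base_url="https://proxy.prysmai.io/v1"):
    # Build a client of the same type as the one passed in, authenticated with
    # the Prysm key and pointed at the proxy. The proxy holds the real provider
    # credentials and forwards each request upstream.
    return type(client)(api_key=prysm_key, base_url=base_url)

original = FakeClient(api_key="sk-...")
wrapped = monitor_sketch(original, prysm_key="sk-prysm-...")
print(type(wrapped).__name__, wrapped.base_url)
```

Because only the client instance changes, every method of the wrapped client keeps its original signature and return type.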

API Reference

monitor(client, prysm_key, base_url, timeout)

The primary entry point. Wraps an existing OpenAI client.

| Parameter | Type | Default | Description |
|---|---|---|---|
| client | openai.OpenAI or openai.AsyncOpenAI | required | Your existing OpenAI client |
| prysm_key | str | PRYSM_API_KEY env var | Your Prysm API key (sk-prysm-...) |
| base_url | str | https://proxy.prysmai.io/v1 | Prysm proxy URL |
| timeout | float | 120.0 | Request timeout in seconds |

Returns: A new OpenAI client of the same type (sync or async) routed through Prysm.

# Sync
monitored = monitor(openai.OpenAI(api_key="sk-..."), prysm_key="sk-prysm-...")

# Async
monitored = monitor(openai.AsyncOpenAI(api_key="sk-..."), prysm_key="sk-prysm-...")

PrysmClient(prysm_key, base_url, timeout)

Lower-level client for more control.

from prysm import PrysmClient

prysm = PrysmClient(prysm_key="sk-prysm-...")

# Create sync client
client = prysm.openai()

# Create async client
async_client = prysm.async_openai()

prysm_context — Request Metadata

Attach metadata (user ID, session ID, custom tags) to every request for filtering and grouping in your dashboard.

from prysm import prysm_context

# Set globally
prysm_context.set(user_id="user_123", session_id="sess_abc")

# Or use scoped context
with prysm_context(user_id="user_456", metadata={"env": "production"}):
    response = monitored.chat.completions.create(...)
    # This request is tagged with user_456

# Outside the block, context reverts to user_123

| Method | Description |
|---|---|
| prysm_context.set(user_id, session_id, metadata) | Set global context for all subsequent requests |
| prysm_context.get() | Get the current context object |
| prysm_context.clear() | Reset context to defaults |
| prysm_context(user_id, session_id, metadata) | Use as a context manager for scoped metadata |

Environment Variables

The SDK reads these environment variables as fallbacks:

| Variable | Description |
|---|---|
| PRYSM_API_KEY | Your Prysm API key (used if prysm_key is not passed) |
| PRYSM_BASE_URL | Custom proxy URL (used if base_url is not passed) |

export PRYSM_API_KEY="sk-prysm-your-key-here"
from prysm import monitor
import openai

# No need to pass prysm_key — reads from env
monitored = monitor(openai.OpenAI(api_key="sk-..."))
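The fallback order is simple: an explicit argument always wins over the environment. A sketch of that resolution (resolve_key is illustrative, not a public SDK function):

```python
import os

def resolve_key(prysm_key=None):
    # An explicit argument wins; otherwise fall back to the environment.
    key = prysm_key or os.environ.get("PRYSM_API_KEY")
    if not key:
        raise ValueError("Pass prysm_key= or set PRYSM_API_KEY")
    return key

os.environ["PRYSM_API_KEY"] = "sk-prysm-from-env"
print(resolve_key())                     # falls back to the env var
print(resolve_key("sk-prysm-explicit"))  # explicit argument wins
```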

Streaming

Streaming works exactly as you'd expect — no changes needed:

stream = monitored.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about AI"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

The proxy captures Time to First Token (TTFT), total latency, and full streamed content.
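TTFT is simply the delay between starting the request and the first chunk of content arriving. The proxy measures it server-side, but the idea can be sketched client-side against any chunk iterator:

```python
import time

def consume_stream(chunks):
    # Returns (full_text, ttft_seconds). TTFT is the elapsed time from the
    # start of iteration until the first chunk arrives.
    start = time.monotonic()
    ttft, parts = None, []
    for chunk in chunks:
        if ttft is None:
            ttft = time.monotonic() - start
        parts.append(chunk)
    return "".join(parts), ttft

text, ttft = consume_stream(iter(["Hel", "lo"]))
print(text, f"TTFT={ttft:.6f}s")
```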


Async Support

Full async support with the same API:

import asyncio
import openai
from prysm import monitor

async def main():
    client = openai.AsyncOpenAI(api_key="sk-...")
    monitored = monitor(client, prysm_key="sk-prysm-...")

    response = await monitored.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello async!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

Self-Hosted Proxy

If you're running the Prysm proxy on your own infrastructure:

monitored = monitor(
    client,
    prysm_key="sk-prysm-...",
    base_url="http://localhost:3000/v1",  # Your self-hosted proxy
)

What Gets Captured

Every request through the SDK is logged with:

| Metric | Description |
|---|---|
| Model | Which model was called (gpt-4o, gpt-4o-mini, etc.) |
| Latency | Total request duration in milliseconds |
| TTFT | Time to first token (streaming requests) |
| Prompt tokens | Input token count |
| Completion tokens | Output token count |
| Cost | Calculated cost based on model pricing |
| Status | Success, error, or timeout |
| Request body | Full messages array and parameters |
| Response body | Complete model response |
| User ID | From prysm_context (if set) |
| Session ID | From prysm_context (if set) |
| Custom metadata | Any key-value pairs from prysm_context |
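Cost is derivable from the two token counts plus a per-model price table. The prices below are placeholders for illustration, not Prysm's actual pricing data:

```python
# Hypothetical USD prices per million tokens; real pricing tables vary by
# model and change over time.
PRICES = {"gpt-4o-mini": {"input": 0.15, "output": 0.60}}

def estimate_cost(model, prompt_tokens, completion_tokens):
    # Input and output tokens are priced separately, then summed.
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

cost = estimate_cost("gpt-4o-mini", prompt_tokens=1_000, completion_tokens=500)
print(f"${cost:.6f}")
```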

Error Handling

The SDK preserves OpenAI's error types. If the upstream API returns an error, you get the same exception you'd get without Prysm:

try:
    response = monitored.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "test"}],
    )
except openai.AuthenticationError:
    print("Invalid API key")
except openai.RateLimitError:
    print("Rate limited")
except openai.APIError as e:
    print(f"API error: {e}")

Prysm-specific errors (invalid Prysm key, proxy unreachable) also surface as standard OpenAI exceptions so your existing error handling works unchanged.
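Because failures arrive as the standard OpenAI exception types, generic retry patterns work unchanged. A minimal exponential-backoff sketch; RateLimitError here is a local stand-in so the snippet runs without the openai package:

```python
import time

class RateLimitError(Exception):
    # Stand-in for openai.RateLimitError.
    pass

def call_with_backoff(call, retries=3, base_delay=1.0):
    # Retry on rate limits with exponential backoff; re-raise on the last attempt.
    for attempt in range(retries):
        try:
            return call()
        except RateLimitError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

attempts = {"n": 0}

def flaky():
    # Fails twice with a rate limit, then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

print(call_with_backoff(flaky, base_delay=0.0))  # prints "ok"
```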


Requirements

  • Python 3.9+
  • openai >= 1.0.0
  • httpx >= 0.24.0

Development

git clone https://github.com/osasisorae/prysmai.git
cd prysmai/sdk

# Install with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

Test Coverage

The SDK includes 41 tests covering:

  • Client initialization and validation
  • Environment variable fallbacks
  • Sync and async client creation
  • monitor() function behavior
  • Context management (global, scoped, nested)
  • Header injection via custom transports
  • Full integration tests with mock HTTP server
  • Error propagation (401, 500)

License

MIT — see LICENSE for details.


Built by Prysm AI — See inside your AI.
