# Olakai Python SDK

Auto-instrumentation SDK for monitoring and tracking LLM usage (OpenAI, Anthropic, etc.). Monitor your AI applications with zero code changes.
## What's New in v1.0.0 🎉

First stable release! The Olakai Python SDK is now production-ready with a stable API for auto-instrumentation of LLM providers.

- ✅ Production stable - v1.0.0 marks the first stable release
- ✅ Simplified payload - A unified `customData` field replaces `customDimensions` and `customMetrics`
- ✅ New `olakai_event()` function - Manually send event reports when needed
- ✅ Streamlined session management - `chatId` removed from context; sessions are managed internally via `sessionId`
- ✅ Auto-instrument OpenAI - One line to monitor all OpenAI calls
- ✅ Zero code changes - Works with existing OpenAI code
## Quick Start (30 seconds)

### Installation

```bash
pip install olakai-sdk
pip install openai  # Install the OpenAI SDK separately
```
### Basic Usage

```python
from olakaisdk import olakai_config, instrument_openai
from openai import OpenAI

# 1. Configure Olakai (one-time setup)
olakai_config("your-olakai-api-key")

# 2. Auto-instrument OpenAI
instrument_openai()

# 3. Use OpenAI normally - monitoring happens automatically!
client = OpenAI(api_key="your-openai-key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# That's it! Your call is now tracked with:
# - Token counts (input/output)
# - Model name
# - API key (for cost tracking)
# - Latency
# - Request/response content
```

Check your Olakai dashboard to see the tracked data!
## Features

### Automatic Tracking

After calling `instrument_openai()`, the SDK automatically captures:
- Token usage - Prompt tokens, completion tokens, total tokens
- Cost tracking - API key identification for backend cost calculation
- Model information - Which model was used (gpt-4, gpt-3.5-turbo, etc.)
- Latency - Request duration in milliseconds
- Content - Prompts and responses (configurable)
- Errors - Automatic error tracking with context
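To make the captured fields concrete, here is a small sketch of pulling these values off an OpenAI-style chat completion response. The `extract_telemetry` helper and the mocked response are illustrative only, not part of olakai-sdk; the `usage` and `model` attributes mirror the shape of the official OpenAI response object.

```python
from types import SimpleNamespace

def extract_telemetry(response, latency_ms):
    """Read token counts and the model name off a chat-completion-shaped object."""
    return {
        "model": response.model,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "total_tokens": response.usage.total_tokens,
        "latency_ms": latency_ms,
    }

# Stand-in response so the sketch runs without a network call
fake_response = SimpleNamespace(
    model="gpt-4",
    usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34, total_tokens=46),
)
print(extract_telemetry(fake_response, latency_ms=820))
```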
### Context-Based Metadata

Add user and task metadata using context managers:

```python
from olakaisdk import olakai_context

with olakai_context(
    userEmail="user@example.com",
    userId="user-123",
    task="Customer Support"
):
    # All OpenAI calls within this context include the metadata
    response = client.chat.completions.create(...)
```

Note: Session tracking is handled automatically via an internal `sessionId`.
### Streaming Support

Works seamlessly with OpenAI's streaming API:

```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True  # Streaming is automatically handled!
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="")

# Telemetry is sent after the stream completes
```
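The deferred-telemetry behavior can be illustrated with a small generator wrapper. This is a toy sketch of the mechanism, not the SDK's internals (`stream_with_telemetry` and `on_complete` are invented names): the callback fires only once the caller has consumed the whole stream.

```python
def stream_with_telemetry(chunks, on_complete):
    """Yield chunks through unchanged, then report the full text once exhausted."""
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        yield chunk
    # This line only runs after the caller consumes the final chunk
    on_complete("".join(collected))

events = []
for piece in stream_with_telemetry(["Hel", "lo", "!"], events.append):
    pass

print(events)  # the complete text arrives only after the loop finishes
```

This is also why breaking out of a stream early (see Troubleshooting) prevents telemetry from being sent: the code after the final `yield` never runs.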
## Installation Options

```bash
# Basic installation
pip install olakai-sdk

# With OpenAI support
pip install "olakai-sdk[openai]"

# For development
pip install "olakai-sdk[dev]"
```

Requirements: Python 3.7+
## Usage Examples

### Minimal Example

```python
from olakaisdk import olakai_config, instrument_openai
from openai import OpenAI

olakai_config("olakai-api-key")
instrument_openai()

client = OpenAI(api_key="openai-key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
### With User Context

```python
from olakaisdk import olakai_config, instrument_openai, olakai_context
from openai import OpenAI

olakai_config("olakai-api-key")
instrument_openai()

client = OpenAI(api_key="openai-key")

# Add user metadata
with olakai_context(
    userEmail="customer@example.com",
    userId="customer-456",
    task="Customer Support",
    subTask="password-reset"
):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": "How do I reset my password?"}
        ]
    )

print(response.choices[0].message.content)
```
### With Custom Data

```python
with olakai_context(
    userEmail="user@example.com",
    task="Content Generation",
    customData={
        "environment": "production",
        "region": "us-east-1",
        "user_tier": "premium",
        "user_id": 12345,
        "session_length": 45.5,
        "is_premium": True
    }
):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Write a blog post"}]
    )
```
### Nested Contexts

Contexts can be nested, with inner contexts overriding outer values:

```python
# Outer context applies to all calls
with olakai_context(task="Customer Service", userEmail="support@example.com"):

    # Inner context adds/overrides specific fields
    with olakai_context(subTask="billing-inquiry"):
        response = client.chat.completions.create(...)
        # Has task="Customer Service", subTask="billing-inquiry"

    # Back to the outer context
    with olakai_context(subTask="technical-support"):
        response = client.chat.completions.create(...)
        # Has task="Customer Service", subTask="technical-support"
```
### Async Support

Works with async OpenAI calls:

```python
import asyncio
from openai import AsyncOpenAI
from olakaisdk import olakai_config, instrument_openai, olakai_context

async def main():
    olakai_config("olakai-api-key")
    instrument_openai()

    client = AsyncOpenAI(api_key="openai-key")
    with olakai_context(userEmail="user@example.com"):
        response = await client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Hello async world!"}]
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```
## Configuration

### Initialize the SDK

```python
from olakaisdk import olakai_config

# Basic configuration
olakai_config("your-api-key")

# With a custom endpoint
olakai_config("your-api-key", endpoint="https://custom.olakai.ai")

# With debug logging
olakai_config("your-api-key", debug=True)
```
### Instrumentation Options

```python
from olakaisdk import instrument_openai

# Default: capture everything
instrument_openai()

# Customize what to capture
instrument_openai(
    capture_inputs=True,    # Capture prompts/messages
    capture_outputs=True,   # Capture responses
    capture_api_keys=True   # Track API keys for cost analysis
)
```
### Privacy Controls

Disable input/output capture for sensitive data:

```python
instrument_openai(
    capture_inputs=False,   # Don't send prompts
    capture_outputs=False,  # Don't send responses
    capture_api_keys=True   # Still track tokens and costs
)
```
## API Reference

### Primary API (v1.0.0)

#### olakai_config(api_key, endpoint="https://app.olakai.ai", debug=False)

Initialize the Olakai SDK. Must be called before instrumentation.

Parameters:

- `api_key` (str): Your Olakai API key
- `endpoint` (str, optional): API endpoint URL
- `debug` (bool, optional): Enable debug logging

#### instrument_openai(capture_inputs=True, capture_outputs=True, capture_api_keys=True)

Auto-instrument the OpenAI SDK for monitoring.

Parameters:

- `capture_inputs` (bool): Capture prompts/messages
- `capture_outputs` (bool): Capture responses
- `capture_api_keys` (bool): Track API keys for cost analysis

Raises:

- `RuntimeError`: If the SDK has not been configured with `olakai_config()`
- `ImportError`: If the OpenAI SDK is not installed

#### olakai_context(**metadata)

Context manager to add metadata to LLM calls.

Parameters:

- `userEmail` (str, optional): User email for tracking
- `userId` (str, optional): User ID for explicit user tracking
- `task` (str, optional): High-level task category
- `subTask` (str, optional): Specific subtask
- `customData` (dict, optional): Custom metadata (string, int, float, or bool values)

Note: Session tracking is handled automatically via an internal `sessionId`.

Example:

```python
with olakai_context(userEmail="user@example.com", userId="user-123", task="Support"):
    # Your OpenAI calls here
    pass
```
#### uninstrument_openai()

Remove OpenAI instrumentation and restore the original OpenAI behavior.

#### is_instrumented()

Check whether OpenAI is currently instrumented.

Returns: `bool`

#### olakai_event(params)

Manually send a report of an AI interaction.

Parameters:

- `params` (OlakaiEventParams)

Where `OlakaiEventParams` has the fields:

- `prompt` (str): Interaction prompt
- `response` (str): Interaction response
- `userEmail` (str, optional): User email for tracking
- `userId` (str, optional): User ID for explicit user tracking
- `task` (str, optional): High-level task category
- `subTask` (str, optional): Specific subtask
- `customData` (dict, optional): Custom metadata (string, int, float, or bool values)
- `shouldScore` (bool, optional): Whether scoring should be applied to the data
- `tokens` (int, optional): Number of tokens used
- `requestTime` (int, optional): Duration of the interaction in milliseconds

Example:

```python
olakai_event(OlakaiEventParams(
    prompt="Test prompt",
    response="Test response",
    userEmail="test@example.com",
    userId="user-123",
    task="test-task"
))
```
### Legacy API (Deprecated)

The v0.4.0 decorator-based API is still available but deprecated. Use the primary API above instead:

- `@olakai_monitor()` - Manual decorator (use `instrument_openai()` instead)
- `@olakai_supervisor()` - Alias for `@olakai_monitor()` (deprecated)
- `olakai()` - Low-level API (use `olakai_event()` instead)
## How It Works

### Under the Hood

- Monkey patching: `instrument_openai()` wraps OpenAI's `chat.completions.create` methods
- Data extraction: Automatically extracts tokens, model, and latency from responses
- Context merging: Combines context metadata with the extracted data
- Async telemetry: Sends data to the Olakai API without blocking your code
- Error handling: Captures errors without affecting your application
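As a rough illustration of the monkey-patching step, here is a toy example against a fake client. None of these names come from olakai-sdk; the point is the pattern: keep a reference to the original method, swap in a wrapper that records timing, and delegate the actual call.

```python
import time

class FakeCompletions:
    """Stand-in for an OpenAI client's chat.completions namespace."""
    def create(self, **kwargs):
        return {"model": kwargs.get("model"), "ok": True}

captured = []  # telemetry records land here instead of going to an API

def instrument(obj):
    original = obj.create  # keep a handle so the patch could be undone later

    def wrapped(**kwargs):
        start = time.perf_counter()
        result = original(**kwargs)  # delegate to the real method
        captured.append({
            "model": kwargs.get("model"),
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result

    obj.create = wrapped  # the monkey patch itself

client = FakeCompletions()
instrument(client)
client.create(model="gpt-4")
print(captured[0]["model"])
```

Keeping the `original` reference around is also what makes `uninstrument_openai()`-style restoration possible.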
### Data Flow

```text
Your Code → OpenAI API → Response
    ↓                       ↓
Olakai Context     Extract Telemetry
    ↓                       ↓
    └──→ Merge & Send to Olakai API (async)
```
## Migration from v0.4.0

### Old Way (v0.4.0)

```python
from olakaisdk import olakai_config, olakai_monitor
from openai import OpenAI

olakai_config("api-key")

@olakai_monitor(
    userEmail="user@example.com",
    task="Support",
    customData={"model": "gpt-4"}
)
def get_response(prompt):
    client = OpenAI(api_key="openai-key")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

result = get_response("Hello")
```
### New Way (v1.0.0)

```python
from olakaisdk import olakai_config, instrument_openai, olakai_context
from openai import OpenAI

olakai_config("api-key")
instrument_openai()  # ← One-time setup

client = OpenAI(api_key="openai-key")

def get_response(prompt):
    # No decorator needed!
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Add metadata with a context when needed
with olakai_context(userEmail="user@example.com", task="Support"):
    result = get_response("Hello")
```
Key Improvements:
- ✅ No decorators needed
- ✅ Model name automatically captured
- ✅ Tokens automatically captured
- ✅ Works with existing OpenAI code
- ✅ Cleaner, more maintainable code
## Dashboard & Analytics

After setting up monitoring, visit your Olakai dashboard to see:
- Usage Analytics - API calls, tokens, trends over time
- Cost Tracking - Per-API-key usage for ROI analysis
- User Insights - Individual user behavior patterns
- Task Performance - Monitor different tasks and success rates
- Model Comparison - Compare performance across models
- Custom Data - Visualize your custom metadata
## Best Practices

### Do This ✅

- Initialize once: Call `olakai_config()` at app startup
- Instrument early: Call `instrument_openai()` before creating clients
- Use contexts: Add metadata with `olakai_context()` for rich analytics
- Track users: Always include `userEmail` when possible
- Organize tasks: Use consistent `task` and `subTask` names
- Custom data: Track environment, region, and features with `customData`

### Avoid This ❌

- Don't skip configuration: Always call `olakai_config()` first
- Don't log secrets: Never include passwords in prompts/responses
- Don't instrument twice: Check `is_instrumented()` before re-instrumenting
- Don't use decorators: The old `@olakai_monitor()` API is deprecated

### Security Tips

- Store API keys in environment variables
- Use `capture_inputs=False` / `capture_outputs=False` for sensitive data
- Review dashboard access controls
- Consider GDPR/privacy requirements for user tracking
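For the first tip, here is a minimal sketch of loading keys from the environment rather than hard-coding them. The variable names `OLAKAI_API_KEY` and `OPENAI_API_KEY` and the `load_key` helper are conventions chosen for this example; the SDK does not document reading them automatically, so pass the value to `olakai_config()` yourself.

```python
import os

def load_key(name: str) -> str:
    """Read an API key from the environment instead of hard-coding it."""
    value = os.environ.get(name, "")
    if not value:
        print(f"warning: {name} is not set")
    return value

# e.g. olakai_config(load_key("OLAKAI_API_KEY"))
olakai_key = load_key("OLAKAI_API_KEY")
openai_key = load_key("OPENAI_API_KEY")
```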
## Troubleshooting

### "SDK not initialized" error

```
RuntimeError: Olakai SDK not initialized. Call olakai_config() first.
```

Solution: Call `olakai_config()` before `instrument_openai()`.

### "OpenAI not installed" error

```
ImportError: OpenAI SDK not installed. Install with: pip install openai
```

Solution: `pip install openai`

### No data in dashboard

Possible causes:

- Check that your API key is correct
- Enable debug mode: `olakai_config("key", debug=True)`
- Verify network connectivity
- Check instrumentation: `is_instrumented()` should return `True`
### Streaming not working

Make sure you iterate through the entire stream:

```python
response = client.chat.completions.create(..., stream=True)

# ✅ Correct - iterate fully
for chunk in response:
    print(chunk.choices[0].delta.content)
# Telemetry is sent after the loop completes

# ❌ Wrong - don't break early
for chunk in response:
    if some_condition:
        break  # Telemetry won't be sent!
```
## Examples

See USAGE.md for more detailed examples and use cases.

Try the sample script:

```bash
python examples/basic_example.py
```
## Development

### Setup

```bash
git clone https://github.com/olakai/olakai-sdk-python
cd olakai-sdk-python
pip install -e ".[dev]"
```

### Run Tests

```bash
pytest
pytest tests/test_openai_instrumentation.py -v
```

### Code Quality

```bash
./tests/check.sh
```
## Support & Community

- Documentation: Olakai Docs
- Support: support@olakai.ai
- Issues: GitHub Issues
- Changelog: CHANGELOG.md

## License

MIT © Olakai
## What's Next?

- 🚀 Anthropic instrumentation (Claude support)
- 🚀 Google AI instrumentation (Gemini support)
- 🚀 Local model support (Ollama, LM Studio)
- 🚀 Enhanced streaming analytics
- 🚀 Cost optimization recommendations

Ready to monitor your AI application?

```bash
pip install olakai-sdk openai
```

```python
from olakaisdk import olakai_config, instrument_openai

olakai_config("your-api-key")
instrument_openai()
# Start building! 🚀
```

Happy monitoring!