# Glass AI Python SDK

*OpenTelemetry-powered observability for your AI applications*
The Glass Python SDK provides seamless OpenTelemetry tracing for AI/LLM applications. Automatically instrument OpenAI, Anthropic (Claude), and Google (Gemini) API calls, track function execution, and gain deep visibility into your AI workflows.
## ✨ Features

- 🔌 Zero-config instrumentation for OpenAI, Anthropic, and Google Generative AI
- 🎯 Decorator-based tracing with `@trace` for any function
- 📊 Interaction tracking with user context, sessions, and metadata
- 🔄 Full async/await support, including async generators
- 🛡️ Type-safe with full typing support
- 🐛 Debug mode with console output for local development
## 📦 Installation

```bash
pip install glass-ai
```
## 🚀 Quick Start

```python
from glass import init, trace, interaction
from openai import OpenAI

# Initialize Glass with your API key
init(api_key="your-glass-api-key")

# Your OpenAI calls are now automatically traced!
client = OpenAI()

@trace()
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Track user interactions with metadata
with interaction(user_id="user_123", session_id="sess_abc") as ctx:
    result = generate_response("What is the meaning of life?")
    ctx.finish(output={"response": result})
```
## 📖 API Reference

### `init()`

Initializes the Glass SDK and configures OpenTelemetry tracing.
```python
from glass import init

# Basic initialization
init(api_key="your-api-key")

# With debug mode (logs traces to console)
init(api_key="your-api-key", debug=True)

# Skip default instrumentations if you want full control
init(
    api_key="your-api-key",
    skip_default_instrumentations=True
)

# With custom instrumentations
from opentelemetry.instrumentation.requests import RequestsInstrumentor

init(
    api_key="your-api-key",
    instrumentations=[RequestsInstrumentor()],
    skip_default_instrumentations=True
)
```
#### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str \| None` | `None` | Your Glass API key. Falls back to the `GLASS_API_KEY` env var. |
| `instrumentations` | `list[Any] \| None` | `None` | Custom OpenTelemetry instrumentors. |
| `skip_default_instrumentations` | `bool` | `False` | Skip auto-instrumenting OpenAI, Anthropic, and Gemini. |
| `debug` | `bool` | `False` | Enable console output for debugging. |
### `@trace()`

Decorator that wraps functions with OpenTelemetry tracing. Automatically records function arguments, return values, and exceptions.
```python
from glass import trace

# Basic usage - span name defaults to the function name
@trace()
def process_data(data: dict) -> dict:
    return {"processed": True, **data}

# Custom span name
@trace(name="custom-operation")
def my_function():
    pass

# With custom attributes
@trace(attributes={"operation": "embedding", "model": "text-embedding-3-small"})
def create_embedding(text: str) -> list[float]:
    # Your embedding logic
    return [0.1, 0.2, 0.3]
```
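Because `@trace` records exceptions, a failing function still raises normally; the error is captured on the span before it propagates. A minimal sketch (`flaky_call` is a hypothetical function, not part of the SDK):

```python
from glass import trace

@trace()
def flaky_call(value: int) -> int:
    # An exception raised here is recorded on the span
    # and then re-raised to the caller unchanged.
    if value < 0:
        raise ValueError("value must be non-negative")
    return value * 2

try:
    flaky_call(-1)
except ValueError:
    pass  # the span still carries the exception details
```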
#### Async Support

The `@trace` decorator works seamlessly with async functions and async generators:
```python
import asyncio
from glass import trace

@trace()
async def async_process(data: str) -> str:
    await asyncio.sleep(0.1)
    return f"processed: {data}"

@trace()
async def async_stream(items: list[str]):
    for item in items:
        await asyncio.sleep(0.1)
        yield item.upper()
```
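For completeness, a usage sketch (not from the original docs) showing how the traced coroutines above are consumed; the generator's span is assumed to stay open until iteration finishes:

```python
import asyncio

async def main() -> None:
    print(await async_process("hello"))  # "processed: hello"

    # The async generator's span closes when iteration completes
    async for item in async_stream(["a", "b", "c"]):
        print(item)  # "A", "B", "C"

asyncio.run(main())
```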
#### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str \| None` | `None` | Custom span name. Defaults to the function name. |
| `attributes` | `dict[str, Any] \| None` | `None` | Additional attributes to attach to the span. |
### `interaction()`

Context manager for tracking user interactions. Sets metadata that propagates to all nested traced functions and can create root spans.
```python
from glass import interaction, trace

@trace()
def call_llm(prompt: str) -> str:
    # This span will inherit user_id and session_id from the interaction
    return "LLM response"

# Sync usage
with interaction(user_id="user_123", session_id="sess_abc", input="Hello!") as ctx:
    result = call_llm("Hello!")
    ctx.finish(output={"response": result})

# Async usage (async_call_llm would be an async variant of call_llm)
async with interaction(user_id="user_123") as ctx:
    result = await async_call_llm("Hello!")
    ctx.finish(output={"response": result})
```
#### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `user_id` | `str \| None` | `None` | Identifier for the user. |
| `session_id` | `str \| None` | `None` | Session identifier. |
| `input` | `str \| None` | `None` | The user's input/query. |
| `service` | `str \| None` | `None` | Service name for routing. |
| `**kwargs` | `Any` | - | Additional metadata key-value pairs. |
#### Interaction Methods

| Method | Description |
|---|---|
| `finish(output)` | Record the final output of the interaction. |
| `set_attribute(key, value)` | Set a custom attribute on the span. |
| `record_exception(exception)` | Record an exception with error status. |
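Putting these methods together, a sketch of an interaction that attaches extra metadata and handles failure (`call_llm` as defined above; the `experiment` keyword is an illustrative metadata kwarg):

```python
from glass import interaction

with interaction(user_id="user_123", input="Hello!", experiment="prompt_v2") as ctx:
    try:
        result = call_llm("Hello!")
        ctx.set_attribute("response_length", len(result))
        ctx.finish(output={"response": result})
    except Exception as exc:
        ctx.record_exception(exc)  # mark the interaction as failed
        raise
```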
### `task_span()`

Context manager for creating task spans with explicit input/output recording. Useful for tracking discrete units of work.
```python
from glass import task_span

# compute_embedding and search are placeholders for your own logic

# Sync usage
with task_span("embedding-task", attributes={"model": "ada-002"}) as task:
    task.record_input({"text": "Hello, world!"})
    embedding = compute_embedding("Hello, world!")
    task.record_output({"embedding": embedding, "dimensions": 1536})

# Async usage
async with task_span("async-task") as task:
    task.record_input({"query": "search term"})
    results = await search(query="search term")
    task.record_output({"results": results})
```
#### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | The name of the task span. |
| `attributes` | `dict[str, Any] \| None` | `None` | Additional attributes for the span. |
#### TaskSpan Methods

| Method | Description |
|---|---|
| `record_input(data)` | Record input data for the task. |
| `record_output(data)` | Record output data for the task. |
| `set_attribute(key, value)` | Set a custom attribute on the span. |
| `record_exception(exception)` | Record an exception with error status. |
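As with interactions, `record_exception` lets a task span capture failures. A sketch under the same assumptions (`compute_embedding` again stands in for your own logic):

```python
from glass import task_span

with task_span("embedding-task") as task:
    task.record_input({"text": "Hello, world!"})
    try:
        embedding = compute_embedding("Hello, world!")
        task.record_output({"dimensions": len(embedding)})
    except Exception as exc:
        task.record_exception(exc)  # capture the failure on the span
        raise
```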
## 🤖 Supported AI Providers

Glass automatically instruments the following AI providers out of the box:

| Provider | Package | Auto-Instrumented |
|---|---|---|
| OpenAI | `openai` | ✅ Yes |
| Anthropic (Claude) | `anthropic` | ✅ Yes |
| Google Generative AI (Gemini) | `google-generativeai` | ✅ Yes |
All API calls to these providers are automatically traced with:
- Request/response payloads
- Token usage metrics
- Model information
- Latency measurements
- Error tracking
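No decorators are needed for these calls; once `init()` has run, a plain SDK call is traced. A sketch using the `anthropic` package (the model name and parameters are illustrative, not prescribed by Glass):

```python
from glass import init
from anthropic import Anthropic

init(api_key="your-glass-api-key")

# This call is traced automatically: payloads, token usage,
# model info, and latency are recorded without any decorator.
client = Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude!"}],
)
print(message.content)
```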
## ⚙️ Configuration

### Environment Variables

| Variable | Description |
|---|---|
| `GLASS_API_KEY` | Your Glass API key (alternative to passing it in code) |
### Example with Environment Variables

```bash
export GLASS_API_KEY="your-api-key"
```

```python
from glass import init

# API key is read from the environment
init()
```
## 🐛 Debug Mode

Enable debug mode to see traces in your console during development:

```python
from glass import init

init(api_key="your-api-key", debug=True)
```

This will output span information to stderr, helping you understand the trace structure without needing to check the Glass dashboard.
## 🔗 Combining Primitives

Glass primitives compose naturally to build comprehensive traces:

```python
from glass import init, trace, interaction, task_span
from openai import OpenAI

init(api_key="your-api-key")
client = OpenAI()

@trace()
def retrieve_context(query: str) -> list[str]:
    # Retrieval logic here
    return ["context 1", "context 2"]

@trace()
def generate_response(query: str, context: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Context: {context}"},
            {"role": "user", "content": query}
        ]
    )
    return response.choices[0].message.content

@trace(name="rag-pipeline")
def rag_query(query: str) -> str:
    with task_span("retrieval") as task:
        task.record_input({"query": query})
        context = retrieve_context(query)
        task.record_output({"num_docs": len(context)})
    return generate_response(query, context)

# Track the full user interaction
with interaction(user_id="user_123", input="What is quantum computing?") as ctx:
    result = rag_query("What is quantum computing?")
    ctx.finish(output={"answer": result})
```
This creates a rich trace hierarchy:

```
interaction (user_id=user_123)
└── rag-pipeline
    ├── retrieval (task_span)
    │   └── retrieve_context
    └── generate_response
        └── OpenAI chat.completions.create (auto-instrumented)
```
## 📋 Requirements
- Python 3.9+
- OpenTelemetry SDK and API
## 📄 License
MIT License - see LICENSE for details.
## 🔗 Links
- 📖 Documentation
- 🏠 Website
Built with ❤️ by Glass