# Raindrop Python SDK
The official Python SDK for Raindrop AI — track AI events, collect user signals, and instrument LLM applications with OpenTelemetry-based tracing.
## Installation

```shell
pip install raindrop-ai
```

Requires Python 3.10+.
## Quick Start

```python
import raindrop.analytics as raindrop

raindrop.init(api_key="your-api-key", tracing_enabled=True)

# Track an AI event
raindrop.track_ai(
    user_id="user-123",
    event="chat-completion",
    model="gpt-4",
    input="What is the weather?",
    output="It's sunny and 72°F.",
    convo_id="conv-456",
)
```
## Interactions

Use `begin()` and `finish()` for multi-step AI workflows:

```python
interaction = raindrop.begin(
    user_id="user-123",
    event="agent-run",
    input="Search for weather data",
    convo_id="conv-456",
)

# Update incrementally
interaction.set_property("region", "us-east")
interaction.add_attachments([
    raindrop.Attachment(type="code", value="print('hello')", language="python")
])

# Complete the interaction
interaction.finish(output="Found weather data for NYC")
```
### Resuming Interactions

Access the current interaction from nested functions:

```python
@raindrop.tool("sentiment_analyzer")
def analyze_sentiment(text: str):
    interaction = raindrop.resume_interaction()
    interaction.set_property("sentiment", "positive")
    return {"sentiment": "positive"}
```
## Decorators

Instrument functions with automatic span creation:

```python
@raindrop.interaction("my_workflow")
def run_workflow():
    ...

@raindrop.task("process_data")
def process():
    ...

@raindrop.tool("search")
def search(query: str):
    ...
```
## Spans

### Context Managers

```python
with raindrop.task_span("process_data"):
    result = do_processing()

with raindrop.tool_span("web_search"):
    results = search(query)
```
### Manual Spans

For async or distributed operations where you need explicit control:

```python
span = raindrop.start_span(kind="tool", name="async_search")
span.record_input({"query": "weather"})

# ... later, when the result arrives
span.record_output({"result": "sunny"})
span.end()
```
## Retroactive Tool Logging

Log tool calls after they complete, without wrapping them in spans:

```python
interaction = raindrop.begin(user_id="user-123", event="agent-run")

interaction.track_tool(
    name="web_search",
    input={"query": "weather in NYC"},
    output={"results": ["Sunny, 72°F"]},
    duration_ms=150,
)

interaction.track_tool(
    name="database_query",
    input={"query": "SELECT * FROM users"},
    duration_ms=50,
    error=ConnectionError("Connection timeout"),
)

interaction.finish(output="Done")
```
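The `duration_ms` values above can come from a simple wall-clock measurement around the call. A minimal stdlib sketch (the `slow_search` helper is a hypothetical stand-in for a real tool call):

```python
import time

def slow_search(query: str) -> dict:
    # Hypothetical stand-in for a real tool call
    time.sleep(0.05)
    return {"results": ["Sunny, 72°F"]}

start = time.perf_counter()
output = slow_search("weather in NYC")
duration_ms = int((time.perf_counter() - start) * 1000)
```

`time.perf_counter()` is monotonic and high-resolution, so it is a better fit for durations than `time.time()`, which can jump when the system clock is adjusted.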
## Signals

Track user feedback on AI outputs:

```python
# Basic signal
raindrop.track_signal(event_id="evt-123", name="thumbs_up")

# Feedback with comment
raindrop.track_signal(
    event_id="evt-123",
    name="user_feedback",
    signal_type="feedback",
    comment="This answer was helpful",
    sentiment="POSITIVE",
)

# Edit signal
raindrop.track_signal(
    event_id="evt-123",
    name="user_edit",
    signal_type="edit",
    after="The corrected response text",
)
```
## User Identification

```python
raindrop.identify("user-123", traits={"plan": "pro", "company": "Acme"})
```
## PII Redaction

Enable automatic redaction of emails, phone numbers, credit cards, SSNs, and other PII from AI inputs and outputs:

```python
raindrop.set_redact_pii(True)
```
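Conceptually, redaction replaces matched PII patterns with placeholders before content is sent. The toy regex sketch below only illustrates the idea; the SDK's actual patterns, categories, and placeholder format may differ:

```python
import re

# Toy patterns -- real PII redaction covers more categories than these two
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each matched pattern with a labeled placeholder
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Reach me at jane@example.com, SSN 123-45-6789"))
# -> Reach me at <EMAIL>, SSN <SSN>
```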
## Auto-Instrumentation

By default, Raindrop auto-instruments detected LLM libraries (OpenAI, Anthropic, Bedrock, etc.) via Traceloop. To disable:

```python
raindrop.init(api_key="your-key", tracing_enabled=True, auto_instrument=False)
```

Or selectively control which libraries are instrumented:

```python
from raindrop.analytics import Instruments

raindrop.init(
    api_key="your-key",
    tracing_enabled=True,
    instruments={Instruments.OPENAI},
)
```

Note: When auto-instrumentation is enabled, the SDK automatically suppresses noisy warnings from instrumentors for providers you don't use (e.g. "Error initializing MistralAI instrumentor") and from OTel attribute type validation (e.g. provider SDKs using sentinel types like `Omit`). Enable `set_debug_logs(True)` to see these messages for troubleshooting.
## Configuration

| Function | Description |
|---|---|
| `init(api_key, tracing_enabled=False, auto_instrument=True)` | Initialize the SDK |
| `set_debug_logs(True)` | Enable debug logging |
| `set_redact_pii(True)` | Enable PII redaction |
| `flush()` | Flush buffered events |
| `shutdown()` | Graceful shutdown (called automatically on exit) |
## Environment Variables

| Variable | Description |
|---|---|
| `TRACELOOP_TRACE_CONTENT` | Enable/disable content capture (default: `"true"`) |
| `OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT` | Max span attribute value length |
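These variables are read at startup, so export them before launching your application (the `python app.py` entry point below is illustrative):

```shell
# Disable capture of prompt/completion content and cap span attribute sizes
export TRACELOOP_TRACE_CONTENT="false"
export OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT="2000"

# python app.py   # then launch your app (illustrative entry point)
```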
## Development

```shell
# Install dependencies
pip install poetry
poetry install

# Run tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=raindrop

# Run specific test file
poetry run pytest tests/test_analytics.py -v
```
## License

MIT