SwisperStudio SDK - Zero-config tracing for LangGraph applications


SwisperStudio SDK

Simple, high-performance integration for tracing Swisper LangGraph applications.

v0.6.3 - HITL Trace Continuity:

  • 🔄 HITL continuation - Same trace for resumed graphs after interrupt
  • 🏷️ HITL markers - Observations marked with is_hitl_continuation
  • 🔗 Thread correlation - thread_id → trace_id mapping in Redis
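The thread correlation above can be sketched conceptually: the first run of a thread mints a trace_id, and a run resumed after a human-in-the-loop interrupt looks the same trace_id up again. This is an illustrative stand-in only (a plain dict plays the role of Redis, and the function name is invented, not the SDK's API):

```python
import uuid

# Stand-in for the Redis key space: thread_id -> trace_id
_thread_traces: dict[str, str] = {}

def trace_id_for_thread(thread_id: str) -> tuple[str, bool]:
    """Return (trace_id, is_hitl_continuation) for a graph run.

    The first run of a thread creates a new trace; a run resumed after an
    interrupt reuses the stored trace_id, so both halves of the execution
    land in the same trace and the resumed observations can be marked
    with an is_hitl_continuation flag.
    """
    if thread_id in _thread_traces:
        return _thread_traces[thread_id], True   # resumed after interrupt
    trace_id = str(uuid.uuid4())
    _thread_traces[thread_id] = trace_id
    return trace_id, False                       # fresh trace
```

In the real SDK the mapping lives in Redis so it survives process restarts between the interrupt and the resume.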

v0.5.4 - Full LLM Telemetry:

  • 🚀 50x faster - 500ms → 10ms overhead (Redis Streams)
  • 🧠 LLM reasoning - See thinking process (<think>...</think>)
  • 📊 Full LLM telemetry - Prompts, responses, token usage
  • 📡 Connection status - Heartbeat-based health monitoring
  • ⚙️ Per-node config - Fine-grained control

Installation

From PyPI (Recommended)

pip install "swisper-studio-sdk>=0.5.4"

That's it! No authentication needed.

From Source (Development)

git clone https://github.com/Fintama/swisper_studio.git
cd swisper_studio/sdk
pip install -e .

Note: Source installation requires Fintama GitHub organization access.

Quick Start (30 seconds)

1. Initialize at Startup (Redis Streams)

# In your main.py or startup code
from swisper_studio_sdk import safe_initialize, wrap_llm_adapter

# Async initialization (in lifespan or startup) - RECOMMENDED
status = await safe_initialize(
    redis_url="redis://swisper_studio_redis:6379",  # SwisperStudio Redis
    project_id="your-project-id",                    # From SwisperStudio
    enabled=True
)

if status["initialized"]:
    logger.info("SwisperStudio tracing enabled")
    
    # CRITICAL: Wrap LLM adapters to capture prompts/responses/tokens
    wrap_llm_adapter()
else:
    logger.info("Tracing disabled (Swisper continues normally)")

Note: safe_initialize() NEVER blocks or raises exceptions. If Redis is unavailable, Swisper continues normally with tracing disabled.

โš ๏ธ IMPORTANT: You MUST call wrap_llm_adapter() after successful initialization to capture LLM telemetry (prompts, responses, token usage). Without this, LLM-calling nodes will show as SPAN instead of GENERATION.

2. ONE LINE CHANGE to Enable Graph Tracing

# Before:
from langgraph.graph import StateGraph
graph = StateGraph(GlobalSupervisorState)

# After (change ONE line):
from swisper_studio_sdk import create_traced_graph
graph = create_traced_graph(GlobalSupervisorState, trace_name="supervisor")

# All nodes added to this graph are automatically traced!

3. Add Nodes as Normal

# Add nodes - they're automatically traced!
graph.add_node("intent_classification", intent_classification_node)
graph.add_node("memory", memory_node)
graph.add_node("planner", planner_node)
graph.add_node("ui_node", ui_node)

# Compile and run as usual
app = graph.compile()
result = await app.ainvoke(initial_state)

# All executions are now traced to SwisperStudio! 🎉

Two-Part Setup Explained

SwisperStudio tracing requires TWO components:

Component               Purpose              What it captures
create_traced_graph()   Wraps graph nodes    Execution flow, state, timing
wrap_llm_adapter()      Wraps LLM adapters   Prompts, responses, tokens

Both are required for full observability:

# ✅ CORRECT - Full observability
await safe_initialize(...)
wrap_llm_adapter()  # Captures LLM telemetry
graph = create_traced_graph(...)  # Captures node execution

# โŒ WRONG - Missing LLM telemetry
await safe_initialize(...)
# wrap_llm_adapter() NOT called!
graph = create_traced_graph(...)  # Nodes show as SPAN, not GENERATION

Features

Core Features:

  • ✅ One-line integration - create_traced_graph() instead of StateGraph()
  • ✅ Auto-instrumentation - All nodes automatically traced
  • ✅ State capture - Captures input/output state at each node
  • ✅ Error tracking - Captures exceptions and error messages
  • ✅ Nested observations - Supports parent-child relationships
  • ✅ Zero boilerplate - No decorators needed on individual nodes

v0.5.x Features:

  • ✅ Redis Streams - 50x faster than HTTP (500ms → 10ms)
  • ✅ LLM Reasoning - Captures <think>...</think> tags from DeepSeek R1, o1, etc.
  • ✅ Streaming Support - Captures full responses from streaming LLM calls
  • ✅ Connection Status - Verifies SwisperStudio consumer is running
  • ✅ Per-Node Config - Enable/disable reasoning per node
  • ✅ Memory Safety - Auto-cleanup prevents memory leaks
  • ✅ Full LLM Telemetry (v0.5.4) - Wraps LLMAdapterFactory directly for reliable capture

Advanced Usage

LLM Reasoning Capture

Control reasoning capture per node:

from swisper_studio_sdk import traced

# Enable reasoning with custom length limit
@traced("classify_intent", capture_reasoning=True, reasoning_max_length=20000)
async def classify_intent_node(state):
    # Captures <think>...</think> tags (up to 20KB)
    return state

# Disable reasoning for specific nodes
@traced("memory_node", capture_reasoning=False)
async def memory_node(state):
    # No reasoning captured (faster, less data)
    return state

# Use defaults (reasoning enabled, 50KB limit)
@traced("global_planner")
async def global_planner_node(state):
    return state

What gets captured:

  • ✅ LLM prompts (system + user messages)
  • ✅ Reasoning process (<think>...</think> tags)
  • ✅ Final responses (structured output or streaming)
  • ✅ Token usage (prompt + completion)
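Capturing the reasoning process amounts to pulling <think>...</think> spans out of the raw completion and truncating them to the configured limit. A minimal sketch of that idea (not the SDK's actual implementation; the function name is illustrative):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def extract_reasoning(text: str, max_length: int = 50_000) -> tuple[str, str]:
    """Split an LLM completion into (reasoning, answer).

    Reasoning is everything inside <think>...</think> tags, truncated to
    max_length characters (mirroring reasoning_max_length); the answer is
    the completion with those tag spans stripped out.
    """
    reasoning = "\n".join(m.strip() for m in THINK_RE.findall(text))[:max_length]
    answer = THINK_RE.sub("", text).strip()
    return reasoning, answer

r, a = extract_reasoning("<think>check the intent first</think>It is a booking request.")
# r == "check the intent first", a == "It is a booking request."
```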

Supported models:

  • DeepSeek R1 (with reasoning)
  • OpenAI o1/o3 (with reasoning)
  • GPT-4, Claude, Llama (no reasoning, just prompts + responses)

Manual Tracing (Optional)

For fine-grained control, use @traced decorator:

from swisper_studio_sdk import traced

# Full control over observation
@traced(
    name="intent_classification",
    observation_type="GENERATION",
    capture_reasoning=True,
    reasoning_max_length=10000
)
async def intent_classification_node(state):
    return state

Observation Types

  • AUTO - Auto-detect based on LLM data (default, recommended)
  • SPAN - Generic execution span
  • GENERATION - LLM generation
  • EVENT - Point-in-time event
  • TOOL - Tool call
  • AGENT - Agent execution

Architecture

Redis Streams (v0.4.0)

Your App (Swisper)         Redis Stream              SwisperStudio
       │                        │                           │
  @traced decorator             │                           │
       │                        │                           │
  XADD event (1-2ms) ──────────→│                           │
       │                        │                           │
  Return immediately            │                           │
  (zero latency!)               │                           │
                                │   Consumer reads batch    │
                                │←──────────────────────────┤
                                │                           │
                                │   Store in PostgreSQL     │
                                │──────────────────────────→│

Benefits:

  • 50x faster than HTTP (500ms → 10ms overhead)
  • No race conditions (ordered stream delivery)
  • Reliable (persistent queue, automatic retry)
  • Scalable (100k+ events/sec)

How It Works

  1. create_traced_graph() monkey-patches add_node() to auto-wrap functions
  2. @traced decorator publishes events to Redis Streams (1-2ms)
  3. SwisperStudio consumer reads from stream and stores in database
  4. Zero user-facing latency (fire-and-forget pattern)
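Step 1 above can be sketched without the SDK: add_node is intercepted so every registered function is replaced by a traced wrapper that records timing and fires an event on the way out. This is a simplified stand-in (a list plays the role of the Redis Stream; none of these names are the real SDK's):

```python
import functools
import time

class TracedGraph:
    """Minimal stand-in for a graph whose add_node auto-wraps node functions."""

    def __init__(self, trace_name: str):
        self.trace_name = trace_name
        self.nodes = {}
        self.events = []  # stand-in for the Redis Stream (XADD target)

    def add_node(self, name, fn):
        @functools.wraps(fn)
        def traced(state):
            start = time.perf_counter()
            result = fn(state)
            # fire-and-forget: in the real SDK this is an XADD (~1-2 ms)
            self.events.append({
                "node": name,
                "trace": self.trace_name,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
            return result
        self.nodes[name] = traced

graph = TracedGraph(trace_name="supervisor")
graph.add_node("planner", lambda state: {**state, "plan": "done"})
out = graph.nodes["planner"]({"input": "hi"})
# out now contains "plan", and graph.events holds one record for "planner"
```

Because the node function returns before anything touches the network, the caller sees no extra latency beyond the local event publish.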

Configuration

Required Settings

# In your config.py or .env
SWISPER_STUDIO_REDIS_URL: str = "redis://redis:6379"
SWISPER_STUDIO_PROJECT_ID: str = "your-project-id"
SWISPER_STUDIO_STREAM_NAME: str = "observability:events"

Optional Settings

# Reasoning capture
SWISPER_STUDIO_CAPTURE_REASONING: bool = True
SWISPER_STUDIO_REASONING_MAX_LENGTH: int = 50000  # 50 KB

# Connection verification
SWISPER_STUDIO_VERIFY_CONSUMER: bool = True  # Check consumer health
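These settings could be loaded from the environment with plain stdlib code. The variable names follow the lists above; the dataclass and helper are illustrative, not part of the SDK:

```python
import os
from dataclasses import dataclass

@dataclass
class StudioSettings:
    redis_url: str
    project_id: str
    stream_name: str
    capture_reasoning: bool
    reasoning_max_length: int
    verify_consumer: bool

def load_settings(env=os.environ) -> StudioSettings:
    """Build settings from SWISPER_STUDIO_* environment variables."""
    def flag(name, default):
        return env.get(name, str(default)).lower() in ("1", "true", "yes")
    return StudioSettings(
        redis_url=env.get("SWISPER_STUDIO_REDIS_URL", "redis://redis:6379"),
        project_id=env.get("SWISPER_STUDIO_PROJECT_ID", ""),
        stream_name=env.get("SWISPER_STUDIO_STREAM_NAME", "observability:events"),
        capture_reasoning=flag("SWISPER_STUDIO_CAPTURE_REASONING", True),
        reasoning_max_length=int(env.get("SWISPER_STUDIO_REASONING_MAX_LENGTH", "50000")),
        verify_consumer=flag("SWISPER_STUDIO_VERIFY_CONSUMER", True),
    )
```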

Requirements

  • Python 3.11+
  • LangGraph >= 1.0.0, < 2.0.0
  • langgraph-checkpoint >= 2.1.0 (v0.5.2 removed upper bound for HITL compatibility)
  • httpx >= 0.25.2
  • redis >= 5.0.0

Migration

From v0.5.x to v0.5.4

No code changes required. Just update the SDK:

pip install swisper-studio-sdk==0.5.4

From v0.4.x or earlier

  1. Update SDK: pip install "swisper-studio-sdk>=0.5.4"
  2. Add wrap_llm_adapter() after safe_initialize():
status = await safe_initialize(...)
if status["initialized"]:
    wrap_llm_adapter()  # ADD THIS LINE

From v0.3.x

See SDK_MIGRATION_v0.3.4_to_v0.4.0.md

Migration time: ~5 minutes
Breaking changes: None (backward compatible)

Troubleshooting

LLM nodes showing as SPAN instead of GENERATION

Symptom: LLM-calling nodes (classify_intent, global_planner, etc.) show as SPAN with no token data.

Cause: wrap_llm_adapter() was not called after initialization.

Fix:

status = await safe_initialize(...)
if status["initialized"]:
    wrap_llm_adapter()  # Must be called!

Verify in logs:

✅ LLMAdapterFactory wrapped for LLM telemetry capture
✅ LLM adapter wrapped for prompt capture (2 adapter(s))

Traces not appearing in SwisperStudio

  1. Check Redis connectivity in initialization logs
  2. Verify project_id matches your SwisperStudio project
  3. Ensure SwisperStudio backend consumer is running

Token counts showing as 0

This is normal for streaming responses where the LLM doesn't return usage data. Use get_structured_output() for accurate token counts.

License

MIT
