Respan integration for Haystack pipelines with tracing and logging support
Keywords AI Haystack Integration
Monitor and optimize your Haystack pipelines with Keywords AI's LLM observability platform.
Features
Gateway Mode
Route LLM calls through Keywords AI gateway:
- Automatic logging (zero config)
- Model fallbacks & retries
- Load balancing
- Cost optimization
- Rate limiting & caching
Tracing Mode
Capture full workflow execution:
- Multi-component pipelines
- Parent-child span relationships
- Timing per component
- Input/output tracking
- RAG + Agent workflows
Combined Mode (Recommended)
Use both together for:
- Gateway reliability + Tracing visibility
- Production-ready monitoring
Installation
pip install keywordsai-exporter-haystack
Quick Start
1. Get API Keys
- Keywords AI API Key
- OpenAI API Key (for examples)
2. Set Environment Variables
export KEYWORDSAI_API_KEY="your-keywords-ai-key"
export OPENAI_API_KEY="your-openai-key"
export HAYSTACK_CONTENT_TRACING_ENABLED="true" # For tracing mode
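If you prefer configuring inside Python (for example in a notebook), the same variables can be set before building the pipeline. A minimal sketch; the key values are placeholders, not real credentials:

```python
import os

# Placeholder credentials; setdefault keeps any values already exported in the shell.
os.environ.setdefault("KEYWORDSAI_API_KEY", "your-keywords-ai-key")
os.environ.setdefault("OPENAI_API_KEY", "your-openai-key")
# Required for tracing mode; Haystack reads this setting at run time.
os.environ.setdefault("HAYSTACK_CONTENT_TRACING_ENABLED", "true")
```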
Usage Examples
Gateway Mode (Auto-Logging)
Just replace OpenAIGenerator with KeywordsAIGenerator:
import os
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from keywordsai_exporter_haystack import KeywordsAIGenerator
# Create pipeline
pipeline = Pipeline()
pipeline.add_component("prompt", PromptBuilder(template="Tell me about {{topic}}."))
pipeline.add_component("llm", KeywordsAIGenerator(
    model="gpt-4o-mini",
    api_key=os.getenv("KEYWORDSAI_API_KEY")
))
pipeline.connect("prompt", "llm")
# Run
result = pipeline.run({"prompt": {"topic": "machine learning"}})
print(result["llm"]["replies"][0])
That's it! All LLM calls are automatically logged to Keywords AI with no additional code.
See: examples/gateway_example.py
Prompt Management
Use platform-managed prompts for centralized control:
import os
from haystack import Pipeline
from keywordsai_exporter_haystack import KeywordsAIGenerator
# Create prompt on platform: https://platform.keywordsai.co/platform/prompts
# Get your prompt_id from the platform
# Create pipeline with platform prompt (model config comes from platform)
pipeline = Pipeline()
pipeline.add_component("llm", KeywordsAIGenerator(
    prompt_id="1210b368ce2f4e5599d307bc591d9b7a",  # Your prompt ID
    api_key=os.getenv("KEYWORDSAI_API_KEY")
))
# Run with prompt variables
result = pipeline.run({
    "llm": {
        "prompt_variables": {
            "user_input": "The cat sat on the mat"
        }
    }
})
print("Response received successfully!")
print(f"Model: {result['llm']['meta'][0]['model']}")
print(f"Tokens: {result['llm']['meta'][0]['usage']['total_tokens']}")
Benefits:
- Update prompts without code changes
- Model config managed on platform (no hardcoding)
- Version control & rollback
- A/B testing
- Team collaboration
See: examples/prompt_example.py
Tracing Mode (Workflow Monitoring)
Add KeywordsAIConnector to capture the entire pipeline:
import os
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from keywordsai_exporter_haystack import KeywordsAIConnector
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"
# Create pipeline with tracing
pipeline = Pipeline()
pipeline.add_component("tracer", KeywordsAIConnector("My Workflow"))
pipeline.add_component("prompt", PromptBuilder(template="Tell me about {{topic}}."))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))
pipeline.connect("prompt", "llm")
# Run
result = pipeline.run({"prompt": {"topic": "artificial intelligence"}})
print(result["llm"]["replies"][0])
print(f"\nTrace URL: {result['tracer']['trace_url']}")
Dashboard shows:
- Pipeline (root span)
- PromptBuilder (template processing)
- LLM (generation with tokens + cost)
See: examples/tracing_example.py
Combined Mode (Recommended for Production)
Use gateway, prompt management, and tracing together for the full stack:
import os
from haystack import Pipeline
from keywordsai_exporter_haystack import KeywordsAIConnector, KeywordsAIGenerator
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"
# Create pipeline with gateway, prompt management, and tracing
pipeline = Pipeline()
pipeline.add_component("tracer", KeywordsAIConnector("Full Stack: Gateway + Prompt + Tracing"))
pipeline.add_component("llm", KeywordsAIGenerator(
    prompt_id="1210b368ce2f4e5599d307bc591d9b7a",  # Platform-managed prompt
    api_key=os.getenv("KEYWORDSAI_API_KEY")
))
# Run with prompt variables
result = pipeline.run({
    "llm": {
        "prompt_variables": {
            "user_input": "She sells seashells by the seashore"
        }
    }
})
print("Response received successfully!")
print(f"Trace URL: {result['tracer']['trace_url']}")
You get:
- Gateway routing with fallbacks, cost tracking, and reliability
- Platform prompts managed centrally (no hardcoded prompts/models)
- Full workflow trace with all components and timing
See: examples/combined_example.py
What Gets Logged
Gateway Mode
- Model used
- Prompt & completion
- Tokens & cost
- Latency
- Request metadata
Tracing Mode
Each span includes:
- Component name & type
- Input data
- Output data
- Timing (latency)
- Parent-child relationships
For LLM spans, additionally:
- Model name
- Token counts
- Calculated cost (auto-computed)
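To illustrate how a cost figure can be derived from the token counts on an LLM span, here is a rough sketch. The rates below are made-up placeholders, not Keywords AI's actual pricing, which is model-specific and maintained on the platform:

```python
# Hypothetical per-1M-token rates (USD); real pricing is maintained by the platform.
RATES = {"gpt-4o-mini": {"prompt": 0.15, "completion": 0.60}}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough cost estimate from token usage, mirroring the kind of
    per-span calculation an observability platform performs."""
    r = RATES[model]
    return (prompt_tokens * r["prompt"] + completion_tokens * r["completion"]) / 1_000_000

cost = estimate_cost("gpt-4o-mini", prompt_tokens=1200, completion_tokens=300)
# 1200 * 0.15 + 300 * 0.60 = 360 micro-dollars -> 0.00036
```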
View Your Data
All logs and traces appear in your Keywords AI dashboard:
Dashboard: https://platform.keywordsai.co/logs
- Logs view: Individual LLM calls
- Traces view: Full pipeline workflows with tree visualization
API Reference
KeywordsAIGenerator
Gateway component for LLM calls.
KeywordsAIGenerator(
    model: Optional[str] = None,        # Model name (e.g., "gpt-4o-mini"); optional when prompt_id is set
    api_key: Optional[str] = None,      # Keywords AI API key (defaults to KEYWORDSAI_API_KEY env var)
    base_url: Optional[str] = None,     # API base URL (defaults to https://api.keywordsai.co)
    prompt_id: Optional[str] = None,    # Platform prompt ID for prompt management
    generation_kwargs: Optional[Dict] = None  # Extra generation parameters passed to the model
)
Replaces: OpenAIGenerator with gateway routing
Note: When using prompt_id, model config comes from the platform - no need to specify model
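The note above implies the generator needs at least one of model or prompt_id. A hypothetical check illustrating that contract (resolve_model_config is not part of the package API, just a sketch of the rule):

```python
from typing import Optional

def resolve_model_config(model: Optional[str], prompt_id: Optional[str]) -> str:
    """Illustrative only: with a prompt_id the platform supplies the model
    config; otherwise a model name must be given explicitly."""
    if prompt_id is not None:
        return "platform-managed"  # config fetched from the platform prompt
    if model is not None:
        return model               # explicit model, e.g. "gpt-4o-mini"
    raise ValueError("Provide either 'model' or 'prompt_id'")
```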
KeywordsAIConnector
Tracing component for workflow monitoring.
KeywordsAIConnector(
    name: str,                      # Pipeline name for dashboard
    api_key: Optional[str] = None,  # Keywords AI API key (defaults to KEYWORDSAI_API_KEY env var)
    base_url: Optional[str] = None, # API base URL (defaults to https://api.keywordsai.co)
    metadata: Optional[Dict] = None # Custom metadata for all spans
)
Returns: {"name": str, "trace_url": str}
Requires: HAYSTACK_CONTENT_TRACING_ENABLED=true environment variable
Examples
Run the examples:
# Set environment variables
export KEYWORDSAI_API_KEY="your-key"
export OPENAI_API_KEY="your-openai-key"
export HAYSTACK_CONTENT_TRACING_ENABLED="true"
# Gateway mode (auto-logging)
python examples/gateway_example.py
# Tracing mode (workflow monitoring)
python examples/tracing_example.py
# Prompt management (platform prompts)
python examples/prompt_example.py
# Combined mode (gateway + prompt + tracing)
python examples/combined_example.py
Requirements
- Python 3.9+
- haystack-ai >= 2.0.0
- requests >= 2.31.0
Support
- Documentation: https://docs.keywordsai.co/
- Dashboard: https://platform.keywordsai.co/
- Issues: GitHub Issues
License
MIT License - see LICENSE file for details.
File details
Details for the file respan_exporter_haystack-1.0.1.tar.gz.
File metadata
- Download URL: respan_exporter_haystack-1.0.1.tar.gz
- Upload date:
- Size: 14.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.12.8 Darwin/25.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 30ef462426042cffdefeac5fc9e76168096520d28aa6d8ab9d6d4446d0a04af6 |
| MD5 | 0ad89f04305193953b2c981a87bfba17 |
| BLAKE2b-256 | 4f3059cfcd6a87ab28153f973303562d19666155ba44e9ac3c1ba47c08dde1b6 |
File details
Details for the file respan_exporter_haystack-1.0.1-py3-none-any.whl.
File metadata
- Download URL: respan_exporter_haystack-1.0.1-py3-none-any.whl
- Upload date:
- Size: 15.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.12.8 Darwin/25.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ae572c0d201ee7688f5e9a12b2ce35a966c90400f255bb7680d45e5aad82805d |
| MD5 | caf095dbeeb35e479f8f877b02f12b7e |
| BLAKE2b-256 | 4c3642923d0a0f20b5d80a3676bbe29d49b554b0f760fdfc576cad1d34bfbaf0 |