BitPulse
Automatically capture and send LLM trace data from GraphBit applications to your observability API endpoint.
Production-grade LLM observability for GraphBit workflows.
- Zero-config automatic tracing for LLM calls and workflows
- Captures prompts, responses, tokens, latency, errors
- Sends trace data to your observability API endpoint
Project Info
- Python: 3.10–3.13
- License: GraphBit Framework License
- PyPI: bitpulse
- PRs: Welcome via GitHub pull requests
Install
```shell
pip install bitpulse
```
Requires Python 3.10–3.13.
What It Does
BitPulse automatically captures detailed trace data from GraphBit LLM clients and workflows and submits it to your observability endpoint for monitoring, analytics, and debugging.
Key Features
- Automatic tracing for LLM calls and workflows
- Rich metadata: prompts, responses, tokens, latency, finish reasons, errors
- Tool-call detection for agent workflows
- Works with OpenAI and GraphBit internals
- Type-safe models (Pydantic) and robust async I/O
Quick Start
```python
import asyncio
import os

from graphbit import LlmClient, LlmConfig
from bitpulse import AutoTracer

async def main():
    # Create the tracer and wrap the GraphBit client so calls are captured.
    tracer = await AutoTracer.create()
    cfg = LlmConfig.openai(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o-mini")
    client = tracer.wrap_client(LlmClient(cfg), cfg)

    resp = await client.complete_full_async("What is GraphBit?", max_tokens=100)

    # Submit the captured spans to your observability endpoint.
    results = await tracer.send()
    print("sent:", results["sent"], "failed:", results["failed"])

asyncio.run(main())
```
Configuration
Set environment variables to configure external endpoints:
```shell
export BITPULSE_TRACING_API_KEY="your-api-key"
export BITPULSE_TRACEABLE_PROJECT="your-project-name"
# optional
export BITPULSE_TRACING_API_URL="https://your-api-endpoint.com/traces"
```
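At startup these variables can be read with the standard library; a sketch of that, using the variable names from the export lines above (the fallback URL is the same placeholder, not a real BitPulse default):

```python
import os

# Example values; in practice these come from the `export` lines above.
os.environ.setdefault("BITPULSE_TRACING_API_KEY", "your-api-key")
os.environ.setdefault("BITPULSE_TRACEABLE_PROJECT", "your-project-name")

config = {
    "api_key": os.environ["BITPULSE_TRACING_API_KEY"],
    "project": os.environ["BITPULSE_TRACEABLE_PROJECT"],
    # BITPULSE_TRACING_API_URL is optional; fall back to a placeholder.
    "api_url": os.getenv(
        "BITPULSE_TRACING_API_URL",
        "https://your-api-endpoint.com/traces",
    ),
}
```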
Links
- GitHub: InfinitiBit/bitpulse
- Issues: issue tracker
License
GraphBit Framework License. See LICENSE.md in the repository.
Trace Data Format Example
Example JSON payload sent to your observability endpoint:
```json
{
  "tracing_api_key": "your-api-key",
  "traceable_project_name": "your-project",
  "run_name": "LlmClient",
  "run_type": "llm",
  "status": "success",
  "input": "Hello, world!",
  "output": "Hi there!",
  "error": null,
  "start_time": "2025-01-01T00:00:00Z",
  "latency": 123.45,
  "tokens": 50,
  "metadata": {
    "model_name": "gpt-4o-mini",
    "provider": "openai",
    "input_tokens": 10,
    "output_tokens": 40,
    "finish_reason": "stop"
  }
}
```
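Since the payload is plain JSON, it is easy to assemble with the standard library. A sketch that builds the same shape (field names are taken from the example above; `build_trace_payload` is a hypothetical helper, not part of BitPulse):

```python
import json
from datetime import datetime, timezone

def build_trace_payload(prompt, completion, latency_ms, tokens_in, tokens_out):
    """Assemble a dict matching the trace format shown above."""
    return {
        "tracing_api_key": "your-api-key",
        "traceable_project_name": "your-project",
        "run_name": "LlmClient",
        "run_type": "llm",
        "status": "success",
        "input": prompt,
        "output": completion,
        "error": None,
        # ISO-8601 UTC timestamp, e.g. "2025-01-01T00:00:00Z".
        "start_time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "latency": latency_ms,
        "tokens": tokens_in + tokens_out,
        "metadata": {
            "model_name": "gpt-4o-mini",
            "provider": "openai",
            "input_tokens": tokens_in,
            "output_tokens": tokens_out,
            "finish_reason": "stop",
        },
    }

payload = build_trace_payload("Hello, world!", "Hi there!", 123.45, 10, 40)
body = json.dumps(payload)  # serialized form, ready to POST
```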
API Reference
High-level AutoTracer methods:
- AutoTracer.create(): Initialize the tracer
- wrap_client(llm_client, llm_config): Trace LLM client calls
- wrap_executor(executor, llm_config): Trace workflow execution
- send(): Convert and submit captured spans to your endpoint
- export(): Export captured traces locally
File details
Details for the file bitpulse-0.1.2.tar.gz.
File metadata
- Download URL: bitpulse-0.1.2.tar.gz
- Upload date:
- Size: 48.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4eaf2fbfa951a61b33ff88d064aba7b8eeb1d4c5cea8f7707fe60dcd15feb25c |
| MD5 | 374d1793b349a578b984a2bd2194fe72 |
| BLAKE2b-256 | 7bc833e5c9f45cc47bc330f1cb86cd7cdad0d0962bd10a775b5882abaa0401d1 |
File details
Details for the file bitpulse-0.1.2-py3-none-any.whl.
File metadata
- Download URL: bitpulse-0.1.2-py3-none-any.whl
- Upload date:
- Size: 55.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 71d405ae0d646a799e94fe0567b183834e9d8aa6da999d57d2bc8422477f41ae |
| MD5 | 91acbb8655b67d07d871a06e2d23a30f |
| BLAKE2b-256 | e3c47c86cb4046ba067b84b64dfb0443e2817bdbd23860257f211b6be3361b15 |