OpenTelemetry SDK for Monte Carlo
This library provides a Python SDK for tracing applications with OpenTelemetry for use with Monte Carlo's AI Observability solution.
To evaluate the effectiveness of AI agents, the first step is capturing the prompts sent to an LLM and the completions returned. The next challenge is categorizing these LLM calls, since different types of LLM calls require different evaluation approaches.
This SDK not only streamlines OpenTelemetry tracing setup, but it also makes it easy to add custom attributes to spans, enabling you to filter and select different subsets of spans for evaluation.
This is alpha software. The API is subject to change.
Installation
Install the SDK
Requires Python 3.10 or later.
$ pip install montecarlo-opentelemetry
Install the instrumentation package(s) for the AI libraries you want to trace.
The Monte Carlo SDK can work with existing instrumentation for AI libraries to capture traces automatically. Choose the instrumentation library that matches the library you are using.
# For Langchain/LangGraph
$ pip install "opentelemetry-instrumentation-langchain<=0.53.4"
# For OpenAI
$ pip install "opentelemetry-instrumentation-openai<=0.53.4"
See a selection of available instrumentation libraries below.
Quick Start
Set up Tracing in Your Application
# Import the Monte Carlo SDK.
import montecarlo_opentelemetry as mc

# Import the AI client library (Anthropic in this example).
from anthropic import Anthropic

# Import the corresponding instrumentation library.
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor

# Create an Instrumentor object.
anthropic_instrumentor = AnthropicInstrumentor()

# Set up tracing.
mc.setup(
    agent_name="my-agent",
    otlp_endpoint="http://localhost:4318/v1/traces",
    instrumentors=[anthropic_instrumentor],
)

# Use decorator to add a Monte Carlo workflow attribute.
@mc.trace_with_workflow("parent-function", "my-workflow")
def parent():
    child()

# Use decorator to add a Monte Carlo task attribute.
@mc.trace_with_task("child-function", "my-task")
def child():
    message = Anthropic().messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Hello world!",
            }
        ],
        model="claude-sonnet-4-20250514",
    )
Add MonteCarlo API credentials as environment variables
If you are sending OpenTelemetry traces to a Monte Carlo OTLP Ingestion endpoint, you will need to add your Monte Carlo API credentials as environment variables when running your application. See our docs on generating API keys.
There are two different ways to add your Monte Carlo API credentials as environment variables. Choose the option that best fits your needs.
1. Use Monte Carlo environment variables. Similar to how environment variables are used for the Pycarlo SDK, you can set separate environment variables for your Monte Carlo API ID and API token.

   $ export MCD_DEFAULT_API_ID=<your-api-id>
   $ export MCD_DEFAULT_API_TOKEN=<your-api-token>

2. Use the standard OpenTelemetry environment variable for headers. You can add both your Monte Carlo API ID and API token to the OTEL_EXPORTER_OTLP_HEADERS environment variable. The value of this environment variable should be a comma-separated list of key=value pairs, where the keys are x-mcd-id and x-mcd-token.

   $ export OTEL_EXPORTER_OTLP_HEADERS="x-mcd-id=<your-api-id>,x-mcd-token=<your-api-token>"
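To illustrate the header format (this parser is just a sketch of how an OTLP exporter interprets the variable, not part of the SDK):

```python
# Illustrative only: parse the OTEL_EXPORTER_OTLP_HEADERS format
# (a comma-separated list of key=value pairs) into a dict.
def parse_otlp_headers(raw: str) -> dict[str, str]:
    headers = {}
    for pair in raw.split(","):
        key, _, value = pair.strip().partition("=")
        headers[key] = value
    return headers

headers = parse_otlp_headers("x-mcd-id=my-id,x-mcd-token=my-token")
print(headers["x-mcd-id"])     # my-id
print(headers["x-mcd-token"])  # my-token
```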
Example
Example Application
To see how the Monte Carlo SDK can be used to add identifying attributes to spans, let's look at an example application that's slightly larger than the Quick Start one.
This is a simplified "Travel Assistant" agent that can make hotel and flight reservations.
# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

def travel_assistant():
    flight_assistant()
    hotel_assistant()

def flight_assistant():
    plan_flight()
    book_flight()

def plan_flight():
    pass

def book_flight():
    call_llm()

def hotel_assistant():
    search_for_hotel()
    book_hotel()

def search_for_hotel():
    call_llm()

def book_hotel():
    call_llm()
If we traced each function in this application, the structure of the trace would look like this:
travel_assistant
├── flight_assistant
│ ├── plan_flight
│ └── book_flight
│ └── call_llm
└── hotel_assistant
├── search_for_hotel
│ └── call_llm
└── book_hotel
└── call_llm
In order to differentiate between different types of LLM calls, it would be helpful to add identifying attributes to spans. That way we could tell if an LLM call was part of a workflow managed by the flight assistant, or if it was part of the hotel booking task in the workflow managed by the hotel assistant.
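To make that concrete, here is an illustrative sketch (not an SDK API) of how spans carrying montecarlo.* attributes could be filtered into subsets for evaluation:

```python
# Hypothetical exported span data: each span is a dict with its
# montecarlo.* attributes. The structure is illustrative only.
spans = [
    {"name": "call_llm", "attributes": {"montecarlo.workflow": "flight", "montecarlo.task": "book"}},
    {"name": "call_llm", "attributes": {"montecarlo.workflow": "hotel", "montecarlo.task": "search"}},
    {"name": "call_llm", "attributes": {"montecarlo.workflow": "hotel", "montecarlo.task": "book"}},
]

def select_spans(spans, **attrs):
    """Return spans whose montecarlo.* attributes match all given values."""
    return [
        s for s in spans
        if all(s["attributes"].get(f"montecarlo.{k}") == v for k, v in attrs.items())
    ]

# Select only LLM calls made by the hotel booking task.
hotel_bookings = select_spans(spans, workflow="hotel", task="book")
print(len(hotel_bookings))  # 1
```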
Adding Attributes with the Monte Carlo SDK
Let's see how we can use the Monte Carlo SDK to enhance the tracing data with identifying attributes.
import montecarlo_opentelemetry as mc

# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

# Fake LLM library instrumentation that will automatically create spans
# each time call_llm() is called.
from example.LLMLibrary.instrumentation import LLMInstrumentor

mc.setup(
    agent_name="travel-assistant",
    otlp_endpoint="http://localhost:4318/v1/traces",
    instrumentors=[LLMInstrumentor()],
)

@mc.trace_with_tags(span_name="travel_assistant", tags=["travel", "v1"])
def travel_assistant():
    flight_assistant()
    hotel_assistant()

@mc.trace_with_workflow(span_name="flight_assistant", workflow_name="flight")
def flight_assistant():
    plan_flight()
    book_flight()

@mc.trace_with_task(span_name="plan_flight", task_name="plan")
def plan_flight():
    pass

@mc.trace_with_task(span_name="book_flight", task_name="book")
def book_flight():
    call_llm()

@mc.trace_with_workflow(span_name="hotel_assistant", workflow_name="hotel")
def hotel_assistant():
    search_for_hotel()
    book_hotel()

@mc.trace_with_task(span_name="search_for_hotel", task_name="search")
def search_for_hotel():
    call_llm()

# Arguments can also be passed positionally.
@mc.trace_with_task("book_hotel", "book")
def book_hotel():
    call_llm()
Because montecarlo.* attributes propagate from parent to child spans, the call_llm spans will contain all of the montecarlo.* attributes that were added to spans above them in the trace hierarchy.
For example, the call_llm span for book_flight will not only have the montecarlo.task = "book" attribute that we added directly, but also the montecarlo.workflow = "flight" attribute added on the flight_assistant span and the montecarlo.tags = "travel,v1" attribute added on the travel_assistant span.
That results in the following trace structure:
travel_assistant         <-- montecarlo.tags = "travel,v1"
│
├── flight_assistant     <-- montecarlo.workflow = "flight"
│   │                    <-- montecarlo.tags = "travel,v1"
│   │
│   ├── plan_flight      <-- montecarlo.task = "plan"
│   │                    <-- montecarlo.workflow = "flight"
│   │                    <-- montecarlo.tags = "travel,v1"
│   │
│   └── book_flight      <-- montecarlo.task = "book"
│       │                <-- montecarlo.workflow = "flight"
│       │                <-- montecarlo.tags = "travel,v1"
│       │
│       └── call_llm     <-- montecarlo.task = "book"
│                        <-- montecarlo.workflow = "flight"
│                        <-- montecarlo.tags = "travel,v1"
│
└── hotel_assistant      <-- montecarlo.workflow = "hotel"
    │                    <-- montecarlo.tags = "travel,v1"
    │
    ├── search_for_hotel <-- montecarlo.task = "search"
    │   │                <-- montecarlo.workflow = "hotel"
    │   │                <-- montecarlo.tags = "travel,v1"
    │   │
    │   └── call_llm     <-- montecarlo.task = "search"
    │                    <-- montecarlo.workflow = "hotel"
    │                    <-- montecarlo.tags = "travel,v1"
    │
    └── book_hotel       <-- montecarlo.task = "book"
        │                <-- montecarlo.workflow = "hotel"
        │                <-- montecarlo.tags = "travel,v1"
        │
        └── call_llm     <-- montecarlo.task = "book"
                         <-- montecarlo.workflow = "hotel"
                         <-- montecarlo.tags = "travel,v1"
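The propagation rule itself can be sketched in a few lines. This is an illustration of the behavior described above, not the SDK's actual implementation: a child span inherits every montecarlo.* attribute from its parent, with its own attributes taking precedence.

```python
# Illustrative sketch of montecarlo.* attribute propagation
# (not the SDK's implementation).
def child_attributes(parent_attrs: dict, own_attrs: dict) -> dict:
    # Inherit only the montecarlo.* attributes; the child's own
    # attributes override any inherited keys.
    inherited = {k: v for k, v in parent_attrs.items() if k.startswith("montecarlo.")}
    return {**inherited, **own_attrs}

travel = {"montecarlo.tags": "travel,v1"}
flight = child_attributes(travel, {"montecarlo.workflow": "flight"})
book = child_attributes(flight, {"montecarlo.task": "book"})
llm = child_attributes(book, {})  # call_llm span adds nothing of its own
print(llm)
# {'montecarlo.tags': 'travel,v1', 'montecarlo.workflow': 'flight', 'montecarlo.task': 'book'}
```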
Tracing LLM Calls Manually
Typically, an instrumentation library will be used to automatically trace LLM calls. When that's not possible, the create_llm_span context manager can be used to create a span for the LLM call manually.
The create_llm_span context manager will set request-related attributes. Since provider, model, operation, and prompts are known before the LLM call is made, they should be passed to the context manager so that the appropriate span attributes can be added automatically. Response-related attributes need to be added with the helper functions after the LLM call.
It is possible to record a list of prompts as attributes that differs from the prompts actually sent to the LLM. If you have sensitive data that should not be recorded as span attributes, you can pass a redacted list of prompts to create_llm_span and then pass the un-redacted prompts to the LLM.
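One way to build such a redacted copy is a small helper like the following (hypothetical, not part of the SDK; the marker string is an assumption for illustration):

```python
# Hypothetical helper (not part of the SDK): produce a copy of the
# prompts with a sensitive marker replaced, suitable for recording
# as span attributes while the originals are still sent to the LLM.
def redact_prompts(prompts: list[dict], marker: str = "SENSITIVE DATA") -> list[dict]:
    return [
        {**p, "content": p["content"].replace(marker, "[REDACTED]")}
        for p in prompts
    ]

prompts_to_send = [{"role": "user", "content": "Say hello to Bob. Use SENSITIVE DATA."}]
prompts_to_record = redact_prompts(prompts_to_send)
print(prompts_to_record[0]["content"])  # Say hello to Bob. Use [REDACTED].
```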
import montecarlo_opentelemetry as mc

# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

prompts_to_record = [
    {"role": "system", "content": "You are a world-class greeter."},
    {"role": "user", "content": "Say hello to Bob."},
    {"role": "assistant", "content": "Hello Bob!"},
]

prompts_to_send = [
    {"role": "system", "content": "You are a world-class greeter."},
    {"role": "user", "content": "Say hello to Bob. Use SENSITIVE DATA."},
    {"role": "assistant", "content": "Hello Bob!"},
]

with mc.create_llm_span(
    span_name="example-span",
    provider="llm-provider",
    model="llm-model",
    operation="chat",
    prompts_to_record=prompts_to_record,
) as span:
    # Make LLM call.
    #
    # We are sending un-redacted prompts to the LLM. The LLM will see
    # "SENSITIVE DATA", but it won't be recorded as a span attribute.
    resp = call_llm(prompts_to_send)

    # Add response attributes to span.
    #
    # Assume that the response object has attributes like model, completions, etc.
    mc.add_llm_response_model(span, resp.model)
    mc.add_llm_completions(span, resp.completions)
    mc.add_llm_tokens(
        span,
        resp.prompt_tokens,
        resp.completion_tokens,
        resp.total_tokens,
        resp.cache_creation_input_tokens,
        resp.cache_read_input_tokens,
    )
License
Apache 2.0 - See the LICENSE for more information.
Security
See SECURITY.md for more information.
Available Instrumentation Packages
Note: Some packages have version constraints. See Version Compatibility below.
- opentelemetry-instrumentation-agno
- opentelemetry-instrumentation-alephalpha
- opentelemetry-instrumentation-anthropic
- opentelemetry-instrumentation-bedrock
- opentelemetry-instrumentation-chromadb
- opentelemetry-instrumentation-cohere
- opentelemetry-instrumentation-crewai
- opentelemetry-instrumentation-google-generativeai
- opentelemetry-instrumentation-groq
- opentelemetry-instrumentation-haystack
- opentelemetry-instrumentation-lancedb
- opentelemetry-instrumentation-langchain
- opentelemetry-instrumentation-llamaindex
- opentelemetry-instrumentation-marqo
- opentelemetry-instrumentation-mcp
- opentelemetry-instrumentation-milvus
- opentelemetry-instrumentation-mistralai
- opentelemetry-instrumentation-ollama
- opentelemetry-instrumentation-openai
- opentelemetry-instrumentation-openai-agents
- opentelemetry-instrumentation-pinecone
- opentelemetry-instrumentation-qdrant
- opentelemetry-instrumentation-replicate
- opentelemetry-instrumentation-sagemaker
- opentelemetry-instrumentation-together
- opentelemetry-instrumentation-transformers
- opentelemetry-instrumentation-vertexai
- opentelemetry-instrumentation-voyageai
- opentelemetry-instrumentation-watsonx
- opentelemetry-instrumentation-weaviate
- opentelemetry-instrumentation-writer
Version Compatibility
Newer versions of some instrumentation packages have adopted a different attribute format that will be supported in a future release.
Known incompatible versions:
| Package | Last compatible version | First incompatible version |
|---|---|---|
| opentelemetry-instrumentation-anthropic | 0.53.4 | 0.54.0 |
| opentelemetry-instrumentation-openai | 0.53.4 | 0.55.0 |
| opentelemetry-instrumentation-langchain | 0.53.4 | 0.55.0 |
| opentelemetry-instrumentation-crewai | 0.55.4 | 0.56.0 |
| opentelemetry-instrumentation-bedrock | 0.56.1 | 0.57.0 |
For other instrumentation packages, version 0.57.0 or earlier is recommended, since those packages may adopt the new attribute format in future releases.
Project details
File details
Details for the file montecarlo_opentelemetry-0.3.1.tar.gz.
File metadata
- Download URL: montecarlo_opentelemetry-0.3.1.tar.gz
- Upload date:
- Size: 10.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e4d268939e3c7a4349b8245c0254b2e1e451051e58805d23c8728db89a865a45 |
| MD5 | 267eff709afc992a0ecad90357bc0add |
| BLAKE2b-256 | 0588ab8345617d1a464e8004578d41f32a2686b8c1408a78581bc53deb443478 |
File details
Details for the file montecarlo_opentelemetry-0.3.1-py3-none-any.whl.
File metadata
- Download URL: montecarlo_opentelemetry-0.3.1-py3-none-any.whl
- Upload date:
- Size: 11.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4019efe221c7351bc7a58f7ffffacd7f063d6bfd4464ebc3402a56626ec2befa |
| MD5 | b6df080f3207f07a95fb052acdd505f6 |
| BLAKE2b-256 | a167d6ac56f90804407d3a67c67f9ad733766e177983e117cdeecaf19e498558 |