
Monte Carlo OpenTelemetry SDK

This library provides a Python SDK for tracing applications with OpenTelemetry for use with Monte Carlo's AI Observability solution.

To evaluate the effectiveness of AI agents, the first step is capturing the prompts sent to an LLM and the completions returned. The next challenge is categorizing these LLM calls, since different types of LLM calls require different evaluation approaches.

This SDK not only streamlines OpenTelemetry tracing setup but also makes it easy to add custom attributes to spans, enabling you to filter and select different subsets of spans for evaluation.

This is alpha software. The API is subject to change.

Installation

Install the SDK

Requires Python 3.10 or later.

$ pip install montecarlo-opentelemetry

Install the instrumentation package(s) for the AI libraries you want to trace.

The Monte Carlo SDK can work with existing instrumentation for AI libraries to capture traces automatically. Choose the instrumentation library that matches the library you are using.

# For Langchain/LangGraph
$ pip install "opentelemetry-instrumentation-langchain<=0.53.4"

# For OpenAI
$ pip install "opentelemetry-instrumentation-openai<=0.53.4"

See a selection of available instrumentation libraries below.

Quick Start

Set up Tracing in Your Application

# Import the Monte Carlo SDK.
import montecarlo_opentelemetry as mc

# Import the AI client library (Anthropic in this example.)
from anthropic import Anthropic

# Import the corresponding instrumentation library.
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor

# Create an Instrumentor object
anthropic_instrumentor = AnthropicInstrumentor()

# Set up tracing.
mc.setup(
    agent_name="my-agent",
    otlp_endpoint="http://localhost:4318/v1/traces",
    instrumentors=[anthropic_instrumentor],
)

# Use decorator to add a Monte Carlo workflow attribute.
@mc.trace_with_workflow("parent-function", "my-workflow")
def parent():
    child()

# Use decorator to add a Monte Carlo task attribute.
@mc.trace_with_task("child-function", "my-task")
def child():
    message = Anthropic().messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Hello world!",
            }
        ],
        model="claude-sonnet-4-20250514",
    )

Add Monte Carlo API credentials as environment variables

If you are sending OpenTelemetry traces to a Monte Carlo OTLP Ingestion endpoint, you will need to add your Monte Carlo API credentials as environment variables when running your application. See our docs on generating API keys.

There are two different ways to add your Monte Carlo API credentials as environment variables. Choose the option that best fits your needs.

  1. Use Monte Carlo environment variables. Similar to how environment variables are used for the Pycarlo SDK, you can set separate environment variables for your Monte Carlo API ID and API token.

    $ export MCD_DEFAULT_API_ID=<your-api-id>
    $ export MCD_DEFAULT_API_TOKEN=<your-api-token>
    
  2. Use the standard OpenTelemetry environment variable for headers. You can add both your Monte Carlo API ID and API token to the OTEL_EXPORTER_OTLP_HEADERS environment variable. The value of this environment variable should be a comma-separated list of key=value pairs, where the keys are: x-mcd-id and x-mcd-token.

    $ export OTEL_EXPORTER_OTLP_HEADERS="x-mcd-id=<your-api-id>,x-mcd-token=<your-api-token>"
    
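If you prefer to configure credentials from Python rather than the shell, the same header variable can be set via os.environ, as long as it happens before tracing is initialized. A minimal sketch (the placeholder values are illustrations, not real credentials):

```python
import os

# Equivalent to the shell export above. This must run before mc.setup()
# creates the OTLP exporter, since the exporter reads the variable at
# construction time.
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "x-mcd-id=<your-api-id>,x-mcd-token=<your-api-token>"
)
```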

Example

Example Application

To see how the Monte Carlo SDK can be used to add identifying attributes to spans, let's look at an example application that's slightly larger than the Quick Start one.

This is a simplified "Travel Assistant" agent that can make hotel and flight reservations.

# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

def travel_assistant():
    flight_assistant()
    hotel_assistant()

def flight_assistant():
    plan_flight()
    book_flight()

def plan_flight():
    pass

def book_flight():
    call_llm()

def hotel_assistant():
    search_for_hotel()
    book_hotel()

def search_for_hotel():
    call_llm()

def book_hotel():
    call_llm()

If we traced each function in this application, the structure of the trace would look like this:

travel_assistant
├── flight_assistant
│   ├── plan_flight
│   └── book_flight
│       └── call_llm
└── hotel_assistant
    ├── search_for_hotel
    │   └── call_llm
    └── book_hotel
        └── call_llm

To differentiate between types of LLM calls, it helps to add identifying attributes to spans. That way we can tell whether an LLM call was part of the workflow managed by the flight assistant, or part of the hotel booking task in the workflow managed by the hotel assistant.

Adding Attributes with the Monte Carlo SDK

Let's see how we can use the Monte Carlo SDK to enhance the tracing data with identifying attributes.

import montecarlo_opentelemetry as mc

# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

# Fake LLM library instrumentation that will automatically create spans
# each time call_llm() is called.
from example.LLMLibrary.instrumentation import LLMInstrumentor

mc.setup(
    agent_name="travel-assistant",
    otlp_endpoint="http://localhost:4318/v1/traces",
    instrumentors=[LLMInstrumentor()],
)

@mc.trace_with_tags(span_name="travel_assistant", tags=["travel", "v1"])
def travel_assistant():
    flight_assistant()
    hotel_assistant()

@mc.trace_with_workflow(span_name="flight_assistant", workflow_name="flight")
def flight_assistant():
    plan_flight()
    book_flight()

@mc.trace_with_task(span_name="plan_flight", task_name="plan")
def plan_flight():
    pass

@mc.trace_with_task(span_name="book_flight", task_name="book")
def book_flight():
    call_llm()

@mc.trace_with_workflow(span_name="hotel_assistant", workflow_name="hotel")
def hotel_assistant():
    search_for_hotel()
    book_hotel()

@mc.trace_with_task(span_name="search_for_hotel", task_name="search")
def search_for_hotel():
    call_llm()

# Arguments can also be passed positionally.
@mc.trace_with_task("book_hotel", "book")
def book_hotel():
    call_llm()

Because montecarlo.* attributes propagate from parent to child spans, the call_llm spans will contain all of the montecarlo.* attributes that were added to spans above them in the trace hierarchy.

For example, the call_llm span under book_flight will have not only the montecarlo.task = "book" attribute that we added directly, but also the montecarlo.workflow = "flight" attribute added on the flight_assistant span and the montecarlo.tags = "travel,v1" attribute added on the travel_assistant span.

That results in the following trace structure:

travel_assistant            <-- montecarlo.tags = "travel,v1"
│
├── flight_assistant        <-- montecarlo.workflow = "flight"
│   │                       <-- montecarlo.tags = "travel,v1"
│   │
│   ├── plan_flight         <-- montecarlo.task = "plan"
│   │                       <-- montecarlo.workflow = "flight"
│   │                       <-- montecarlo.tags = "travel,v1"
│   │
│   └── book_flight         <-- montecarlo.task = "book"
│       │                   <-- montecarlo.workflow = "flight"
│       │                   <-- montecarlo.tags = "travel,v1"
│       │
│       └── call_llm        <-- montecarlo.task = "book"
│                           <-- montecarlo.workflow = "flight"
│                           <-- montecarlo.tags = "travel,v1"
│
└── hotel_assistant         <-- montecarlo.workflow = "hotel"
    │                       <-- montecarlo.tags = "travel,v1"
    │
    ├── search_for_hotel    <-- montecarlo.task = "search"
    │   │                   <-- montecarlo.workflow = "hotel"
    │   │                   <-- montecarlo.tags = "travel,v1"
    │   │
    │   └── call_llm        <-- montecarlo.task = "search"
    │                       <-- montecarlo.workflow = "hotel"
    │                       <-- montecarlo.tags = "travel,v1"
    │
    └── book_hotel          <-- montecarlo.task = "book"
        │                   <-- montecarlo.workflow = "hotel"
        │                   <-- montecarlo.tags = "travel,v1"
        │
        └── call_llm        <-- montecarlo.task = "book"
                            <-- montecarlo.workflow = "hotel"
                            <-- montecarlo.tags = "travel,v1"

Tracing LLM Calls Manually

Typically, an instrumentation library will be used to automatically trace LLM calls. When that's not possible, the create_llm_span context manager can be used to create a span for the LLM call manually.

The create_llm_span context manager will set request-related attributes. Since provider, model, operation, and prompts are known before the LLM call is made, they should be passed to the context manager so that the appropriate span attributes can be added automatically. Response-related attributes need to be added with the helper functions after the LLM call.

It is possible to record a list of prompts as attributes that differs from the prompts actually sent to the LLM. If you have sensitive data that should not be recorded as span attributes, you can pass a redacted list of prompts to create_llm_span and then pass the un-redacted prompts to the LLM.

import montecarlo_opentelemetry as mc

# Fake LLM library that can make a call to an LLM.
from example.LLMLibrary import call_llm

prompts_to_record = [
    {"role": "system", "content": "You are a world-class greeter."},
    {"role": "user", "content": "Say hello to Bob."},
    {"role": "assistant", "content": "Hello Bob!"},
]

prompts_to_send = [
    {"role": "system", "content": "You are a world-class greeter."},
    {"role": "user", "content": "Say hello to Bob. Use SENSITIVE DATA."},
    {"role": "assistant", "content": "Hello Bob!"},
]

with mc.create_llm_span(
    span_name="example-span",
    provider="llm-provider",
    model="llm-model",
    operation="chat",
    prompts_to_record=prompts_to_record,
) as span:
    # Make LLM call.
    #
    # We are sending un-redacted prompts to the LLM. The LLM will see
    # "SENSITIVE DATA", but it won't be recorded as a span attribute.
    resp = call_llm(prompts_to_send)

    # Add response attributes to span.
    #
    # Assume that the response object has attributes like model, completions, etc.
    mc.add_llm_response_model(span, resp.model)
    mc.add_llm_completions(span, resp.completions)
    mc.add_llm_tokens(
        span,
        resp.prompt_tokens,
        resp.completion_tokens,
        resp.total_tokens,
        resp.cache_creation_input_tokens,
        resp.cache_read_input_tokens,
    )

License

Apache 2.0 - See the LICENSE for more information.

Security

See SECURITY.md for more information.

Available Instrumentation Packages

Note: Some packages have version constraints. See Version Compatibility below.

Version Compatibility

Newer versions of some instrumentation packages have adopted a different attribute format that will be supported in a future release.

Known incompatible versions:

Package                                    Last compatible version   First incompatible version
opentelemetry-instrumentation-anthropic    0.53.4                    0.54.0
opentelemetry-instrumentation-openai       0.53.4                    0.55.0
opentelemetry-instrumentation-langchain    0.53.4                    0.55.0
opentelemetry-instrumentation-crewai       0.55.4                    0.56.0
opentelemetry-instrumentation-bedrock      0.56.1                    0.57.0

For other instrumentation packages, pinning to version 0.57.0 or earlier is recommended, since they may adopt the new attribute format in future releases.
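One way to stay on compatible versions is to pin them in a requirements file, using the last compatible versions from the table above. A sketch (include only the instrumentation packages you actually use):

    # requirements.txt -- pins based on the compatibility table above
    montecarlo-opentelemetry
    opentelemetry-instrumentation-anthropic<=0.53.4
    opentelemetry-instrumentation-openai<=0.53.4
    opentelemetry-instrumentation-langchain<=0.53.4
    opentelemetry-instrumentation-crewai<=0.55.4
    opentelemetry-instrumentation-bedrock<=0.56.1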
