
Azure Online Experimentation client library for Python

This package contains the Azure Online Experimentation client library, used to interact with Microsoft.OnlineExperimentation/workspaces resources.

Getting started

Install the package

python -m pip install azure-onlineexperimentation

Prerequisites

  • An Azure subscription.
  • A [Microsoft.OnlineExperimentation/workspaces][az_exp_workspace] resource.

Create and authenticate the client

The Azure Online Experimentation client library initialization requires two parameters:

  • the endpoint URL of your Online Experimentation workspace, and
  • a credential implementing TokenCredential, such as DefaultAzureCredential from the azure-identity package.

To construct a synchronous client:

import os
from azure.identity import DefaultAzureCredential
from azure.onlineexperimentation import OnlineExperimentationClient

# Create a client with your Online Experimentation workspace endpoint and credentials
endpoint = os.environ["AZURE_ONLINEEXPERIMENTATION_ENDPOINT"]
client = OnlineExperimentationClient(endpoint, DefaultAzureCredential())
print(f"Client initialized with endpoint: {endpoint}")

To construct an asynchronous client, instead import OnlineExperimentationClient from the azure.onlineexperimentation.aio namespace and DefaultAzureCredential from azure.identity.aio:

import os
from azure.identity.aio import DefaultAzureCredential
from azure.onlineexperimentation.aio import OnlineExperimentationClient

# Create a client with your Online Experimentation workspace endpoint and credentials
endpoint = os.environ["AZURE_ONLINEEXPERIMENTATION_ENDPOINT"]
client = OnlineExperimentationClient(endpoint, DefaultAzureCredential())
print(f"Client initialized with endpoint: {endpoint}")

Key concepts

Online Experimentation Workspace

[Microsoft.OnlineExperimentation/workspaces][az_exp_workspace] Azure resources work in conjunction with Azure App Configuration and Azure Monitor. The Online Experimentation workspace handles management of metrics definitions and their continuous computation to monitor and evaluate experiment results.

Experiment Metrics

Metrics are used to measure the impact of your online experiments. See the samples for how to create and manage various types of experiment metrics.
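For intuition, the behavior of a user-rate metric (the kind defined in the example below: the share of users who have a start event followed by a qualifying end event) can be sketched in plain Python. This sketch is illustrative only; the helper, event tuples, and filter callable are not part of the SDK, where metric computation happens service-side from your Azure Monitor data:

```python
# Sketch: a user-rate metric counts users, not events.
# Numerator: users with a qualifying end event at or after their start event.
# Denominator: users with a start event at all.

def user_rate(events, start_name, end_name, end_filter):
    """events: iterable of (user_id, timestamp, event_name, properties) tuples."""
    starts = {}   # user_id -> earliest start-event timestamp
    hits = set()  # users who satisfied start followed by a filtered end event
    for user, ts, name, props in sorted(events, key=lambda e: e[1]):
        if name == start_name:
            starts.setdefault(user, ts)
        elif name == end_name and user in starts and ts >= starts[user] and end_filter(props):
            hits.add(user)
    return len(hits) / len(starts) if starts else 0.0

events = [
    ("u1", 1, "ResponseReceived", {}),
    ("u1", 2, "Purchase", {"Revenue": 250}),  # qualifies: Revenue > 100
    ("u2", 1, "ResponseReceived", {}),
    ("u2", 2, "Purchase", {"Revenue": 20}),   # fails the Revenue filter
    ("u3", 1, "Purchase", {"Revenue": 500}),  # no start event: excluded entirely
]

rate = user_rate(events, "ResponseReceived", "Purchase", lambda p: p["Revenue"] > 100)
print(rate)  # 0.5 -> u1 qualifies out of denominator {u1, u2}
```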

Troubleshooting

Errors raised by the service surface as HttpResponseError exceptions from azure-core; the error code and message returned with the response describe how to resolve the problem.

Examples

This example walks through the experiment metric management lifecycle. To run the example:

  • Set AZURE_ONLINEEXPERIMENTATION_ENDPOINT environment variable to the endpoint property value (URL) from a [Microsoft.OnlineExperimentation/workspaces][az_exp_workspace] resource.
  • Enable DefaultAzureCredential by running az login or Connect-AzAccount, see documentation for details and troubleshooting.
import os
import random
import json
from azure.core import MatchConditions
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.onlineexperimentation import OnlineExperimentationClient
from azure.onlineexperimentation.models import (
    ExperimentMetric,
    LifecycleStage,
    DesiredDirection,
    UserRateMetricDefinition,
    ObservedEvent,
)

# [Step 1] Initialize the SDK client
# The endpoint URL from the Microsoft.OnlineExperimentation/workspaces resource
endpoint = os.environ.get("AZURE_ONLINEEXPERIMENTATION_ENDPOINT", "<endpoint-not-set>")
credential = DefaultAzureCredential()

print(f"AZURE_ONLINEEXPERIMENTATION_ENDPOINT is {endpoint}")

client = OnlineExperimentationClient(endpoint=endpoint, credential=credential)

# [Step 2] Define the experiment metric
example_metric = ExperimentMetric(
    lifecycle=LifecycleStage.ACTIVE,
    display_name="% users with LLM interaction who made a high-value purchase",
    description="Percentage of users who received a response from the LLM and then made a purchase of $100 or more",
    categories=["Business"],
    desired_direction=DesiredDirection.INCREASE,
    definition=UserRateMetricDefinition(
        start_event=ObservedEvent(event_name="ResponseReceived"),
        end_event=ObservedEvent(event_name="Purchase", filter="Revenue > 100"),
    ),
)

# [Optional][Step 2a] Validate the metric - checks for input errors without persisting anything
print("Checking if the experiment metric definition is valid...")
print(json.dumps(example_metric.as_dict(), indent=2))

try:
    validation_result = client.validate_metric(example_metric)

    print(f"Experiment metric definition valid: {validation_result.is_valid}.")
    for detail in validation_result.diagnostics or []:
        # Inspect details of why the metric definition was rejected as invalid
        print(f"- {detail.code}: {detail.message}")

    # [Step 3] Create the experiment metric
    example_metric_id = f"sample_metric_id_{random.randint(10000, 20000)}"

    print(f"Creating the experiment metric {example_metric_id}...")
    create_response = client.create_or_update_metric(
        experiment_metric_id=example_metric_id,
        resource=example_metric,
        match_condition=MatchConditions.IfMissing,  # sends If-None-Match: * (create only if absent)
    )

    print(f"Experiment metric {create_response.id} created, etag: {create_response.etag}.")

    # [Step 4] Deactivate the experiment metric and update the description
    updated_metric = {
        "lifecycle": LifecycleStage.INACTIVE,  # pauses computation of this metric
        "description": "No longer need to compute this.",
    }

    update_response = client.create_or_update_metric(
        experiment_metric_id=example_metric_id,
        resource=updated_metric,
        etag=create_response.etag,
        match_condition=MatchConditions.IfNotModified,  # sends If-Match: <etag> (update only if unchanged)
    )

    print(f"Updated metric: {update_response.id}, etag: {update_response.etag}.")

    # [Step 5] Delete the experiment metric
    client.delete_metric(
        experiment_metric_id=example_metric_id,
        etag=update_response.etag,
        match_condition=MatchConditions.IfNotModified,  # delete only if unchanged
    )

    print(f"Deleted metric: {example_metric_id}.")

except HttpResponseError as error:
    print(f"The operation failed with error: {error}")
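The ETag preconditions used in steps 3-5 (If-None-Match: * to create only when absent; If-Match to update or delete only an unmodified resource) follow standard HTTP conditional-request semantics. A minimal in-memory sketch, not part of the SDK, behaves like this:

```python
import uuid

class PreconditionFailed(Exception):
    """Stands in for an HTTP 412 Precondition Failed response."""

class EtagStore:
    """Toy store illustrating If-None-Match: * and If-Match semantics."""

    def __init__(self):
        self._items = {}  # item_id -> (etag, value)

    def create(self, item_id, value):
        # If-None-Match: * -> reject if the item already exists
        if item_id in self._items:
            raise PreconditionFailed("item already exists")
        etag = uuid.uuid4().hex
        self._items[item_id] = (etag, value)
        return etag

    def update(self, item_id, value, if_match):
        # If-Match: <etag> -> reject if someone else modified the item
        current_etag, _ = self._items[item_id]
        if if_match != current_etag:
            raise PreconditionFailed("etag mismatch")
        new_etag = uuid.uuid4().hex  # every successful write yields a new etag
        self._items[item_id] = (new_etag, value)
        return new_etag

    def delete(self, item_id, if_match):
        current_etag, _ = self._items[item_id]
        if if_match != current_etag:
            raise PreconditionFailed("etag mismatch")
        del self._items[item_id]

store = EtagStore()
etag1 = store.create("metric-1", {"lifecycle": "Active"})
etag2 = store.update("metric-1", {"lifecycle": "Inactive"}, if_match=etag1)
store.delete("metric-1", if_match=etag2)
```

Passing a stale etag to update or delete fails the precondition, which is how the service protects you from overwriting a metric that another writer changed in the meantime.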

Next steps

Have a look at the samples folder, containing fully runnable Python code for synchronous and asynchronous clients.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
