Microsoft Azure Evaluation Library for Python

Project description

Azure AI Evaluation client library for Python

Use the Azure AI Evaluation SDK to assess the performance of your generative AI applications. An application's generations are measured quantitatively with mathematical metrics as well as AI-assisted quality and safety metrics. Metrics are defined as evaluators. Built-in or custom evaluators can provide comprehensive insights into the application's capabilities and limitations.

Use Azure AI Evaluation SDK to:

  • Evaluate existing data from generative AI applications
  • Evaluate generative AI applications
  • Evaluate by generating mathematical, AI-assisted quality and safety metrics

The Azure AI Evaluation SDK provides the following to evaluate generative AI applications:

  • Evaluators - Generate scores individually or when used together with the evaluate API.
  • Evaluate API - A Python API to evaluate a dataset or application using built-in or custom evaluators.

Source code | Package (PyPI) | API reference documentation | Product documentation | Samples

Getting started

Prerequisites

Install the package

Install the Azure AI Evaluation SDK for Python with pip:

pip install azure-ai-evaluation

If you want to track results in Azure AI Studio, install the remote extra:

pip install azure-ai-evaluation[remote]

Key concepts

Evaluators

Evaluators are custom or prebuilt classes or functions that are designed to measure the quality of the outputs from language models or generative AI applications.

Built-in evaluators

Built-in evaluators are out-of-the-box evaluators provided by Microsoft:

  • Performance and quality (AI-assisted): GroundednessEvaluator, RelevanceEvaluator, CoherenceEvaluator, FluencyEvaluator, SimilarityEvaluator, RetrievalEvaluator
  • Performance and quality (NLP): F1ScoreEvaluator, RougeScoreEvaluator, GleuScoreEvaluator, BleuScoreEvaluator, MeteorScoreEvaluator
  • Risk and safety (AI-assisted): ViolenceEvaluator, SexualEvaluator, SelfHarmEvaluator, HateUnfairnessEvaluator, IndirectAttackEvaluator, ProtectedMaterialEvaluator
  • Composite: QAEvaluator, ContentSafetyEvaluator

For more in-depth information on each evaluator definition and how it's calculated, see Evaluation and monitoring metrics for generative AI.

import os

from azure.ai.evaluation import evaluate, RelevanceEvaluator, ViolenceEvaluator, BleuScoreEvaluator
from azure.identity import DefaultAzureCredential

# NLP bleu score evaluator
bleu_score_evaluator = BleuScoreEvaluator()
result = bleu_score_evaluator(
    response="Tokyo is the capital of Japan.",
    ground_truth="The capital of Japan is Tokyo."
)

# AI assisted quality evaluator
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

relevance_evaluator = RelevanceEvaluator(model_config)
result = relevance_evaluator(
    query="What is the capital of Japan?",
    response="The capital of Japan is Tokyo."
)

# AI assisted safety evaluator
azure_ai_project = {
    "subscription_id": "<subscription_id>",
    "resource_group_name": "<resource_group_name>",
    "project_name": "<project_name>",
}

violence_evaluator = ViolenceEvaluator(credential=DefaultAzureCredential(), azure_ai_project=azure_ai_project)
result = violence_evaluator(
    query="What is the capital of France?",
    response="Paris."
)

Custom evaluators

Built-in evaluators are great out of the box to start evaluating your application's generations. However, you can also build your own code-based or prompt-based evaluator to cater to your specific evaluation needs.

# Custom evaluator as a function to calculate response length
def response_length(response, **kwargs):
    return len(response)

# Custom class based evaluator to check for blocked words
class BlocklistEvaluator:
    def __init__(self, blocklist):
        self._blocklist = blocklist

    def __call__(self, *, response: str, **kwargs):
        score = any(word in response for word in self._blocklist)
        return {"score": score}

blocklist_evaluator = BlocklistEvaluator(blocklist=["bad", "worst", "terrible"])

result = response_length("The capital of Japan is Tokyo.")
result = blocklist_evaluator(response="The capital of Japan is Tokyo.")

Evaluate API

The package provides an evaluate API which can be used to run multiple evaluators together to evaluate a generative AI application's responses.

Evaluate existing dataset

from azure.ai.evaluation import evaluate

result = evaluate(
    data="data.jsonl", # provide your data here
    evaluators={
        "blocklist": blocklist_evaluator,
        "relevance": relevance_evaluator
    },
    # column mapping
    evaluator_config={
        "relevance": {
            "column_mapping": {
                "query": "${data.queries}"
                "ground_truth": "${data.ground_truth}"
                "response": "${outputs.response}"
            } 
        }
    }
    # Optionally provide your AI Studio project information to track your evaluation results in your Azure AI Studio project
    azure_ai_project = azure_ai_project,
    # Optionally provide an output path to dump a json of metric summary, row level data and metric and studio URL
    output_path="./evaluation_results.json"
)
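
For reference, here is one way a hypothetical data.jsonl for the snippet above could be produced. This is a sketch only: the field names (queries, ground_truth, response) are assumptions chosen to match the column mapping shown above, not a documented schema.

import json

# Hypothetical rows; field names are assumptions matching the column mapping above.
rows = [
    {
        "queries": "What is the capital of Japan?",
        "ground_truth": "The capital of Japan is Tokyo.",
        "response": "Tokyo is the capital of Japan.",
    },
    {
        "queries": "What is the capital of France?",
        "ground_truth": "The capital of France is Paris.",
        "response": "Paris is the capital of France.",
    },
]

# Write one JSON object per line, as expected by the evaluate API.
with open("data.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")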

For more details, refer to Evaluate on test dataset using evaluate()

Evaluate generative AI application

from askwiki import askwiki

result = evaluate(
    data="data.jsonl",
    target=askwiki,
    evaluators={
        "relevance": relevance_eval
    },
    evaluator_config={
        "default": {
            "column_mapping": {
                "query": "${data.queries}"
                "context": "${outputs.context}"
                "response": "${outputs.response}"
            } 
        }
    }
)

The above code snippet refers to the askwiki application in this sample.
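
For illustration only, a target is simply a callable that evaluate invokes once per data row. The hypothetical my_target below is a stand-in for askwiki (the parameter name and placeholder logic are assumptions); the keys of the returned dict become the ${outputs.*} columns referenced in the column mapping above.

# Hypothetical stand-in for a target such as askwiki. evaluate() calls it once per
# data row; the returned dict's keys become the ${outputs.*} columns above.
def my_target(query: str) -> dict:
    # Replace this placeholder logic with your own retrieval + generation pipeline.
    context = "Tokyo is the capital and most populous city of Japan."
    response = "The capital of Japan is Tokyo."
    return {"response": response, "context": context}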

For more details, refer to Evaluate on a target

Simulator

Simulators allow users to generate synthetic data using their application. The simulator expects the user to provide a callback method that invokes their AI application. The integration between your AI application and the simulator happens at the callback method. Here's what a sample callback might look like:

from typing import Any, Dict, List, Optional

async def callback(
    messages: Dict[str, List[Dict]],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    # Get the last message from the user
    latest_message = messages_list[-1]
    query = latest_message["content"]
    # Call your endpoint or AI application here
    # response should be a string
    response = call_to_your_application(query, messages_list, context)
    formatted_response = {
        "content": response,
        "role": "assistant",
        "context": "",
    }
    messages["messages"].append(formatted_response)
    return {"messages": messages["messages"], "stream": stream, "session_state": session_state, "context": context}

The simulator initialization and invocation look like this:

import asyncio
import os

from azure.ai.evaluation.simulator import Simulator

model_config = {
    "azure_endpoint": os.environ.get("AZURE_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_DEPLOYMENT_NAME"),
    "api_version": os.environ.get("AZURE_API_VERSION"),
}
custom_simulator = Simulator(model_config=model_config)
outputs = asyncio.run(custom_simulator(
    target=callback,
    conversation_turns=[
        [
            "What should I know about the public gardens in the US?",
        ],
        [
            "How do I simulate data against LLMs",
        ],
    ],
    max_conversation_turns=2,
))
with open("simulator_output.jsonl", "w") as f:
    for output in outputs:
        f.write(output.to_eval_qr_json_lines())
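
As a follow-up sketch (not part of the original sample), the simulated query/response pairs written above can be fed back into the evaluate API; this assumes the relevance_evaluator instance defined earlier on this page.

from azure.ai.evaluation import evaluate

# Evaluate the simulated conversations; to_eval_qr_json_lines writes query/response
# columns, which match the single-turn inputs RelevanceEvaluator expects.
eval_result = evaluate(
    data="simulator_output.jsonl",
    evaluators={"relevance": relevance_evaluator},
)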

Adversarial Simulator

import asyncio

from azure.ai.evaluation.simulator import AdversarialSimulator, AdversarialScenario
from azure.identity import DefaultAzureCredential

azure_ai_project = {
    "subscription_id": "<subscription_id>",
    "resource_group_name": "<resource_group_name>",
    "project_name": "<project_name>"
}
scenario = AdversarialScenario.ADVERSARIAL_QA
simulator = AdversarialSimulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())

outputs = asyncio.run(
    simulator(
        scenario=scenario,
        max_conversation_turns=1,
        max_simulation_results=3,
        target=callback
    )
)

print(outputs.to_eval_qr_json_lines())

For more details about the simulator, visit the following links:

Examples

In the following section you will find examples of:

More examples can be found here.

Troubleshooting

General

Please refer to troubleshooting for common issues.

Logging

This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.

Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument.
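
For example, a minimal sketch using the standard logging library; the "azure" logger name follows the usual Azure SDK convention, and logging_enable comes from the note above.

import logging
import sys

# Route Azure SDK log output to stdout at DEBUG level.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

# Per the note above, pass logging_enable=True on a client to include
# request/response bodies and unredacted headers in the DEBUG output.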

See full SDK logging documentation with examples here.

Next steps

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Release History

1.0.1 (2024-11-15)

Bugs Fixed

  • Fixed [remote] extra to be needed only when tracking results in Azure AI Studio.
  • Removed azure-ai-inference as a dependency.

1.0.0 (2024-11-13)

Breaking Changes

  • The parallel parameter has been removed from composite evaluators: QAEvaluator, ContentSafetyChatEvaluator, and ContentSafetyMultimodalEvaluator. To control evaluator parallelism, you can now use the _parallel keyword argument, though please note that this private parameter may change in the future (see the sketch after this list).
  • Parameters query_response_generating_prompty_kwargs and user_simulator_prompty_kwargs have been renamed to query_response_generating_prompty_options and user_simulator_prompty_options in the Simulator's call method.
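
A minimal sketch of the _parallel keyword argument, assuming the model_config and single-turn inputs shown earlier on this page; since _parallel is private, treat this as illustrative only.

from azure.ai.evaluation import QAEvaluator

# model_config is the Azure OpenAI configuration defined earlier in this README.
qa_evaluator = QAEvaluator(model_config=model_config)
result = qa_evaluator(
    query="What is the capital of Japan?",
    response="The capital of Japan is Tokyo.",
    context="Tokyo is the capital and largest city of Japan.",
    ground_truth="Tokyo is the capital of Japan.",
    _parallel=False,  # run the child evaluators sequentially (private, may change)
)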

Bugs Fixed

  • Fixed an issue where the output_path parameter in the evaluate API did not support relative paths.
  • Outputs of adversarial simulators are now of type JsonLineList, and the helper function to_eval_qr_json_lines now outputs context from both user and assistant turns, along with the category if it exists in the conversation.
  • Fixed an issue where, during long-running simulations, the API token would expire, causing a "Forbidden" error. Users can now set the environment variable AZURE_TOKEN_REFRESH_INTERVAL to refresh the token more frequently, preventing expiration and ensuring continuous operation of the simulation (see the example after this list).
  • Fixed the evaluate function not producing aggregated metrics if ANY of the values to be aggregated were None, NaN, or otherwise difficult to process. Such values are now ignored entirely, so the aggregated metric of [1, 2, 3, NaN] is 2, not 1.5.
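
A minimal sketch of setting the refresh interval before starting a long-running simulation; the value shown (interpreted as seconds) is an assumption, not a documented default.

import os

# Assumed value: request a token refresh roughly every 10 minutes.
os.environ["AZURE_TOKEN_REFRESH_INTERVAL"] = "600"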

Other Changes

  • Refined error messages for service-based evaluators and simulators.
  • Tracing has been disabled due to a Cosmos DB initialization issue.
  • Introduced environment variable AI_EVALS_DISABLE_EXPERIMENTAL_WARNING to disable the warning message for experimental features.
  • Changed the randomization pattern for AdversarialSimulator such that there is an almost equal number of Adversarial harm categories (e.g. Hate + Unfairness, Self-Harm, Violence, Sex) represented in the AdversarialSimulator outputs. Previously, for 200 max_simulation_results a user might see 140 results belonging to the 'Hate + Unfairness' category and 40 results belonging to the 'Self-Harm' category. Now, users will see 50 results for each of Hate + Unfairness, Self-Harm, Violence, and Sex.
  • For the DirectAttackSimulator, the prompt templates used to generate simulated outputs for each Adversarial harm category will no longer be in a randomized order by default. To override this behavior, pass randomize_order=True when you call the DirectAttackSimulator, for example:
adversarial_simulator = DirectAttackSimulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())
outputs = asyncio.run(
    adversarial_simulator(
        scenario=scenario,
        target=callback,
        randomize_order=True
    )
)

1.0.0b5 (2024-10-28)

Features Added

  • Added GroundednessProEvaluator, which is a service-based evaluator for determining response groundedness.
  • Added groundedness detection in the non-adversarial Simulator via query/context pairs:
import asyncio
import importlib.resources as pkg_resources
import json
package = "azure.ai.evaluation.simulator._data_sources"
resource_name = "grounding.json"
custom_simulator = Simulator(model_config=model_config)
conversation_turns = []
with pkg_resources.path(package, resource_name) as grounding_file:
    with open(grounding_file, "r") as file:
        data = json.load(file)
for item in data:
    conversation_turns.append([item])
outputs = asyncio.run(custom_simulator(
    target=callback,
    conversation_turns=conversation_turns,
    max_conversation_turns=1,
))
  • Added an evaluator for multimodal use cases.

Breaking Changes

  • Renamed environment variable PF_EVALS_BATCH_USE_ASYNC to AI_EVALS_BATCH_USE_ASYNC.
  • RetrievalEvaluator now requires a context input in addition to query in single-turn evaluation.
  • RelevanceEvaluator no longer takes context as an input. It now only takes query and response in single-turn evaluation.
  • FluencyEvaluator no longer takes query as an input. It now only takes response in single-turn evaluation.
  • The AdversarialScenario enum no longer includes ADVERSARIAL_INDIRECT_JAILBREAK; indirect jailbreak (XPIA) simulations should be invoked with IndirectAttackSimulator.
  • Outputs of Simulator and AdversarialSimulator previously had to_eval_qa_json_lines and now have to_eval_qr_json_lines. Where to_eval_qa_json_lines had:
{"question": <user_message>, "answer": <assistant_message>}

to_eval_qr_json_lines now has:

{"query": <user_message>, "response": <assistant_message>}

Bugs Fixed

  • The non-adversarial simulator now works with gpt-4o models using the json_schema response format.
  • Fixed an issue where the evaluate API would fail with "[WinError 32] The process cannot access the file because it is being used by another process" when the venv folder and target function file are in the same directory.
  • Fixed evaluate API failure when trace.destination is set to none.
  • The non-adversarial simulator now accepts context from the callback.

Other Changes

  • Improved error messages for the evaluate API by enhancing the validation of input parameters. This update provides more detailed and actionable error descriptions.

  • GroundednessEvaluator now supports query as an optional input in single-turn evaluation. If query is provided, a different prompt template will be used for the evaluation.

  • To align with our support of a diverse set of models, the following evaluators will now have a new key in their result output without the gpt_ prefix. To maintain backwards compatibility, the old key with the gpt_ prefix will still be present in the output; however, it is recommended to use the new key moving forward as the old key will be deprecated in the future.

    • CoherenceEvaluator
    • RelevanceEvaluator
    • FluencyEvaluator
    • GroundednessEvaluator
    • SimilarityEvaluator
    • RetrievalEvaluator
  • The following evaluators will now have a new key in their result output including the LLM reasoning behind the score. The new key will follow the pattern "<metric_name>_reason". The reasoning is the result of a more detailed prompt template being used to generate the LLM response. Note that this requires the maximum number of tokens used to run these evaluators to be increased (see the sketch after this list).

    Evaluator              New max_token for generation
    CoherenceEvaluator     800
    RelevanceEvaluator     800
    FluencyEvaluator       800
    GroundednessEvaluator  800
    RetrievalEvaluator     1600
  • Improved the error message for storage access permission issues to provide clearer guidance for users.
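
A minimal sketch of reading these keys, assuming the relevance_evaluator instance from the README above; the key names follow the patterns described in the two items above.

result = relevance_evaluator(
    query="What is the capital of Japan?",
    response="The capital of Japan is Tokyo.",
)

print(result["relevance"])         # new key, recommended going forward
print(result["gpt_relevance"])     # legacy key, kept for backwards compatibility
print(result["relevance_reason"])  # LLM reasoning behind the score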

1.0.0b4 (2024-10-16)

Breaking Changes

  • Removed the numpy dependency. All NaN values returned by the SDK have been changed from numpy.nan to math.nan.
  • credential is now required to be passed in for all content safety evaluators and ProtectedMaterialsEvaluator. DefaultAzureCredential will no longer be chosen if a credential is not passed.
  • Changed package extra name from "pf-azure" to "remote".

Bugs Fixed

  • Adversarial conversation simulations would fail with a Forbidden error. Added logic to re-fetch the token in the exponential retry logic used to retrieve the RAI service response.
  • Fixed an issue where the Evaluate API did not fail due to missing inputs when the target did not return columns required by the evaluators.

Other Changes

  • Enhance the error message to provide clearer instruction when required packages for the remote tracking feature are missing.
  • Print the per-evaluator run summary at the end of the Evaluate API call to make troubleshooting row-level failures easier.

1.0.0b3 (2024-10-01)

Features Added

  • Added type field to AzureOpenAIModelConfiguration and OpenAIModelConfiguration
  • The following evaluators now support conversation as an alternative input to their usual single-turn inputs:
    • ViolenceEvaluator
    • SexualEvaluator
    • SelfHarmEvaluator
    • HateUnfairnessEvaluator
    • ProtectedMaterialEvaluator
    • IndirectAttackEvaluator
    • CoherenceEvaluator
    • RelevanceEvaluator
    • FluencyEvaluator
    • GroundednessEvaluator
  • Surfaced RetrievalScoreEvaluator, formerly an internal part of ChatEvaluator, as a standalone conversation-only evaluator.

Breaking Changes

  • Removed ContentSafetyChatEvaluator and ChatEvaluator
  • The evaluator_config parameter of evaluate now maps evaluator names to a dictionary EvaluatorConfig, which is a TypedDict. The column_mapping between data or target and evaluator field names should now be specified inside this new dictionary:

Before:

evaluate(
    ...,
    evaluator_config={
        "hate_unfairness": {
            "query": "${data.question}",
            "response": "${data.answer}",
        }
    },
    ...
)

After:

evaluate(
    ...,
    evaluator_config={
        "hate_unfairness": {
            "column_mapping": {
                "query": "${data.question}",
                "response": "${data.answer}",
             }
        }
    },
    ...
)
  • Simulator now requires a model configuration to call the prompty instead of an Azure AI project scope. This enables the usage of the simulator with Entra ID based auth. Before:
azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("RESOURCE_GROUP"),
    "project_name": os.environ.get("PROJECT_NAME"),
}
sim = Simulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())

After:

model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_DEPLOYMENT"),
}
sim = Simulator(model_config=model_config)

If api_key is not included in the model_config, the prompty runtime in promptflow-core will pick up DefaultAzureCredential.

Bugs Fixed

  • Fixed issue where Entra ID authentication was not working with AzureOpenAIModelConfiguration

1.0.0b2 (2024-09-24)

Breaking Changes

  • data and evaluators are now required keywords in evaluate.

1.0.0b1 (2024-09-20)

Breaking Changes

  • The synthetic namespace has been renamed to simulator, and sub-namespaces under this module have been removed
  • The evaluate and evaluators namespaces have been removed, and everything previously exposed in those modules has been added to the root namespace azure.ai.evaluation
  • The parameter name project_scope in content safety evaluators has been renamed to azure_ai_project for consistency with the evaluate API and simulators.
  • Model configurations classes are now of type TypedDict and are exposed in the azure.ai.evaluation module instead of coming from promptflow.core.
  • Updated the parameter names for question and answer in built-in evaluators to more generic terms: query and response.

Features Added

  • First preview
  • This package is a port of promptflow-evals. New features will be added only to this package moving forward.
  • Added a TypedDict for AzureAIProject that allows for better intellisense and type checking when passing in project information

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

azure_ai_evaluation-1.0.1.tar.gz (570.5 kB)

Uploaded Source

Built Distribution

azure_ai_evaluation-1.0.1-py3-none-any.whl (573.3 kB)

Uploaded Python 3

File details

Details for the file azure_ai_evaluation-1.0.1.tar.gz.

File metadata

  • Download URL: azure_ai_evaluation-1.0.1.tar.gz
  • Upload date:
  • Size: 570.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: RestSharp/106.13.0.0

File hashes

Hashes for azure_ai_evaluation-1.0.1.tar.gz
Algorithm Hash digest
SHA256 07399b3b1fd7dd4d9b810ef09f31972231b3f07d6dd506793b7de6ed0ffa5b53
MD5 4bab9a8ecd6287f80023c74b2a18e491
BLAKE2b-256 ed1485f137f03f2eb9da1a0c5119429223e734f204cc4e8a9d382b6e01286974

See more details on using hashes here.

File details

Details for the file azure_ai_evaluation-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for azure_ai_evaluation-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 bb766aa5302ef5af068fc85a9f927310d931e6d3f2aa67b753f6aba6be92aa80
MD5 dd7d5d9ad19e87ab051a378852783c8b
BLAKE2b-256 f0153a144b1791ea2ac299f7d87c2f3af1548f0e9c7d9f0019fa8c1afb6c70f3

See more details on using hashes here.
