OpenTelemetry GenAI Utils

Project description

This package plugs the deepeval metrics suite into the OpenTelemetry GenAI evaluation pipeline. When it is installed, a Deepeval evaluator is registered automatically and, unless explicitly disabled, runs for every LLM/agent invocation alongside the built-in metrics.

Installation

Install the evaluator (and its runtime dependencies) from PyPI:

pip install opentelemetry-util-genai-evals-deepeval

The command automatically pulls in opentelemetry-util-genai, deepeval, and openai, so the evaluator is ready to use immediately after installation.

Requirements

  • opentelemetry-util-genai together with deepeval and openai – these are installed automatically when you install this package.

  • An LLM provider supported by Deepeval. By default the evaluator uses OpenAI’s gpt-4o-mini model, which balances latency and cost well for judge workloads, so make sure OPENAI_API_KEY is set. To override the model, set DEEPEVAL_EVALUATION_MODEL (or DEEPEVAL_MODEL / OPENAI_MODEL) to a different deployment and provide the corresponding provider credentials.

  • (Optional) DEEPEVAL_API_KEY if your Deepeval account requires it.
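All of the settings above are supplied through environment variables. A minimal sketch, assuming an OpenAI judge (the key and model values below are placeholders, not real credentials):

```shell
# Credentials for the default OpenAI judge model (gpt-4o-mini)
export OPENAI_API_KEY="sk-placeholder"

# Optional: point the evaluator at a different judge model
export DEEPEVAL_EVALUATION_MODEL="gpt-4o"

# Optional: only needed if your Deepeval account requires it
export DEEPEVAL_API_KEY="deepeval-placeholder"
```

Set these in the environment of the process that runs your instrumented application.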

Configuration

Use OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS to select the metrics that should run. Leaving the variable unset enables every registered evaluator with its default metric set. Examples:

  • OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS=Deepeval – run the default Deepeval bundle (Bias, Toxicity, Answer Relevancy, Faithfulness).

  • OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS=Deepeval(LLMInvocation(bias(threshold=0.75))) – override the Bias threshold for LLM invocations and skip the remaining metrics.

  • OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS=none – disable the evaluator entirely.
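For example, the second form above can be set like this (single quotes keep the shell from interpreting the parentheses):

```shell
# Run only the Deepeval evaluator, overriding the Bias threshold
# for LLM invocations and skipping the remaining metrics.
export OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS='Deepeval(LLMInvocation(bias(threshold=0.75)))'
```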

Results are emitted through the standard GenAI evaluation emitters (events, metrics, spans). Each metric includes helper attributes such as deepeval.success, deepeval.threshold, and any evaluation model metadata returned by Deepeval. Metrics that cannot run because required inputs are missing (for example, Faithfulness without a retrieval_context) are marked with label="skipped" and carry a deepeval.error attribute, so you can either supply the missing data or disable that metric explicitly.
