The Official Elluminate SDK

elluminate SDK

The elluminate SDK provides a convenient way to interact with the elluminate platform programmatically. It lets developers evaluate and optimize prompts, manage experiments, and integrate elluminate's evaluation capabilities directly into their applications.

Installation

Install the elluminate SDK using pip:

pip install elluminate

📚 Full Documentation

The full elluminate documentation, including the SDK reference, is available at: https://docs.elluminate.de/

Quick Start

Prerequisites

Before you begin, you'll need to set up your API key:

  1. Visit your project's "Keys" dashboard to create a new API key
  2. Export your API key and service address as environment variables:
export ELLUMINATE_API_KEY=<your_api_key>
export ELLUMINATE_BASE_URL=<your_elluminate_service_address>

Never commit your API key to version control. For detailed information about API key management and security best practices, see our API Key Management Guide.
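Since the client reads these variables at startup, a quick sanity check before initializing can save a confusing error later. This is a generic sketch using the variable names above, not part of the SDK itself:

```python
import os

def missing_env(names=("ELLUMINATE_API_KEY", "ELLUMINATE_BASE_URL")):
    """Return the names of environment variables that are unset or empty."""
    return [name for name in names if not os.environ.get(name)]
```

Call `missing_env()` before constructing the client and fail fast if it returns anything.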

Basic Usage

Here's a simple example to evaluate your first prompt:

from elluminate import Client

# Initialize the client
client = Client()

# Create a prompt template
template, _ = client.get_or_create_prompt_template(
    name="Concept Explanation",
    messages=[{"role": "user", "content": "Explain the concept of {{concept}} in simple terms."}],
)

# Generate evaluation criteria for the template
template.get_or_generate_criteria()

# Create a collection with test cases
collection, _ = client.get_or_create_collection(
    name="Concept Variables",
    defaults={
        "description": "Template variables for concept explanations",
        "variables": [{"concept": "recursion"}],
    },
)

# Run a complete experiment (generates responses + rates them)
experiment = client.run_experiment(
    name="Concept Evaluation Test",
    prompt_template=template,
    collection=collection,
    description="Evaluating concept explanation responses",
)

# Print results
for response in experiment.responses():
    print(f"Response: {response.response_str}")
    for rating in response.ratings:
        print(f"  Criterion: {rating.criterion.criterion_str}")
        print(f"  Rating: {rating.rating}")

Alternative Client Initialization

You can also initialize the client by passing the API key and/or base URL directly:

client = Client(api_key="your-api-key", base_url="your-base-url")

Advanced Features

Batch Evaluation with Experiments

For evaluating prompts across multiple test cases:

from elluminate import Client
from elluminate.schemas import RatingMode

client = Client()

# Create a prompt template
template, _ = client.get_or_create_prompt_template(
    name="Math Teaching Prompt",
    messages=[{"role": "user", "content": "Explain {{math_concept}} to a {{grade_level}} student using simple examples."}],
)

# Generate evaluation criteria
template.get_or_generate_criteria()

# Create a collection with multiple test cases
collection, _ = client.get_or_create_collection(
    name="Math Teaching Test Cases",
    defaults={"description": "Various math concepts and grade levels"},
)

# Add test cases in batch
collection.add_many(
    variables=[
        {"math_concept": "fractions", "grade_level": "5th grade"},
        {"math_concept": "algebra", "grade_level": "8th grade"},
        {"math_concept": "geometry", "grade_level": "6th grade"},
    ]
)

# Run the experiment (handles all response generation and rating)
experiment = client.run_experiment(
    name="Math Teaching Evaluation",
    prompt_template=template,
    collection=collection,
    description="Evaluating math explanations across different concepts and grade levels",
    rating_mode=RatingMode.DETAILED,  # Get reasoning with ratings
)

# Print results for each response
for response in experiment.responses():
    variables = response.prompt.template_variables.input_values
    print(f"\nConcept: {variables['math_concept']}, Grade: {variables['grade_level']}")
    print(f"Response: {response.response_str[:100]}...")

    for rating in response.ratings:
        print(f"  • {rating.criterion.criterion_str}: {rating.rating}")
        if rating.reasoning:
            print(f"    Reasoning: {rating.reasoning}")
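Beyond printing each rating, you will often want an aggregate score per response. A minimal sketch over plain rating values — adapt the comparison to the SDK's actual `RatingValue` schema:

```python
def pass_rate(ratings):
    """Fraction of ratings equal to "YES"; returns 0.0 for an empty list."""
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r == "YES") / len(ratings)
```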

Evaluating External Agents

To evaluate responses from external systems (LangChain agents, OpenAI Assistants, custom APIs):

from elluminate import Client
from elluminate.schemas import RatingValue

client = Client()

# Set up template and collection
template, _ = client.get_or_create_prompt_template(
    name="Agent Evaluation",
    messages=[{"role": "user", "content": "Answer: {{question}}"}],
)
template.get_or_generate_criteria()

collection, _ = client.get_or_create_collection(
    name="Agent Test Cases",
    defaults={"variables": [{"question": "What is Python?"}]},
)

# Create experiment WITHOUT auto-generation
experiment = client.create_experiment(
    name="External Agent Eval",
    prompt_template=template,
    collection=collection,
)

# Get responses from your external agent
external_responses = ["Python is a high-level programming language..."]
template_vars = list(collection.items())

# Upload responses and rate them
experiment.add_responses(responses=external_responses, template_variables=template_vars)
experiment.rate_responses()

# Analyze results
for response in experiment.responses():
    passed = sum(1 for r in response.ratings if r.rating == RatingValue.YES)
    print(f"Pass rate: {passed}/{len(response.ratings)}")
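In practice, `external_responses` would come from calling your agent once per test case. A hypothetical sketch, where `my_agent` stands in for your LangChain chain, OpenAI Assistant, or custom API call:

```python
def my_agent(question: str) -> str:
    # Hypothetical stand-in for your real agent; replace with your own call.
    return f"{question} Here is my answer..."

test_cases = [{"question": "What is Python?"}, {"question": "What is an API?"}]
external_responses = [my_agent(tc["question"]) for tc in test_cases]
```

The responses must line up index-for-index with the template variables passed to `add_responses`.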

Additional Resources
