
Pluralistic alignment evaluation benchmark for LLMs


PERSONA Bench

Reproducible Testbed for Evaluating and Improving Language Model Alignment with Diverse User Values

SynthLabs.ai/research/persona | Try PERSONA Bench Online


📄 Paper | 🗃️ Research Visualizations | 🤗 Hugging Face | Online Personalization Toolkit

🌐 SynthLabs Research | 👥 Join the Team | 🤝 Let's Collaborate

PERSONA Bench is an extension of the PERSONA framework introduced in Castricato et al. 2024. It provides a reproducible testbed for evaluating and improving the alignment of language models with diverse user values.

Introduction

PERSONA established a strong correlation between human judges and language models in persona-based personalization tasks. Building on this foundation, we've developed a suite of robust evaluations to test a model's ability to perform personalization-related tasks. This repository provides practitioners with tools to assess and improve the pluralistic alignment of their language models.

There are two ways to use PERSONA Bench:

  1. Via the API: This method provides easy integration and evaluation of your models, including a novel "comparison" evaluation type. For detailed instructions, see the PERSONA API section below. To get started with the API, create an account and try the testbed here.

  2. Via InspectAI: This method allows you to run evaluations using the InspectAI framework, which provides additional visualization tools. For instructions on running with InspectAI, refer to the Running with InspectAI section.

Both methods offer comprehensive evaluation capabilities, but the API method is generally more straightforward for most users and includes the exclusive "comparison" evaluation type.

Key Features

  • 🎭 Main Evaluation: Assess personalized response generation
  • 🧩 Leave One Out Analysis: Measure attribute impact on performance
  • 🌐 Intersectionality: Evaluate model performance across different demographic intersections
  • 🎯 Pass@K: Determine attempts needed for successful personalization
  • 🔍 Comparison: Grounded personalization evaluation (API-exclusive)

Quick Start

  1. Install Poetry if you haven't already:

    curl -sSL https://install.python-poetry.org | python3 -
    
  2. Install the package:

    poetry add persona-bench
    
  3. Use in your Python script:

    from dotenv import load_dotenv
    from persona_bench import evaluate_model
    
    # optional, you can also pass the environment variables directly to evaluate_model
    load_dotenv()
    
    evaluation = evaluate_model("gpt-3.5-turbo", evaluation_type="main")
    print(evaluation.results.model_dump())
    

PERSONA API

PERSONA Bench now offers an API for easy integration and evaluation of your models. The API provides access to all evaluation types available in PERSONA Bench, including a novel evaluation type called "comparison" for grounded personalization evaluation.

Quick Start with API

  1. Install the package:

    pip install persona-bench
    
  2. Set up your API key: set your SYNTH API key as an environment variable, or pass it directly via the api_key parameter when constructing the client (see step 3).

  3. Use in your Python script:

    from persona_bench.api import PERSONAClient
    from persona_bench.api.prompt_constructor import ChainOfThoughtPromptConstructor
    
    # Create a PERSONAClient object
    client = PERSONAClient(
        model_str="your_model_name",
        evaluation_type="comparison", # Run a grounded evaluation, API exclusive!
        N=50,
        prompt_constructor=ChainOfThoughtPromptConstructor(),
        # If not set as an environment variable, pass the API key here:
        # api_key="your_api_key_here"
    )
    
    # Iterate through questions and log answers
    for idx, q in enumerate(client):
        answer = your_model_function(q["system"], q["user"])
        client.log_answer(idx, answer)
    
    # Evaluate the results
    results = client.evaluate(drop_answer_none=True)
    print(results)
    

Key Features

  • 🎭 Multiple Evaluation Types: Support for grounded, main, LOO, intersectionality, and pass@k evaluations
  • 🔧 Customizable Prompt Construction: Use default or custom prompt constructors
  • 📊 Easy Data Handling: Iterate through questions and log answers seamlessly
  • 📈 Evaluation: Evaluate model performance with a single method call

Detailed Usage

Initialization

Create a PERSONAClient object with the following parameters:

  • model_str: The identifier for this evaluation task
  • evaluation_type: Type of evaluation ("main", "loo", "intersectionality", "pass_at_k", "comparison")
  • N: Number of samples for evaluation
  • prompt_constructor: Custom prompt constructor (optional)
  • intersection: List of intersection attributes (required for intersectionality evaluation)
  • loo_attributes: Leave-one-out attributes (required for LOO evaluation)
  • seed: Random seed for reproducibility (optional)
  • url: API endpoint URL (optional, default is "https://synth-api-development.eastus.azurecontainer.io/api/v1/personas/v1/")
  • api_key: Your SYNTH API key (optional if set as an environment variable)
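For reference, a minimal initialization using only the parameters documented above might look like this (the model identifier and sample count are placeholders):

from persona_bench.api import PERSONAClient

# Minimal sketch: main evaluation over 20 samples with a fixed seed.
# model_str is just a label for this evaluation run; api_key can be omitted
# if your SYNTH API key is set as an environment variable.
client = PERSONAClient(
    model_str="my-model-run",
    evaluation_type="main",
    N=20,
    seed=42,
    # api_key="your_api_key_here",
)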

Iterating Through Questions

Use the client as an iterable to access questions:

for idx, question in enumerate(client):
    system_prompt = question["system"]
    user_prompt = question["user"]
    answer = your_model_function(system_prompt, user_prompt)
    client.log_answer(idx, answer)

Evaluation

Evaluate the logged answers:

results = client.evaluate(drop_answer_none=True, save_scores=False)

Advanced Usage

Custom Prompt Constructors

Create a custom prompt constructor by inheriting from BasePromptConstructor:

from persona_bench.api.prompt_constructor import BasePromptConstructor

class MyCustomPromptConstructor(BasePromptConstructor):
    def construct_prompt(self, persona, question):
        # Implement your custom prompt construction logic
        pass

client = PERSONAClient(
    # ... other parameters ...
    prompt_constructor=MyCustomPromptConstructor(),
)
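As an illustrative sketch, a constructor that embeds the persona directly into the system prompt might look like the following. The return format (a dict with "system" and "user" keys) is an assumption based on the question format shown above, not a documented contract:

from persona_bench.api.prompt_constructor import BasePromptConstructor

class PersonaInSystemPromptConstructor(BasePromptConstructor):
    # NOTE: the {"system": ..., "user": ...} return shape is assumed to match
    # the question dicts iterated above; check BasePromptConstructor for the
    # authoritative interface.
    def construct_prompt(self, persona, question):
        system = (
            "Answer as a user with the following persona:\n"
            f"{persona}\n"
            "Stay consistent with this persona."
        )
        return {"system": system, "user": question}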

Accessing Raw Data

Access the underlying data using indexing:

question = client[0]  # Get the first question

You can also generate answers in bulk and set them all at once:

answers = [generate_answer(q) for q in client]
client.set_answers(answers)

Evaluation Types

Comparison Evaluation (API-exclusive)

The comparison evaluation is our most advanced and grounded assessment, exclusively available through the PERSONA API. It provides a robust measure of a model's personalization capabilities by scoring against known ground-truth answers.

  • Uses carefully curated persona pairs with known distinctions
  • Presents models with questions that have objectively different answers for each persona
  • Evaluates the model's ability to generate persona-appropriate responses
  • Compares model outputs against ground-truth answers for precise accuracy measurement
  • Offers the most reliable and interpretable results among all evaluation types

Example usage:

from persona_bench.api import PERSONAClient
client = PERSONAClient(model_str="your_identifier_name", evaluation_type="comparison", N=50)

Development Setup

  1. Clone the repository:

    git clone https://github.com/SynthLabsAI/PERSONA-bench.git
    cd PERSONA-bench
    
  2. Install dependencies:

    poetry install
    
  3. Install pre-commit hooks:

    poetry run pre-commit install
    
  4. Set up HuggingFace authentication:

    huggingface-cli login
    
  5. Set up environment variables:

    cp .env.example .env
    vim .env
    

Detailed Evaluations

Main Evaluation

The main evaluation script assesses a model's ability to generate personalized responses based on personas drawn from our custom-filtered PRISM dataset.

  1. Load PRISM dataset
  2. Generate utterances using target model with random personas
  3. Evaluate using GPT-4 as a critic model via a debate approach
  4. Analyze personalization effectiveness

Leave One Out Analysis

This evaluation measures the impact of individual attributes on personalization performance.

  • Uses sub-personas separated by LOO attributes
  • Tests on multiple personas and PRISM questions
  • Analyzes feature importance

Available attributes include age, sex, race, education, employment status, and many more. See the leave one out example json for formatting.

The available attributes are:

[
  "age",
  "sex",
  "race",
  "ancestry",
  "household language",
  "education",
  "employment status",
  "class of worker",
  "industry category",
  "occupation category",
  "detailed job description",
  "income",
  "marital status",
  "household type",
  "family presence and age",
  "place of birth",
  "citizenship",
  "veteran status",
  "disability",
  "health insurance",
  "big five scores",
  "defining quirks",
  "mannerisms",
  "personal time",
  "lifestyle",
  "ideology",
  "political views",
  "religion",
  "cognitive difficulty",
  "ability to speak english",
  "vision difficulty",
  "fertility",
  "hearing difficulty"
]

Example usage:

from dotenv import load_dotenv
from persona_bench import evaluate_model

# optional, you can also pass the environment variables directly to evaluate_model
# make sure that your .env file specifies where the loo_json is!
load_dotenv()

evaluation = evaluate_model("gpt-3.5-turbo", evaluation_type="loo")
print(evaluation.results.model_dump())
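If you are using the API client instead, the equivalent run passes loo_attributes directly. The list-of-attribute-names form below is an assumption; see the leave one out example json for the authoritative format:

from persona_bench.api import PERSONAClient

# Sketch of a leave-one-out run via the API client. loo_attributes is assumed
# here to take attribute names from the list above; consult the LOO example
# JSON for the exact expected structure.
client = PERSONAClient(
    model_str="my-model-loo",
    evaluation_type="loo",
    N=50,
    loo_attributes=["age", "education", "political views"],
)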

Intersectionality

Evaluate model performance across different demographic intersections.

  • Define intersections using JSON configuration
  • Measure personalization across disjoint populations
  • Analyze model performance for specific demographic combinations

See the intersectionality example json.

This configuration defines two intersections:

  • Males aged 18-34
  • Females aged 18-34

You can use any of the attributes available in the LOO evaluation to create intersections. For attributes with non-enumerable values (e.g., textual background information), you may need to modify the intersection script to use language model embeddings for computing subpopulations.
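Example usage mirrors the LOO evaluation. The sketch below assumes evaluate_model accepts evaluation_type="intersectionality" in the same way it accepts "loo", and that your .env points to your intersectionality JSON:

from dotenv import load_dotenv
from persona_bench import evaluate_model

# Assumes the .env file specifies where the intersectionality JSON is located.
load_dotenv()

evaluation = evaluate_model("gpt-3.5-turbo", evaluation_type="intersectionality")
print(evaluation.results.model_dump())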

Pass@K

Determines how many attempts are required to successfully personalize for a given persona.

  • Reruns main evaluation K times
  • Counts attempts needed for successful personalization
  • Provides insights into model consistency and reliability

WARNING! Pass@K is very credit-intensive; a large run may take multiple hours to complete.
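As a sketch of a Pass@K run via the API client, using only the parameters documented above (any K-specific configuration is omitted, and N is kept modest given the cost warning):

from persona_bench.api import PERSONAClient

# Pass@K reruns the main evaluation multiple times per persona, so keep N small.
client = PERSONAClient(
    model_str="my-model-pass-at-k",
    evaluation_type="pass_at_k",
    N=10,
)

for idx, q in enumerate(client):
    answer = your_model_function(q["system"], q["user"])
    client.log_answer(idx, answer)

print(client.evaluate(drop_answer_none=True))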

Running with InspectAI

Configure your .env file before running the scripts. You can set the generate mode to one of the following:

  • baseline: Generate an answer directly, not given the persona
  • output_only: Generate answer given the persona, without chain of thought
  • chain_of_thought: Generate chain of thought before answering, given the persona
  • demographic_summary: Generate a summary of the persona before answering

# Activate the poetry environment
poetry shell

# Main Evaluation
inspect eval src/persona_bench/main_evaluation.py --model {model}

# Leave One Out Analysis
inspect eval src/persona_bench/main_loo.py --model {model}

# Intersectionality Evaluation
inspect eval src/persona_bench/main_intersectionality.py --model {model}

# Pass@K Evaluation
inspect eval src/persona_bench/main_pass_at_k.py --model {model}

Using Inspect AI also gives you access to its visualization tooling, which is documented here.

Visualization

We provide scripts for visualizing evaluation results:

  • visualization_loo.py: Leave One Out analysis
  • visualization_intersection.py: Intersectionality evaluation
  • visualization_pass_at_k.py: Pass@K evaluation

These scripts use the most recent log file by default; use the --log parameter to specify a different one. Local visualization is only supported by the inspect-ai backend. Visualization is also available to API users on our web portal here.

Dependencies

Key dependencies include:

  • inspect-ai
  • datasets
  • pandas
  • openai
  • instructor
  • seaborn

For development:

  • tiktoken
  • transformers

See pyproject.toml for a complete list of dependencies.

Citation

If you use PERSONA in your research, please cite our paper:

@misc{castricato2024personareproducibletestbedpluralistic,
      title={PERSONA: A Reproducible Testbed for Pluralistic Alignment},
      author={Louis Castricato and Nathan Lile and Rafael Rafailov and Jan-Philipp Fränken and Chelsea Finn},
      year={2024},
      eprint={2407.17387},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.17387},
}

Community & Support

Join our Discord community for discussions, support, and updates, or reach out to us at https://www.synthlabs.ai/contact.

Acknowledgements

This research is supported by SynthLabs. We thank our collaborators and the open-source community for their valuable contributions.


Copyright © 2024, SynthLabs. Released under the Apache License.

