
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments

Project description

Library for Assessing Bias and Fairness in LLMs

LangFair is a Python library for conducting bias and fairness assessments of LLM use cases. This repository includes a framework for choosing bias and fairness metrics, demo notebooks, and an LLM bias and fairness technical playbook containing a thorough discussion of LLM bias and fairness risks, evaluation metrics, and best practices. Please refer to our documentation site for more details on how to use LangFair.

Bias and fairness metrics offered by LangFair fall into one of several categories, listed below.

Counterfactual Fairness Metrics
Stereotype Metrics
Toxicity Metrics
Recommendation Fairness Metrics
Classification Fairness Metrics

Quickstart

(Optional) Create a virtual environment for using LangFair

We recommend creating a new virtual environment using venv before installing LangFair. To do so, please follow the instructions here.
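For example, on macOS or Linux, an environment named .venv (the name is arbitrary) can be created and activated as follows:

python -m venv .venv
source .venv/bin/activate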

Installing LangFair

The latest version can be installed from PyPI:

pip install langfair

Usage

Below is a sample of code illustrating how to use LangFair's AutoEval class for text generation and summarization use cases. The example assumes the user has already defined parameters DEPLOYMENT_NAME, API_KEY, API_BASE, API_TYPE, API_VERSION, and a list of prompts from their use case, prompts.

Create a langchain LLM object.

from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    azure_endpoint=API_BASE,
    openai_api_type=API_TYPE,
    openai_api_version=API_VERSION,
    temperature=0.4 # User to set temperature
)

Run AutoEval for automated bias and fairness evaluation.

from langfair.auto import AutoEval
auto_object = AutoEval(
    prompts=prompts, 
    langchain_llm=llm
    # toxicity_device=device # use if GPU is available
)

results = await auto_object.evaluate()

Export the results to a .txt file and print them.

auto_object.export_results(file_name="metric_values.txt")
auto_object.print_results()

Example Notebooks

See Demo Notebooks for notebooks illustrating how to use LangFair for various bias and fairness evaluation metrics.

Choosing Bias and Fairness Metrics for an LLM Use Case

In general, bias and fairness assessments of LLM use cases do not require satisfying all possible evaluation metrics. Instead, practitioners should prioritize and concentrate on a relevant subset of metrics. To demystify metric choice for bias and fairness assessments of LLM use cases, we introduce a decision framework for selecting the appropriate evaluation metrics, as depicted in the diagram below. Leveraging the use case taxonomy outlined in the technical playbook, we determine suitable choices of bias and fairness metrics for a given use case based on its relevant characteristics.

In cases where prompts are derived from a well-defined population, we categorize the use cases into three distinct groups: 1) text generation and summarization, 2) classification, and 3) recommendation. Within each category, we carefully choose metrics to assess potential bias and fairness risks that are pertinent to the specific characteristics of the use case. The specifics of this mapping are elaborated upon below.

When considering text generation and summarization, an important factor in determining relevant bias and fairness metrics is whether the use case upholds fairness through unawareness (FTUA), meaning that all inputs provided to the model contain no mentions of protected attribute words. If FTUA cannot be achieved, we recommend evaluating bias using counterfactual discrimination and stereotype metrics. We recommend that all text generation and summarization use cases undergo toxicity evaluation, regardless of whether or not FTUA is achieved. For illustrations of how to calculate these metrics, refer to the toxicity, stereotype, and counterfactual demo notebooks or the AutoEval demo notebook for a more automated implementation.
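As a rough sketch of what a standalone metric computation looks like outside of AutoEval, the snippet below scores a set of generated responses for toxicity and stereotypes. The import paths follow the library's module layout, but the constructor arguments and evaluate() signatures shown here are assumptions and may differ from the released API; prompts and responses stand in for the use case's inputs and the corresponding generated outputs.

from langfair.metrics.toxicity import ToxicityMetrics
from langfair.metrics.stereotype import StereotypeMetrics

# prompts/responses are assumed to come from the use case's generation step
tox_result = ToxicityMetrics().evaluate(prompts=prompts, responses=responses)
stereo_result = StereotypeMetrics().evaluate(responses=responses)
print(tox_result, stereo_result)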

To determine suitable bias and fairness metrics for text classification use cases, we adopt a modified version of the straightforward decision framework proposed by Aequitas. This framework can be applied to any classification use case where inputs can be attributed to a protected attribute. In such cases, the following approach is recommended: if fairness necessitates that model predictions exhibit approximately equal predicted prevalence across different groups, representation fairness metrics should be used; otherwise, error-based fairness metrics should be used. For error-based fairness, practitioners should focus on disparities in false negatives, assessed by false negative rate disparity (FNRD) and false omission rate disparity (FORD), if the model is used to assign assistive interventions, and on disparities in false positives, assessed by false positive rate disparity (FPRD) and false discovery rate disparity (FDRD), if interventions are punitive. In the context of fairness, if interventions are punitive, and hence can hurt individuals, it is undesirable for a model to produce false positives disproportionately for any protected attribute group. Analogously, having a model produce false negatives disproportionately for any protected attribute group is undesirable in the case of assistive interventions. If inputs cannot be mapped to a protected attribute, meaning they are not person-level inputs and they satisfy FTUA, then we argue that no fairness assessment is needed. For such classification fairness assessments, use case owners should refer to the classification metrics demo notebook.
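For concreteness, a minimal sketch of such an assessment is shown below; the ClassificationMetrics import reflects the library's module layout, but the evaluate() arguments (groups, y_pred, y_true) are assumptions based on the documentation and may differ from the released API.

from langfair.metrics.classification import ClassificationMetrics

# group: protected attribute group for each input
# y_pred / y_true: the model's binary predictions and the true labels
cm = ClassificationMetrics()
result = cm.evaluate(groups=group, y_pred=y_pred, y_true=y_true)
print(result)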

For recommendation use cases, we argue that counterfactual discrimination is a risk if FTUA cannot be satisfied, as shown by Zhang et al., 2023. For these scenarios, we recommend practitioners assess counterfactual fairness in recommendations using the recommendation fairness metrics. However, if FTUA is satisfied, then we argue no fairness assessment is needed. For illustrations of how to calculate these metrics, refer to the recommendation metrics demo notebook.
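To build intuition for what these metrics capture, the toy snippet below computes a mean pairwise Jaccard similarity between recommendation lists generated from counterfactual prompt pairs (e.g., the same prompt with different gendered names substituted). This is an illustrative calculation in plain Python, not the LangFair API; the library's actual interface is covered in the recommendation metrics demo notebook.

from typing import List

def pairwise_jaccard(recs_a: List[List[str]], recs_b: List[List[str]]) -> float:
    # recs_a[i] and recs_b[i] are recommendation lists generated from the i-th
    # counterfactual prompt pair; values near 1 mean the lists are similar across
    # groups, values near 0 mean they diverge.
    sims = []
    for a, b in zip(recs_a, recs_b):
        union = set(a) | set(b)
        sims.append(len(set(a) & set(b)) / len(union) if union else 1.0)
    return sum(sims) / len(sims)

# Toy example: recommendations returned for a counterfactual prompt pair
group_a_recs = [["song A", "song B", "song C"]]
group_b_recs = [["song A", "song D", "song E"]]
print(pairwise_jaccard(group_a_recs, group_b_recs))  # 0.2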

Lastly, we classify the remaining subset of focused use cases as having minimal risks in terms of bias and fairness. These use cases include tasks such as code-writing, personal learning, and any other scenarios where the use case owner manually evaluates each generated output. As these use cases inherently involve human-in-the-loop feedback, we argue that any concerns regarding bias or fairness can be adequately addressed by the use case owner who is reviewing the outputs.

Associated Research

A technical description of LangFair's evaluation metrics and a practitioner's guide for selecting evaluation metrics are contained in this paper. Below is the bibtex entry for this paper:

@misc{bouchard2024actionableframeworkassessingbias,
  title={An Actionable Framework for Assessing Bias and Fairness in Large Language Model Use Cases},
  author={Dylan Bouchard},
  year={2024},
  eprint={2407.10853},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.10853},
}

Code Documentation

Please refer to our documentation site for more details on how to use LangFair.

Development Team

The open-source version of LangFair is the culmination of extensive work carried out by a dedicated team of developers. While the internal commit history will not be made public, we believe it's essential to acknowledge the significant contributions of our development team, who were instrumental in bringing this project to fruition.

Contributing

Contributions are welcome. Please refer here for instructions on how to contribute to LangFair.

