Keeping language models honest by directly eliciting knowledge encoded in their activations

Project description

Introduction

WIP: This codebase is under active development

Because language models are trained to predict the next token in naturally occurring text, they often reproduce common human errors and misconceptions, even when they "know better" in some sense. More worryingly, when models are trained to generate text that is rated highly by humans, they may learn to output false statements that human evaluators can't detect. We aim to circumvent this issue by directly **eliciting latent knowledge** (ELK) inside the activations of a language model.

Specifically, we're building on the Contrastive Representation Clustering (CRC) method described in the paper Discovering Latent Knowledge in Language Models Without Supervision by Burns et al. (2022). In CRC, we search for features in the hidden states of a language model that satisfy certain logical consistency requirements. These features often turn out to be useful for question-answering and text classification tasks, even though they are found without any labels.
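To give a feel for the kind of consistency requirement involved, here is a toy sketch of the CCS objective from Burns et al.: a probe applied to the hidden states of a statement and its negation should give complementary, confident probabilities. Everything below (the linear probe, the random vectors standing in for real activations) is illustrative only, not the actual elk implementation.

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def ccs_loss(w, pos_acts, neg_acts):
    """Average CCS loss of a linear probe w over contrast pairs.

    For each pair, p+ = sigmoid(w . h+) and p- = sigmoid(w . h-):
      consistency: (p+ - (1 - p-))^2  -- the two answers should be complementary
      confidence:  min(p+, p-)^2      -- discourages the degenerate p+ = p- = 0.5
    """
    total = 0.0
    for hp, hn in zip(pos_acts, neg_acts):
        p_pos = sigmoid(sum(wi * xi for wi, xi in zip(w, hp)))
        p_neg = sigmoid(sum(wi * xi for wi, xi in zip(w, hn)))
        consistency = (p_pos - (1.0 - p_neg)) ** 2
        confidence = min(p_pos, p_neg) ** 2
        total += consistency + confidence
    return total / len(pos_acts)

# Random stand-ins for hidden states of "X is true" / "X is false" prompts.
random.seed(0)
dim, n = 4, 8
pos = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
neg = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
w = [random.gauss(0, 1) for _ in range(dim)]
print(round(ccs_loss(w, pos, neg), 4))
```

Training a reporter amounts to minimizing a loss of this shape over the probe parameters; no labels appear anywhere in the objective.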

Quick Start

Our code is based on PyTorch and HuggingFace Transformers. We test the code on Python 3.10 and 3.11.

First install the package with pip install -e . in the root directory, or pip install -e ".[dev]" if you'd like to contribute to the project (the quotes keep shells like zsh from expanding the brackets; see the Development section below). This should install all the necessary dependencies.

To fit reporters for a given HuggingFace model and dataset, just run:

elk elicit microsoft/deberta-v2-xxlarge-mnli imdb

This will automatically download the model and dataset, run the model and extract the relevant representations if they aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the elk-reporters folder in your home directory. It will also evaluate the reporter classification performance on a held out test set and save it to a CSV file in the same folder.
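The exact layout under elk-reporters may vary between versions, but since the text above pins down the folder's location, a short sketch for finding any evaluation CSVs written so far:

```python
from pathlib import Path

# Default output location described above (a convention of elk, per this README).
reporter_dir = Path.home() / "elk-reporters"

# Collect any evaluation CSVs written so far; the glob is simply
# empty if no reporters have been trained yet.
eval_csvs = sorted(reporter_dir.glob("**/*.csv"))
for path in eval_csvs:
    print(path)
```

Each run gets its own subfolder (with a memorable name like naughty-northcutt, used in the eval example below), so the recursive glob covers all runs at once.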

The following generates a CCS (Contrast Consistent Search) reporter instead of the default CRC-based reporter:

elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs

The following command will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an eval.csv and cfg.yaml file, which are stored under a subfolder in elk-reporters/naughty-northcutt/transfer_eval.

elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb

The following runs elicit on the Cartesian product of the listed models and datasets, storing the results in a special folder ELK_DIR/sweeps/<memorable_name>. Moreover, --add_pooled adds an additional dataset that pools all of the listed datasets together. You can also add a --visualize flag to visualize the results of the sweep.

elk sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
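Concretely, the sweep above expands to every (model, dataset) pair, which can be sketched with itertools (the brace expansion gpt2-{medium,large,xl} is done by the shell before elk sees it):

```python
from itertools import product

models = ["gpt2-medium", "gpt2-large", "gpt2-xl"]
datasets = ["imdb", "amazon_polarity"]

# Every (model, dataset) pair the sweep will run elicit on.
runs = list(product(models, datasets))
print(len(runs))  # 3 models x 2 datasets = 6 runs
```

With --add_pooled, a seventh configuration per model is added that trains on all the datasets pooled together.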

Running elk plot with no arguments plots the results from the most recent sweep. If you want to plot a specific sweep, you can do so with:

elk plot {sweep_name}

Caching

The hidden states resulting from elk elicit are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually ~/.cache/huggingface/datasets.
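The "usually" above is because HuggingFace lets you relocate the cache; the standard HF_DATASETS_CACHE environment variable overrides the default location. A sketch of how the effective cache directory is resolved:

```python
import os

# Default HuggingFace datasets cache location, as described above.
default_cache = os.path.expanduser(
    os.path.join("~", ".cache", "huggingface", "datasets")
)

# HF_DATASETS_CACHE, if set, redirects the cache (e.g. to a larger disk).
cache_dir = os.environ.get("HF_DATASETS_CACHE", default_cache)
print(cache_dir)
```

Deleting the relevant subfolder of this cache forces the hidden states to be recomputed on the next elicit run.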

Development

Use pip install pre-commit && pre-commit install in the root folder before your first commit.

Devcontainer

Open in Remote - Containers

Run tests

pytest

Run type checking

We use pyright, which powers Pylance, the type checker bundled with the VSCode Python extension. If you'd like to run it as a standalone tool, it requires a Node.js installation.

pyright

Run the linter

We use ruff. It is installed as a pre-commit hook, so you don't have to run it manually. If you want to run it manually, you can do so with:

ruff . --fix

Contributing to this repository

If you work on a new feature, fix, or other code task, make sure to create an issue and assign it to yourself (and consider sharing it in the elk channel of Eleuther's Discord with a short note). That way, others know you're working on the issue, nobody duplicates the work, and people can contact you easily. 👍

Download files

Download the file for your platform.

Source Distribution

eleuther-elk-0.1.1.tar.gz (290.3 kB)

Uploaded Source

Built Distribution

eleuther_elk-0.1.1-py3-none-any.whl (458.4 kB)

Uploaded Python 3

File details

Details for the file eleuther-elk-0.1.1.tar.gz.

File metadata

  • Download URL: eleuther-elk-0.1.1.tar.gz
  • Upload date:
  • Size: 290.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for eleuther-elk-0.1.1.tar.gz

  • SHA256: f15c8aa8312a133900c04400135da159f2969965358f7b71f18a23e4284382f7
  • MD5: 4aa2a9ffedc3bb37334053c8b15d39fc
  • BLAKE2b-256: 68ff241a0ba4bd2bd8f7c18830f6105c1ce4106ecb50061970cd1be1387537bf

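To check that a downloaded file matches one of the digests listed on this page, a minimal sketch using the standard library (the filename below is the sdist from this release; adjust the path to wherever you saved it):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "f15c8aa8312a133900c04400135da159f2969965358f7b71f18a23e4284382f7"
# After downloading the sdist:
# assert sha256_of("eleuther-elk-0.1.1.tar.gz") == expected
```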

File details

Details for the file eleuther_elk-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: eleuther_elk-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 458.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for eleuther_elk-0.1.1-py3-none-any.whl

  • SHA256: 8522b095453134c9be918f5a1b1a7c671ad00172c70aba5903413d99b97a8659
  • MD5: ad875c6e416aaaaa09cfdbb3ade8a342
  • BLAKE2b-256: c24bee3e7c1d3e4e2ed0d876ad4295d4fc45445ed5583772075e75362ff72db2

