Persona Vectors

Docs

Extract persona-aligned activation vectors from language models and analyze how persona prompts move hidden states.

[!WARNING] This is currently very experimental 🚨

Overview

Given a set of personas and evaluation questions, this project:

  1. Formats each persona as a system prompt (short templated or long biography)
  2. Extracts hidden states at each layer with configurable token masking
  3. Averages masked hidden states across QA pairs and saves one persona-level vector per layer

The resulting vectors can be compared across layers (cosine similarity) and eventually used for steering experiments.
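The layer-wise cosine comparison can be sketched with plain NumPy. The shapes below (26 layers, hidden size 2304, matching gemma-2-2b-it) and the random "persona" vectors are illustrative stand-ins for real saved vectors:

```python
import numpy as np

def layerwise_cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-layer cosine similarity between two (num_layers, hidden_size) persona vectors."""
    num = (a * b).sum(axis=-1)
    denom = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return num / denom

# Hypothetical persona-level vectors (in practice, loaded from artifacts/).
rng = np.random.default_rng(0)
persona_a = rng.normal(size=(26, 2304))
persona_b = rng.normal(size=(26, 2304))

sims = layerwise_cosine(persona_a, persona_b)  # shape (26,), one score per layer
```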

Repository Layout

persona-vectors/
├── notebooks/
│   ├── notebook_extract.py      # Extraction pipeline (primary working script)
│   ├── notebook_compare.py      # Load saved activations and compare variants
│   └── notebook_steer.py        # Steering experiments
├── src/persona_vectors/
│   ├── activations.py           # Core extraction helpers
│   ├── analysis.py              # PCA / UMAP projections and scatter plots
│   ├── artifacts.py             # Save/load/query activation artifact helpers
│   ├── plots.py                 # Plotly figures for layer-wise analysis
│   ├── steering.py              # Steering vector computation and application
│   └── parser.py                # CLI argument parsing
├── artifacts/                   # Saved activations (gitignored)
├── docs/                        # Reference documentation
└── main.py                      # CLI entry point

Dataset loading (SynthPersonaDataset) and environment helpers come from the sibling persona-data package.

For local development, uncomment the path source in pyproject.toml and keep persona-data checked out next to this repo.
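A sketch of what the commented-out path source might look like (the exact table name and relative path are assumptions; check the repo's own pyproject.toml):

```toml
# pyproject.toml -- local development only (assumed layout)
[tool.uv.sources]
persona-data = { path = "../persona-data", editable = true }
```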

Installation

uv sync
cp .env.example .env

Python >=3.12 is required.

Quickstart

# Extract activations (run this first)
uv run python -m notebooks.notebook_extract

# Load saved activations / compare variants
uv run python -m notebooks.notebook_compare

# Build interactive persona-mean PCA and similarity plots from saved activations
uv run python main.py analyze --model google/gemma-2-9b-it --variant biography --mask-strategy answer_mean

# Compute a steering vector from saved activations
uv run python main.py steer --persona-id <UUID> --model google/gemma-2-9b-it --layer 20

Streamlit App

The Streamlit UI lives in the sibling persona-ui repo.

How It Works

Notebooks

notebook_extract.py runs a small end-to-end extraction example:

  1. Load dataset questions and answers
  2. Build masks for the selected token spans
  3. Extract activations and average them across QA pairs
  4. Save the persona-level activation tensor to disk
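Steps 2 and 3 above can be sketched as masked averaging over token positions, then averaging across QA pairs. The toy shapes (4 layers, 6 tokens, hidden size 8) and the "keep the last three tokens" mask are illustrative, not the actual mask strategies:

```python
import numpy as np

def masked_mean(hidden: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average hidden states over masked token positions.

    hidden: (num_layers, seq_len, hidden_size) states for one QA pair
    mask:   (seq_len,) boolean, True for tokens to keep (e.g. answer tokens)
    """
    kept = hidden[:, mask, :]   # (num_layers, n_kept, hidden_size)
    return kept.mean(axis=1)    # (num_layers, hidden_size)

# Toy example: 3 QA pairs, keep the last 3 (answer) tokens of each.
rng = np.random.default_rng(1)
pair_states = [rng.normal(size=(4, 6, 8)) for _ in range(3)]
answer_mask = np.array([False, False, False, True, True, True])

# Persona-level tensor: mean over QA pairs of the per-pair masked means.
persona_vector = np.mean([masked_mean(h, answer_mask) for h in pair_states], axis=0)
# persona_vector has shape (4, 8): one vector per layer
```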

notebook_compare.py uses ActivationStore to discover saved variants/personas, then compares shared persona means across variants.

notebook_steer.py loads saved activations and computes a steering vector for a selected persona.
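One common construction for such a vector is the difference of means between a persona and a baseline at a chosen layer; this sketch assumes that approach, which may not be exactly what steering.py implements:

```python
import numpy as np

def steering_vector(persona: np.ndarray, baseline: np.ndarray, layer: int) -> np.ndarray:
    """Difference-of-means steering vector at one layer.

    persona, baseline: (num_layers, hidden_size) persona-level activation means
    """
    return persona[layer] - baseline[layer]

# Hypothetical saved means for a persona and the Assistant baseline.
rng = np.random.default_rng(2)
persona_mean = rng.normal(size=(26, 2304))
baseline_mean = rng.normal(size=(26, 2304))

vec = steering_vector(persona_mean, baseline_mean, layer=20)
# vec has shape (2304,) and would be added to the residual stream during generation
```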

Saved Format

Each extraction produces:

artifacts/activations/<model_dir>/<mask_strategy>/<prompt_variant>/
├── manifest.json             # tensor shape, persona names, sample ids
└── <persona_id>.safetensors

<model_dir> is the model name with / replaced by __.

The manifest stores compact sample ids (qa.qid) instead of full question text, plus tensor shape fields used for validation. Each safetensors file contains a single activations tensor with shape (num_layers, hidden_size).
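Validation against the manifest's shape fields might look like the sketch below; the field names ("num_layers", "hidden_size") are assumptions for illustration, and the NumPy array stands in for a tensor loaded from a .safetensors file:

```python
import json
import numpy as np

def validate_against_manifest(tensor: np.ndarray, manifest: dict) -> None:
    """Check a loaded activations tensor against the manifest's shape fields."""
    expected = (manifest["num_layers"], manifest["hidden_size"])
    if tensor.shape != expected:
        raise ValueError(f"tensor shape {tensor.shape} != manifest shape {expected}")

manifest = json.loads('{"num_layers": 26, "hidden_size": 2304, "sample_ids": ["qa.q1"]}')
activations = np.zeros((26, 2304))  # stand-in for a loaded safetensors tensor
validate_against_manifest(activations, manifest)  # passes silently on a match
```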

CLI

The extract, analyze, and steer subcommands are implemented.

# Extract activations
# Defaults to all supported variants: templated and biography.
python main.py extract --model google/gemma-2-2b-it

# Extract only the Assistant baseline
python main.py extract --model google/gemma-2-2b-it --persona-id baseline_assistant

# Pick specific variants
python main.py extract --model google/gemma-2-2b-it --variants biography

# Run remotely on NDIF. If the remote fast path OOMs, extraction automatically
# retries that persona/variant with layer-chunked traces.
python main.py extract --model google/gemma-2-9b-it --backend remote

# Analyze saved activations
python main.py analyze --model google/gemma-2-9b-it --variant biography --mask-strategy answer_mean --out ./plots

# Run steering (example)
python main.py steer --layer 10 --model "google/gemma-2-9b-it" --persona-id 005e1868-4e17-47e3-94fa-0d20e8d93662

# Load steering activations extracted with a non-default mask strategy
python main.py steer --layer 10 --model "google/gemma-2-9b-it" --persona-id <UUID> --mask-strategy answer_previous

Publishing to the Hugging Face Hub

Saved activations can be packaged as a Hugging Face dataset and pushed to the Hub: one config per (model, mask_strategy) pair, with templated and biography as splits. Each row is one persona with a (num_layers, hidden_size) vector.

# One-time: huggingface-cli login (or set HF_TOKEN in .env)
uv run python scripts/push_to_hf.py \
    --model google/gemma-2-9b-it \
    --repo implicit-personalization/synth-persona-vectors

To add more personas later, re-run extract (it skips personas already in the local manifest; pass --force to re-extract them), then re-run the push script.

Loading the dataset elsewhere:

from datasets import load_dataset

ds = load_dataset(
    "implicit-personalization/synth-persona-vectors",
    "google__gemma-2-9b-it__answer_mean",
    split="biography",
)
row = ds.filter(lambda r: r["persona_id"] == "<UUID>")[0]
# row["vector"] is a (num_layers, hidden_size) list[list[float]]
