LLM red-teaming and adversarial testing framework

vauban

An MLX-native toolkit for understanding and reshaping how language models behave on Apple Silicon.

Named after Sébastien Le Prestre de Vauban — the military engineer who mastered both siege and fortification. Vauban works both sides: break a model's safety alignment, or harden it against attacks.

What it does

Refusal in language models is mediated by a single direction in activation space (Arditi et al., 2024). Vauban operates directly on this geometry:

  • Measure a behavioral direction from the model's activations
  • Cut it from the weights (abliteration)
  • Probe per-layer projections to see what the model encodes
  • Steer generation at runtime by modifying activations mid-forward-pass
  • Map the full refusal surface across diverse prompts
  • Optimize cut parameters automatically (Optuna search)
  • Soft-prompt — optimize learnable prefixes in embedding space (GCG, continuous, EGD)
  • Sanitize inputs iteratively before they reach the model (SIC)
  • Detect whether a model has been hardened against abliteration
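The "measure" step above rests on difference-in-means: average the last-token activations over each prompt set and subtract. A minimal numpy sketch of that idea (function and argument names are illustrative, not vauban's API):

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Difference-in-means direction at one layer.

    harmful_acts, harmless_acts: (n_prompts, d_model) arrays of
    last-token activations captured at the same layer.
    Returns a unit vector pointing from harmless toward harmful.
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)
```

In the real pipeline this is computed per layer and the layer with the highest separation is kept.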

Everything runs natively on Apple Silicon via MLX — no CUDA, no Docker, no hooks. All configuration lives in TOML files.

Requirements

  • Apple Silicon Mac (M1 or later)
  • Python >= 3.12
  • uv package manager

Install

Use from PyPI (recommended)

uv tool install vauban
uv tool update-shell

Then open a new shell and run:

vauban --help

Install from source (development)

git clone https://github.com/teilomillet/vauban.git
cd vauban
uv tool install --editable .

Quick start

1. Open the built-in manual (start here):

vauban man quickstart

2. Generate a starter config (run.toml):

vauban init --mode default --output run.toml

This writes:

[model]
path = "mlx-community/Llama-3.2-3B-Instruct-4bit"

[data]
harmful = "default"
harmless = "default"

path is a HuggingFace model ID — it downloads automatically on first run. "default" uses the bundled prompt sets (128 harmful + 128 harmless).

3. Validate (recommended):

vauban --validate run.toml

Checks types, ranges, file paths, and mode conflicts — without loading any model. It also validates JSONL schemas (prompt/label/category fields) and prints actionable "fix:" hints for ambiguous or broken configs.
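The kind of per-record JSONL check described above can be sketched like this (field names come from the docs; the hint wording and helper name are illustrative, not vauban's code):

```python
import json

REQUIRED = {"prompt"}              # every record needs a prompt
OPTIONAL = {"label", "category"}   # extra keys get flagged

def check_jsonl_line(line, lineno):
    """Return a list of 'fix:' style hints for one JSONL record."""
    try:
        rec = json.loads(line)
    except json.JSONDecodeError:
        return [f"line {lineno}: fix: not valid JSON"]
    hints = []
    for key in sorted(REQUIRED - rec.keys()):
        hints.append(f"line {lineno}: fix: add required field '{key}'")
    for key in sorted(rec.keys() - REQUIRED - OPTIONAL):
        hints.append(f"line {lineno}: fix: unknown field '{key}'")
    return hints
```

A clean record yields an empty list; anything else yields concrete hints rather than a bare failure.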

4. Run:

vauban run.toml

Output lands in output/ — a complete model directory you can load directly:

import mlx_lm
model, tok = mlx_lm.load("output")

Minimal TOML example

Copy this into run.toml:

[model]
path = "mlx-community/Llama-3.2-3B-Instruct-4bit"

[data]
harmful = "default"
harmless = "default"

Then run:

vauban --validate run.toml
vauban run.toml

Most useful commands

Use these before touching Python code:

vauban man
vauban man quickstart
vauban man commands
vauban man playbook
vauban man print

The manual is generated from typed config dataclasses plus parser constraints, so defaults and field types stay in sync with code.
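Generating docs from typed dataclasses can be sketched as follows; the config class and renderer here are hypothetical examples of the approach, not vauban's actual generator:

```python
from dataclasses import MISSING, dataclass, fields

@dataclass
class ModelConfig:  # hypothetical config section for illustration
    path: str = "mlx-community/Llama-3.2-3B-Instruct-4bit"
    alpha: float = 1.0

def render_manual(cls):
    """One manual line per field, read straight from the dataclass,
    so names, types, and defaults cannot drift from the code."""
    out = []
    for f in fields(cls):
        default = "" if f.default is MISSING else f" = {f.default!r}"
        out.append(f"{f.name}: {getattr(f.type, '__name__', f.type)}{default}")
    return out
```

Because the manual is rendered from the same objects the parser uses, a changed default shows up in `vauban man` automatically.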

Config scaffolding:

vauban init --help
vauban init --mode probe --output probe.toml

Report comparison:

vauban diff run_a/output run_b/output
vauban diff --format markdown run_a/output run_b/output
vauban diff --threshold 0.05 run_a/output run_b/output

--threshold is a CI gate: it exits with code 1 if any metric delta exceeds the threshold.
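The gate logic can be sketched as a small function (a sketch of the concept only; vauban's actual metric names and report format are not shown here):

```python
def gate(metrics_a, metrics_b, threshold):
    """Exit-code logic of a threshold gate: 1 if any metric shared by
    both reports differs by more than `threshold`, else 0."""
    for name in metrics_a.keys() & metrics_b.keys():
        if abs(metrics_a[name] - metrics_b[name]) > threshold:
            return 1
    return 0
```

In CI you would run `vauban diff --threshold 0.05 ...` and let the nonzero exit status fail the job.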

How the default pipeline works

  1. Measure — runs both prompt sets through the model, captures per-layer activations at the last token position, computes the difference-in-means, and picks the layer with the highest separation. Output: a refusal direction vector.
  2. Cut — removes the direction from each layer's weight matrices via rank-1 projection: W = W - alpha * (W @ d) * d.
  3. Export — writes modified weights + tokenizer + config as a loadable model directory.
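Step 2's rank-1 projection can be written out in numpy (an illustrative sketch, not vauban's implementation): with a unit direction d and alpha = 1, the updated W maps nothing onto d.

```python
import numpy as np

def cut_weight(W, d, alpha=1.0):
    """Rank-1 removal of direction d from weight matrix W:
    W <- W - alpha * (W @ d) d^T, with d normalized to unit length."""
    d = d / np.linalg.norm(d)
    return W - alpha * np.outer(W @ d, d)
```

Applied per layer across the weight matrices listed in the measure result, this is the whole "cut".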

Add [eval] for post-cut evaluation (refusal rate, perplexity, KL divergence) and [surface] for full refusal landscape mapping before and after the cut.
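Of the [eval] metrics, KL divergence compares the modified model's next-token distribution against the original's. In numpy terms (a sketch of the metric itself, not vauban's eval code):

```python
import numpy as np

def kl_divergence(logits_p, logits_q):
    """KL(P || Q) between two next-token distributions given as logits;
    0 means the cut left this distribution unchanged."""
    p = np.exp(logits_p - logits_p.max())
    p /= p.sum()
    q = np.exp(logits_q - logits_q.max())
    q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

Low KL on harmless prompts alongside a dropped refusal rate is the usual sign of a clean cut.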

Pipeline modes

The TOML sections you include determine what vauban does. The default is measure-cut-export, but specialized sections activate different pipelines:

Section       What it does                                                            Output
(default)     Measure refusal direction, cut it, export modified model                model directory
[surface]     Map the refusal landscape before and after                              surface_report.json
[eval]        Refusal rate, perplexity, KL divergence                                 eval_report.json
[detect]      Check if a model has been hardened against abliteration                 detect_report.json
[depth]       Deep-thinking token analysis                                            depth_report.json
[probe]       Per-layer projection inspection                                         probe_report.json
[steer]       Runtime steered generation                                              steer_report.json
[optimize]    Optuna search for best cut parameters                                   optimize_report.json
[softprompt]  Optimize learnable prefixes in embedding space (GCG, continuous, EGD)   softprompt_report.json
[sic]         Iterative input sanitization (SIC)                                      sic_report.json

Early-return precedence is: [depth] > [probe] > [steer] > [sic] > [optimize] > [softprompt]. Use --validate to catch conflicts.
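That precedence amounts to "first specialized section present wins". A sketch of the documented order (illustrative, not vauban's dispatcher):

```python
PRECEDENCE = ["depth", "probe", "steer", "sic", "optimize", "softprompt"]

def select_pipeline(config):
    """Pick the pipeline for a parsed TOML config (a dict of sections):
    the first specialized section present, in precedence order, wins;
    otherwise fall back to the default measure-cut-export pipeline."""
    for section in PRECEDENCE:
        if section in config:
            return section
    return "default"
```

So a config containing both [probe] and [steer] runs the probe pipeline, which is exactly the kind of conflict --validate warns about.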

Python API

For custom workflows beyond TOML configs:

import mlx_lm
from vauban import measure, cut, export_model, load_prompts, default_prompt_paths
from mlx.utils import tree_flatten

# Load model
model, tok = mlx_lm.load("mlx-community/Llama-3.2-3B-Instruct-4bit")

# Load prompt sets
harmful = load_prompts(default_prompt_paths()[0])
harmless = load_prompts(default_prompt_paths()[1])

# Measure the refusal direction
result = measure(model, tok, harmful, harmless)

# Cut it from the weights
weights = dict(tree_flatten(model.parameters()))
modified = cut(weights, result.direction, list(range(len(model.model.layers))))

# Export
export_model("mlx-community/Llama-3.2-3B-Instruct-4bit", modified, "output")

The API also exposes probe(), steer(), evaluate(), and map_surface() — see the getting-started guide for usage.

Documentation

Full docs at vauban.readthedocs.io.

Resource                     Description
Spinning Up in Abliteration  Seven-part progressive curriculum — theory to production
Getting Started              Guided walkthrough — all pipeline modes, data formats, Python API
Configuration Reference      Every TOML field documented
Surface Mapping              Surface mapping reference and dataset format
examples/config.toml         Annotated config with every field documented

License

Apache-2.0
