unbias-plus
Bias detection and debiasing using a single LLM. Analyze text for biased language, get structured results (binary label, severity, biased segments with replacements and reasoning), and a neutral rewrite—all via one fine-tuned causal language model.
Overview
Single-model pipeline: one HuggingFace causal LM does both detection and debiasing. Input text → prompt → LLM → JSON → validated BiasResult (and optional CLI/API formatting). Entry points: CLI (unbias-plus), REST API (FastAPI + demo UI), or Python (UnBiasPlus).
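The flow above (text → prompt → LLM → JSON → validated result) can be sketched end to end. Everything below is a simplified stand-in: the fake model, the minimal prompt, and the dict-based result are illustrative only, not the library's actual implementation.

```python
import json

def build_prompt(text: str) -> str:
    # Simplified stand-in for prompt.build_prompt().
    return f"Analyze the following text for bias and reply with JSON only:\n{text}"

def fake_model_generate(prompt: str) -> str:
    # Stand-in for the fine-tuned LM; returns the kind of JSON
    # the real model is trained to emit.
    return json.dumps({
        "binary_label": "biased",
        "severity": 4,
        "biased_segments": [
            {"original": "too emotional to lead",
             "replacement": "capable of leading",
             "severity": 4, "bias_type": "gender",
             "reasoning": "gender stereotype",
             "start": 10, "end": 31},
        ],
        "unbiased_text": "Women are capable of leading.",
    })

def analyze(text: str) -> dict:
    # prompt -> model -> parse: the same shape as UnBiasPlus.analyze().
    raw = fake_model_generate(build_prompt(text))
    result = json.loads(raw)  # the real parse_llm_output() also validates
    assert result["binary_label"] in ("biased", "unbiased")
    return result

result = analyze("Women are too emotional to lead.")
print(result["unbiased_text"])  # Women are capable of leading.
```

The real pipeline swaps the fake generator for a HuggingFace causal LM and validates the parsed JSON against the Pydantic BiasResult schema.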
Project structure:
unbias-plus/
├── src/unbias_plus/
│   ├── __init__.py      # UnBiasPlus, BiasResult, BiasedSegment, serve
│   ├── cli.py           # unbias-plus entry point (--text, --file, --serve)
│   ├── api.py           # FastAPI app, /health, /analyze, serve()
│   ├── pipeline.py      # UnBiasPlus: prompt → model → parse → result
│   ├── model.py         # UnBiasModel: load LM, generate(), 4-bit optional
│   ├── prompt.py        # build_prompt(text), system prompt
│   ├── parser.py        # parse_llm_output() → BiasResult
│   ├── schema.py        # BiasResult, BiasedSegment (Pydantic)
│   ├── formatter.py     # format_cli, format_dict, format_json
│   └── demo/            # bundled web UI (served at / when using --serve)
│       ├── static/      # script.js, style.css
│       └── templates/   # index.html
├── tests/
│   ├── conftest.py      # fixtures (sample_result, sample_json, …)
│   └── unbias_plus/     # test_api, test_pipeline, test_parser, …
├── pyproject.toml
└── README.md
Features
- Single-model pipeline: One HuggingFace causal LM handles both detection and debiasing (no separate classifier + generator).
- Structured output: Pydantic-validated results with binary_label (biased/unbiased), overall severity (1–5), biased_segments (original phrase, replacement, severity, bias type, reasoning, character offsets), and the full unbiased_text.
- Demo UI: --serve launches a FastAPI server that also serves a visual web interface at http://localhost:8000; no separate frontend server needed.
- CLI: Analyze from the command line with --text or --file, or start the API + UI with --serve. Optional 4-bit quantization and JSON output.
- REST API: FastAPI server with /health and /analyze (POST JSON {"text": "..."}). Model loaded at startup via lifespan.
- Python API: Use UnBiasPlus in code; call analyze(), analyze_to_cli(), analyze_to_dict(), or analyze_to_json().
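The result shape described above can be sketched dependency-free with dataclasses. Field names follow the feature list; the actual models in schema.py are Pydantic and add validation, so treat this as an illustration of the shape only.

```python
from dataclasses import dataclass, field

@dataclass
class BiasedSegment:
    original: str     # the biased phrase as it appears in the input
    replacement: str  # suggested neutral wording
    severity: int     # 1-5
    bias_type: str    # e.g. "gender"
    reasoning: str    # why the segment is considered biased
    start: int        # character offsets into the original text
    end: int

@dataclass
class BiasResult:
    binary_label: str   # "biased" | "unbiased"
    severity: int       # overall severity, 1-5
    unbiased_text: str  # full neutral rewrite
    biased_segments: list[BiasedSegment] = field(default_factory=list)

    @property
    def bias_found(self) -> bool:
        # Convenience flag derived from the binary label.
        return self.binary_label == "biased"
```

A result for unbiased input would carry binary_label "unbiased", an empty biased_segments list, and unbiased_text equal to the original.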
Requirements
- Python ≥3.10, <3.12
- CUDA 12.4 recommended (PyTorch + CUDA deps in pyproject.toml). CPU is supported with device="cpu".
Installation
The project uses uv for dependency management. Install uv, then from the project root:
uv sync
source .venv/bin/activate # or .venv\Scripts\activate on Windows
For development (tests, linting, type checking):
uv sync --dev
source .venv/bin/activate
Optional: flash-attn (GPU only)
For training or faster inference with flash attention, install the train extra (requires CUDA/nvcc to build):
uv sync --extra train
# On HPC: load CUDA first, e.g. module load cuda/12.4.0
Default uv sync does not install flash-attn, so CI and CPU-only setups work without it.
Usage
Command line
# Analyze a string
unbias-plus --text "Women are too emotional to lead."
# Analyze a file, output JSON
unbias-plus --file article.txt --json
# Start API server + demo UI (default model, port 8000)
unbias-plus --serve
unbias-plus --serve --model path/to/model --port 8000
unbias-plus --serve --load-in-4bit # reduce VRAM
Options: --model, --load-in-4bit, --max-new-tokens, --host, --port, --json.
Test the model (CLI)
After uv sync (and optionally uv sync --extra train on a GPU machine), verify the pipeline with:
# Default install (no flash-attn); use a small model or --load-in-4bit on GPU
uv run unbias-plus --text "Women are too emotional to lead."
# With your own model path
uv run unbias-plus --text "Some biased sentence." --model path/to/your/model
# JSON output
uv run unbias-plus --text "Test." --json
Or in Python (same env):
uv run python -c "
from unbias_plus import UnBiasPlus
pipe = UnBiasPlus() # or UnBiasPlus('your-model-id', load_in_4bit=True)
text = 'Women are too emotional to lead.'
print(pipe.analyze_to_cli(text))
"
REST API + Demo UI
Start the server with unbias-plus --serve (or serve() in Python). This starts a single FastAPI server that:
- Serves the visual demo UI at http://localhost:8000/
- Exposes GET /health → {"status": "ok", "model": "<model_name_or_path>"}
- Exposes POST /analyze → Body: {"text": "Your text here"}. Returns JSON matching BiasResult.
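Any HTTP client works against these endpoints. A stdlib-only sketch of a small client, assuming a server started with unbias-plus --serve on the default port:

```python
import json
import urllib.request

def build_analyze_request(
    text: str, base_url: str = "http://localhost:8000"
) -> urllib.request.Request:
    # Build the POST /analyze request; the body shape matches the API above.
    return urllib.request.Request(
        f"{base_url}/analyze",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def analyze(text: str, base_url: str = "http://localhost:8000") -> dict:
    # Send the request to a running server and decode the BiasResult JSON.
    with urllib.request.urlopen(build_analyze_request(text, base_url)) as resp:
        return json.load(resp)

# With a server running (unbias-plus --serve):
#   analyze("Women are too emotional to lead.")
```

The same round trip works with curl or requests; only the JSON body {"text": "..."} and the Content-Type header matter.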
Programmatic start:
from unbias_plus import serve
serve("your-hf-model-id", port=8000, load_in_4bit=False)
Running on a remote server or HPC node: If the server is running on a remote machine, use SSH port forwarding to access the UI in your browser:
ssh -L 8000:localhost:8000 user@your-server.com
# or through a login node to a compute node:
ssh -L 8000:gpu-node-hostname:8000 user@login-node.com
Then open http://localhost:8000. If port 8000 is already in use locally, use a different local port (e.g. -L 8001:...) and open http://localhost:8001. If you're using VS Code Remote SSH, port forwarding is handled automatically via the Ports tab.
Python API
from unbias_plus import UnBiasPlus, BiasResult, BiasedSegment
pipe = UnBiasPlus("your-hf-model-id", load_in_4bit=False)
result = pipe.analyze("Women are too emotional to lead.")
print(result.binary_label) # "biased" | "unbiased"
print(result.severity) # 1–5
print(result.bias_found) # bool
for seg in result.biased_segments:
print(seg.original, seg.replacement, seg.severity, seg.bias_type, seg.reasoning)
print(seg.start, seg.end) # character offsets in original text
print(result.unbiased_text) # full neutral rewrite
# Formatted outputs
cli_str = pipe.analyze_to_cli("...") # human-readable colored terminal output
d = pipe.analyze_to_dict("...") # plain dict
json_str = pipe.analyze_to_json("...") # pretty-printed JSON string
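Under the hood, analyze() has to pull a JSON object out of whatever raw text the model emits, since LLMs often wrap JSON in prose or code fences. A minimal stand-in for that step (the actual parse_llm_output() in parser.py also validates against BiasResult):

```python
import json

def extract_first_json(text: str) -> dict:
    # Scan for the first decodable JSON object in the model's raw output.
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch == "{":
            try:
                obj, _ = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue
    raise ValueError("no JSON object found in model output")

raw = 'Here is the analysis:\n```json\n{"binary_label": "unbiased", "severity": 1}\n```'
print(extract_first_json(raw))  # {'binary_label': 'unbiased', 'severity': 1}
```

raw_decode tolerates trailing text after the object, which is what makes this robust to fenced or chatty model output.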
Development
- Tests: pytest (see pyproject.toml for markers). Run from repo root: uv run pytest tests/.
- Linting / formatting: ruff (format + lint), config in pyproject.toml.
- Type checking: mypy with strict options, mypy_path = "src".
👥 Team
Developed by the AI Engineering team at the Vector Institute.
| Ahmed Y. Radwan | Sindhuja Chaduvula | Shaina Raza |
|---|---|---|
| Vector Institute | Vector Institute | Vector Institute |
Acknowledgement
Resources used in preparing this research are provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
This research is also supported by the European Union's Horizon Europe research and innovation programme under the AIXPERT project (Grant Agreement No. 101214389).
License
Licensed under the Apache License 2.0. See LICENSE in the repository.
Support
- Open an issue on GitHub: https://github.com/VectorInstitute/unbias-plus/issues