CoTLab - Chain of Thought Research Toolkit for LLMs
A research toolkit for investigating Chain of Thought (CoT) reasoning, faithfulness, and mechanistic interpretability in Large Language Models.
Features
- Experiments for CoT faithfulness, patching, logit‑lens, steering, and probing
- Diverse prompt strategies (CoT, direct, adversarial, contrarian, few‑shot, etc.)
- Configurable models, datasets, and backends (vLLM + Transformers)
- Auto‑detect layers/heads at runtime
- Hydra config for easy composition and multiruns
- Project overview (DeepWiki): https://deepwiki.com/huseyincavusbi/CoTLab/1-overview
- Official docs: https://huseyincavusbi.github.io/CoTLab/
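To make the Hydra composition concrete, an experiment config might look like the following sketch. The field names here are illustrative assumptions, not CoTLab's actual schema; the group names (model, dataset, prompt) mirror the CLI examples later in this page.

```yaml
# Hypothetical Hydra config sketch -- field names are assumptions
defaults:
  - model: medgemma_4b
  - dataset: pediatrics
  - prompt: chain_of_thought

experiment:
  name: logit_lens
  top_k: 10            # overridable from the CLI: experiment.top_k=10

backend: transformers  # logit-lens needs internal activations
```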
Installation
git clone https://github.com/huseyincavusbi/CoTLab.git
cd CoTLab
uv venv cotlab --python 3.11
source cotlab/bin/activate
uv pip install -e ".[dev]"
# GPU Setup:
# NVIDIA: uv pip install vllm
# AMD ROCm: ./scripts/cotlab-rocm.sh (uses Docker)
# AMD ROCm (Transformers backend): install ROCm PyTorch wheels
# uv pip install --reinstall --index-url https://download.pytorch.org/whl/rocm6.4 torch torchvision torchaudio
# Apple Silicon: requires Python 3.12 and vllm-metal plugin
# See docs/getting-started/installation.md for Metal setup instructions
See Installation Docs for detailed GPU setup.
Backend Compatibility
CoTLab supports two inference backends with different strengths:
1. vLLM Backend (High Performance)
Best for large-scale generation experiments.
- Supported Experiments: cot_faithfulness, radiology
- Supported Models: All text-only models (e.g., gemma_270m, medgemma_27b_text_it)
- Limitation: Does NOT support activation patching or internal state access.
- Note: Gemma 3 multimodal models (e.g., medgemma_4b_it) are currently incompatible with vLLM 0.12.0 due to architecture detection issues. Use the transformers backend for these.
2. Transformers Backend (Full Access)
Best for mechanistic interpretability and activation patching.
- Supported Experiments: ALL experiments.
- Supported Models: ALL models.
- Limitation: Slower generation than vLLM.
To switch backends:
# Use vLLM (fast generation)
python -m cotlab.main backend=vllm ...
# Use Transformers (activation access)
python -m cotlab.main backend=transformers ...
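The split above (fast generation vs. activation access) can be sketched as a small backend registry. This is a minimal illustration of the design, assuming hypothetical class and attribute names; it is not CoTLab's actual API.

```python
# Hypothetical sketch of a dual-backend registry. Class names and the
# `supports_activations` flag are illustrative assumptions.
from abc import ABC, abstractmethod


class Backend(ABC):
    supports_activations: bool = False

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class VLLMBackend(Backend):
    supports_activations = False  # fast batched generation only

    def generate(self, prompt: str) -> str:
        return f"[vllm] {prompt}"


class TransformersBackend(Backend):
    supports_activations = True  # hooks expose hidden states for patching

    def generate(self, prompt: str) -> str:
        return f"[hf] {prompt}"


BACKENDS = {"vllm": VLLMBackend, "transformers": TransformersBackend}


def get_backend(name: str) -> Backend:
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"unknown backend: {name}")
```

An experiment that needs internal state would check `supports_activations` up front and fail fast, which matches the compatibility table above.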
Quick Start
# Run logit lens on MedGemma
python -m cotlab.main experiment=logit_lens model=medgemma_4b
# Find sycophancy heads
python -m cotlab.main experiment=sycophancy_heads model=medgemma_4b
# Test CoT ablation on pediatrics dataset
python -m cotlab.main experiment=cot_ablation dataset=pediatrics
# Compare prompt strategies
python -m cotlab.main -m prompt=chain_of_thought,direct_answer,sycophantic
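To show how named prompt strategies can differ, here is a toy sketch with hypothetical templates. These strings are assumptions for illustration, not CoTLab's real prompts.

```python
# Hypothetical prompt templates for three of the strategies named above.
PROMPTS = {
    "chain_of_thought": "Q: {question}\nThink step by step, then answer.",
    "direct_answer": "Q: {question}\nAnswer with the final answer only.",
    "sycophantic": "I believe the answer is {hint}. Q: {question}",
}


def build_prompt(strategy: str, question: str, hint: str = "") -> str:
    """Render the template for a given strategy."""
    return PROMPTS[strategy].format(question=question, hint=hint)
```

A multirun over `prompt=chain_of_thought,direct_answer,sycophantic` then amounts to running the same experiment once per template.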
Supported Models
CoTLab ships config files for some models, but in principle it supports any model that the selected backend can load. Mechanistic experiments can still fail for models with unusual architectures.
You can add a model config file for more control over hyperparameters, but you can also run any experiment by passing a Hugging Face model name directly.
# Use a built-in model config
python -m cotlab.main model=medgemma_4b
# Or pass any HF model name directly
python -m cotlab.main model.name=google/gemma-3-270m
If you prefer, you can pre-create model configs with cotlab-template, but this is optional: CoTLab auto-generates a model config when run with model=org/repo-id.
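Auto-generating a config from a repo id could look roughly like the sketch below. The returned fields are assumptions for illustration and do not reflect CoTLab's real config schema.

```python
# Hypothetical sketch: derive a model config dict from a HF repo id.
# Field names ("alias", "layers", "heads") are illustrative assumptions.
def auto_model_config(repo_id: str) -> dict:
    org, _, name = repo_id.partition("/")
    if not org or not name:
        raise ValueError("expected 'org/repo-id'")
    return {
        "name": repo_id,
        "alias": name.replace("-", "_"),  # e.g. gemma-3-270m -> gemma_3_270m
        "layers": "auto",  # detected at runtime, per the Configuration section
        "heads": "auto",
    }
```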
Configuration
All configs auto-detect layers/heads at runtime. Override via CLI:
python -m cotlab.main \
model=medgemma_4b \
dataset=pediatrics \
prompt=chain_of_thought \
experiment.top_k=10
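The dotted overrides above follow Hydra's key=value grammar, where `experiment.top_k=10` sets a nested field. A minimal sketch of that mechanic (a toy re-implementation, not Hydra itself):

```python
# Toy sketch of Hydra-style dotted overrides: "a.b=1" sets cfg["a"]["b"] = 1.
def apply_overrides(cfg: dict, overrides: list) -> dict:
    for override in overrides:
        key, _, raw = override.partition("=")
        *path, leaf = key.split(".")
        node = cfg
        for part in path:
            node = node.setdefault(part, {})  # create nested dicts as needed
        try:
            value = int(raw)  # crude typing: ints stay ints, rest stay strings
        except ValueError:
            value = raw
        node[leaf] = value
    return cfg
```

Hydra's real override grammar is richer (lists, deletions, sweeps), but the nesting behavior shown here is the part the CLI examples in this page rely on.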
License
MIT
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file cotlab-0.8.0.tar.gz.
File metadata
- Download URL: cotlab-0.8.0.tar.gz
- Upload date:
- Size: 205.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.24
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6ea9eaf9b51227e5c3fa1a756880d413fe6afdf678cb61807d07ffa9478ecf5f |
| MD5 | 4065ed72a4c9b90bd24a9c7ac58b581e |
| BLAKE2b-256 | c4afd56b4ab7f17ed5b9ea7ee985a9f9a7e8e0827440b20f9395226e2353d811 |
File details
Details for the file cotlab-0.8.0-py3-none-any.whl.
File metadata
- Download URL: cotlab-0.8.0-py3-none-any.whl
- Upload date:
- Size: 188.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.24
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 18dd821c35a52a02c1866f7398c46ede58bdb4d1e38555e69fef06095cda262d |
| MD5 | 95819fec876ced1230b17d89817c8ec3 |
| BLAKE2b-256 | 70e15f9711d43924f32ab3f6fa8983082f630ad046edbd7df0c57131fb7e25eb |