# ThinkPack

Tools for preventing think collapse in reasoning language models.

A lightweight toolkit for working with reasoning blocks in language models — preventing think collapse via loss masking, steering reasoning at inference time, and parsing model outputs.
Think collapse is a failure mode where reasoning models stop using their `<think>...</think>` blocks during or after fine-tuning.
Without intervention, the model learns to skip reasoning entirely — producing answers directly and losing the chain-of-thought behaviour it was trained on.
ThinkPack provides three targeted tools to prevent this:
- **Loss masking** (`thinkpack.mask`) — keeps reasoning blocks in the training context while masking them from the loss, so the model doesn't learn to skip them.
- **Thought steering** (`thinkpack.steer`) — injects a short primer after the opening reasoning tag at inference time, nudging the model to reason before answering.
- **Response parsing** (`thinkpack.parse`) — splits raw model output into reasoning and answer components, with flags for truncation detection.
## Installation

```shell
pip install thinkpack
```
## Modules

### thinkpack.mask — Training-time loss masking
When fine-tuning a reasoning model, naively training on all tokens can cause the model to learn to skip its reasoning block entirely. `mask()` formats your training records into a pretokenized HuggingFace dataset with selected parts of the sequence excluded from the loss.
```python
import thinkpack

dataset = thinkpack.mask(
    records=records,  # list of dicts with "instruction" and "response" keys
    tokenizer=tokenizer,
    masked=thinkpack.Mask.THINK,  # mask only the think block (default)
)
```
The `masked` parameter is a composable flag — combine sections with `|`:

| Value | Effect |
|---|---|
| `Mask.THINK` | Think block hidden from loss; model trains on prompt + response |
| `Mask.PROMPT \| Mask.THINK` | Train on response only |
| `None` | No masking; all tokens contribute to the loss |
Model-specific template handling (Qwen3's native `reasoning_content` field, OLMo-3's auto-injected opening tag) is detected automatically from the tokenizer — no manual configuration needed.
See examples/training.py for a complete training loop.
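Under the hood, loss masking of this kind typically means setting the label at each masked position to -100, the ignore index used by PyTorch's cross-entropy loss, so those tokens stay in the context but contribute nothing to the gradient. A minimal sketch of the idea — `mask_span` and `IGNORE_INDEX` are illustrative names here, not part of the thinkpack API:

```python
IGNORE_INDEX = -100  # label value excluded from cross-entropy loss in PyTorch


def mask_span(input_ids: list[int], start: int, end: int) -> list[int]:
    """Return labels equal to input_ids, with positions [start, end) hidden from the loss."""
    return [
        IGNORE_INDEX if start <= i < end else tok
        for i, tok in enumerate(input_ids)
    ]


# toy example: suppose positions 2..5 hold the tokenized think block
labels = mask_span([10, 11, 12, 13, 14, 15, 16], start=2, end=5)
# -> [10, 11, -100, -100, -100, 15, 16]
```

The model still attends over the masked tokens during the forward pass; only the loss ignores them, which is what preserves the reasoning behaviour.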
### thinkpack.steer — Inference-time thought steering
Think collapse can also be addressed at inference time by injecting a short prefix after the opening reasoning tag, seeding the model's reasoning before it generates its own thought content.
```python
# ensure the opening reasoning tag is present without seeding the thought
steered_prompts = thinkpack.steer(
    prompts=templated_prompts,  # already chat-templated strings
    tokenizer=tokenizer,
)

# seed the model's thought with a preset
steered_prompts = thinkpack.steer(
    prompts=templated_prompts,
    tokenizer=tokenizer,
    prefix=thinkpack.SimplePrefix.CONCISE,
)

# or pass any custom string
steered_prompts = thinkpack.steer(
    prompts=templated_prompts,
    tokenizer=tokenizer,
    prefix="Okay, this is a tricky one. Let me consider each part carefully.",
)
```
`SimplePrefix` provides a few basic presets:

| Preset | Text |
|---|---|
| `BRIEF` | "Okay, " |
| `STEPS` | "Okay, let me think this through step by step." |
| `CONCISE` | "Okay, let me think this through, but I need to be concise and make sure I also provide an answer." |
`steer()` handles the `PREFIXED` template quirk automatically: models like OLMo-3 whose chat template already ends with an opening reasoning tag do not get a duplicate tag injected.
See examples/inference.py for a complete inference loop.
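The core string manipulation amounts to appending the opening tag only when the template did not already emit one, then appending the optional seed text. A hypothetical sketch — `steer_prompt` is not the thinkpack implementation:

```python
def steer_prompt(prompt: str, prefix: str = "", tag: str = "<think>") -> str:
    """Append an opening reasoning tag (unless already present) plus an optional seed prefix."""
    # PREFIXED-style chat templates already end with the opening tag; don't duplicate it
    if not prompt.rstrip().endswith(tag):
        prompt = prompt + tag
    return prompt + prefix


steer_prompt("User: 2+2?\nAssistant: ")
# -> "User: 2+2?\nAssistant: <think>"
steer_prompt("User: 2+2?\nAssistant: <think>", prefix="Okay, ")
# -> "User: 2+2?\nAssistant: <think>Okay, "
```

Because generation continues from the end of the prompt string, anything appended after the opening tag becomes the start of the model's own reasoning.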
### thinkpack.parse — Response parsing
Parse raw model outputs into structured components — useful for evaluation, analysis, and hybrid decoding pipelines.
```python
# single response
parsed = thinkpack.parse(response=raw_text)
parsed.answer                   # str — text after the closing reasoning tag
parsed.reasoning                # str — content of the reasoning block
parsed.has_valid_reasoning      # bool — non-empty, completed reasoning block
parsed.has_truncated_reasoning  # bool — reasoning block started but never closed

# directly from vLLM output objects (single output -> list, list of outputs -> list[list])
parsed = thinkpack.parse_output(output=outputs)
```
Handles all four output formats:

| Format | Example |
|---|---|
| Standard | `<think>reasoning</think>answer` |
| Prefixed template | `reasoning</think>answer` (opening tag injected by template) |
| Truncated standard | `<think>reasoning...` (no closing tag) |
| Truncated prefixed | `reasoning...` (pass `prefixed=True`) |
Recognises tag variants: `think`, `thinking`, `reasoning`, `thought` (case-insensitive).
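A minimal regex-based sketch of the standard and truncated cases — illustrative only, since thinkpack's parser also covers the prefixed formats and the tag variants above:

```python
import re


def split_response(text: str) -> dict:
    """Split raw model output into reasoning and answer, flagging truncation."""
    # standard: <think>reasoning</think>answer
    m = re.match(r"\s*<think>(.*?)</think>(.*)", text, flags=re.DOTALL)
    if m:
        return {"reasoning": m.group(1).strip(), "answer": m.group(2).strip(), "truncated": False}
    # truncated standard: opening tag present but never closed
    m = re.match(r"\s*<think>(.*)", text, flags=re.DOTALL)
    if m:
        return {"reasoning": m.group(1).strip(), "answer": "", "truncated": True}
    # no reasoning block at all
    return {"reasoning": "", "answer": text.strip(), "truncated": False}


split_response("<think>2+2 is 4</think>The answer is 4.")
# -> {"reasoning": "2+2 is 4", "answer": "The answer is 4.", "truncated": False}
```

Detecting truncation separately matters for evaluation: a reasoning block cut off by the generation length limit has no answer to score.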
### thinkpack.distill — Distillation prompt building and reasoning extraction
When training data lacks reasoning traces, distill helps construct them. It builds prompts that ask a teacher model to produce a reasoning trace given a question and its known answer, then extracts and writes those traces back into your records.
```python
import thinkpack

# build prompts for a teacher model to generate reasoning traces
prompts = thinkpack.build_prompts(
    records=records,  # list of dicts with "instruction" and "response" keys
)

# after generating responses from the teacher model, extract the traces
traces = thinkpack.extract_reasoning(text=responses, tag="reasoning_trace")

# or write traces back into records in one step
records = thinkpack.update_records(
    records=records,
    responses=responses,
    field="reasoning",  # key to write extracted traces into
)
```
`extract_reasoning` accepts a single string or a list, and returns `None` where extraction fails (blank or no tag found):

```python
# single response — returns str | None
trace = thinkpack.extract_reasoning(text=response)

# list of responses — returns list[str | None]
traces = thinkpack.extract_reasoning(text=responses)
```
### thinkpack.hybrid — Hybrid decoding
Hybrid decoding separates reasoning from answering across two model variants: the base model generates the reasoning block freely (without fine-tuning influence), and the fine-tuned adapter generates the final answer conditioned on that reasoning. This can improve answer quality when the adapter has partially collapsed.
Requires vLLM with `enable_lora=True`.
```python
from thinkpack import hybrid_generate

# steered_prompts = prompts already ending with an open reasoning tag (from steer())
results = hybrid_generate(
    prompts=steered_prompts,
    llm=llm,  # vLLM LLM loaded with enable_lora=True
    lora_request=lora_request,  # adapter used for phase 2
    sampling_params=sampling_params,
)

for r in results:
    r.reasoning  # str — reasoning produced by the base model
    r.answer     # str — answer produced by the fine-tuned model
    r.raw        # str — full combined string for convenience
```
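The two-phase control flow can be sketched with stub generators standing in for the two model variants — `hybrid_decode` and both callables are purely illustrative, whereas the real implementation drives vLLM with and without the LoRA request:

```python
def hybrid_decode(prompt: str, base_generate, adapter_generate, close_tag: str = "</think>") -> dict:
    """Phase 1: base model writes the reasoning. Phase 2: adapter answers, conditioned on it."""
    reasoning = base_generate(prompt)        # continue from the open reasoning tag
    reasoning = reasoning.split(close_tag)[0]  # trim anything past the closing tag
    answer = adapter_generate(prompt + reasoning + close_tag)
    return {"reasoning": reasoning, "answer": answer, "raw": reasoning + close_tag + answer}


# stub generators in place of the base and fine-tuned models
result = hybrid_decode(
    "Q: 2+2?\n<think>",
    base_generate=lambda p: "2 plus 2 equals 4.</think>",
    adapter_generate=lambda p: "The answer is 4.",
)
# result["reasoning"] -> "2 plus 2 equals 4."
# result["answer"]    -> "The answer is 4."
```

Because phase 2 sees the full prompt plus the completed reasoning block, the adapter only has to produce the answer, sidestepping its collapsed reasoning behaviour.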
## Development

Clone the repository code:

```shell
git clone https://github.com/itsluketwist/thinkpack.git
```

We use uv for project management. Once cloned, create a virtual environment and install the project with dev dependencies:

```shell
python -m venv .venv
. .venv/bin/activate
pip install uv
uv sync
```

Use make commands to lint and test:

```shell
make lint
make test
```

Use uv to add new dependencies to the project:

```shell
uv add transformers
```

Or to upgrade dependencies:

```shell
uv sync --upgrade
```

Check typings with ty:

```shell
uv run --extra dev ty check src tests
```
## Download files
### File details: thinkpack-0.0.2.tar.gz

- Download URL: thinkpack-0.0.2.tar.gz
- Size: 24.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f797317b8ff34a68b91443aad7fdfcb76afaef606b93c1702f9d491547faf388` |
| MD5 | `cd55f3de62463d854f0ffafe0c52bb25` |
| BLAKE2b-256 | `a277cd2f08d84dc03c0345dcc1badc6cf444f5c508d55bf6072b62b5dac9da92` |
Provenance — the following attestation bundle was made for thinkpack-0.0.2.tar.gz, published via release.yaml on itsluketwist/thinkpack:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: thinkpack-0.0.2.tar.gz
- Subject digest: f797317b8ff34a68b91443aad7fdfcb76afaef606b93c1702f9d491547faf388
- Sigstore transparency entry: 1280981715
- Permalink: itsluketwist/thinkpack@1d603260d4c82186d7d6446a2c27fb0390c1fa2c
- Branch / Tag: refs/tags/v0.0.2
- Owner: https://github.com/itsluketwist
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@1d603260d4c82186d7d6446a2c27fb0390c1fa2c
- Trigger Event: release
### File details: thinkpack-0.0.2-py3-none-any.whl

- Download URL: thinkpack-0.0.2-py3-none-any.whl
- Size: 19.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cd26ca435b74147151d2648256a345ab74c2efcf5d4e6298018f01d834704ede` |
| MD5 | `a7deff1dcab9f779e259f772bffb23fa` |
| BLAKE2b-256 | `7d35aa56528945a8d77b80c9d1e6b40f73069e87b22e81d8916953a111d642f4` |
Provenance — the following attestation bundle was made for thinkpack-0.0.2-py3-none-any.whl, published via release.yaml on itsluketwist/thinkpack:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: thinkpack-0.0.2-py3-none-any.whl
- Subject digest: cd26ca435b74147151d2648256a345ab74c2efcf5d4e6298018f01d834704ede
- Sigstore transparency entry: 1280981718
- Permalink: itsluketwist/thinkpack@1d603260d4c82186d7d6446a2c27fb0390c1fa2c
- Branch / Tag: refs/tags/v0.0.2
- Owner: https://github.com/itsluketwist
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@1d603260d4c82186d7d6446a2c27fb0390c1fa2c
- Trigger Event: release