
Project description

SATQuest: A Verifier for Logical Reasoning Evaluation and Reinforcement Fine-Tuning of LLMs



🧰 What's Inside

  • CNF-based problems spanning three orthogonal dimensions: instance scale, problem type, and question format.
  • A PySAT-backed verifier that scores binary answers and exposes solver metadata for reproducible diagnostics.
  • Ready-to-use datasets on Hugging Face, evaluation scripts, and RFT scripts.

⚡ Quickstart

# Install dependencies
# pip install datasets satquest

from datasets import load_dataset

from satquest import CNF, create_problem, create_question

item = load_dataset("sdpkjc/SATQuest", split="test")[0]
cnf = CNF(dimacs=item["sat_dimacs"])
# Optionally randomize variable/clause order:
# cnf.shuffle()

problem = create_problem("SATSP", cnf)  # or 'SATDP', 'MaxSAT', 'MCS', 'MUS'
question = create_question("math")  # or 'dimacs', 'story', 'dualstory'

prompt = problem.accept(question)
"""
Given a CNF formula with 3 variables and 12 clauses in mathematical notation:

(x_1 \lor x_2 \lor x_3) \land (x_3 \lor \neg x_1 \lor x_2) \land (x_1 \lor x_3 \lor \neg x_2) \land (x_1 \lor \neg x_2) \land (x_3 \lor x_1 \lor \neg x_2) \land (x_3 \lor \neg x_1 \lor x_2) \land (\neg x_3 \lor \neg x_1) \land (\neg x_1 \lor x_2 \lor x_3) \land (\neg x_2 \lor \neg x_3) \land (\neg x_2 \lor x_3 \lor x_1) \land (x_1 \lor \neg x_3) \land (\neg x_3 \lor \neg x_2 \lor \neg x_1)

Find a satisfying assignment for the formula.
Output a binary string of length 3 ('1' for true, '0' for false).
"""

answer = problem.solution  # reference answer
# 110

reward = int(problem.check(answer))  # 1 if the answer is correct, 0 otherwise
# 1

🏭 Dataset Generation

The generated datasets are hosted on Hugging Face.

# Run dataset generation
uv run --group gen gen_cnf_dataset.py --hf-entity {YOUR_HF_ENTITY} --seed 9527

uv run --group gen gen_cnf_rft_dataset.py --hf-entity {YOUR_HF_ENTITY} --seed 9527
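The exact sampling logic lives in the generation scripts in the repo; conceptually, an instance is a random CNF serialized in DIMACS format. A minimal, seeded random-3-CNF generator (illustrative only, not `gen_cnf_dataset.py`) might look like:

```python
import random

def random_3cnf(num_vars, num_clauses, seed=9527):
    """Emit a random 3-CNF instance as a DIMACS string.
    Illustrative sketch; the real generation logic is in the SATQuest repo."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    lines = [f"p cnf {num_vars} {num_clauses}"]
    for _ in range(num_clauses):
        picked = rng.sample(range(1, num_vars + 1), 3)  # 3 distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in picked]  # random polarity
        lines.append(" ".join(map(str, clause)) + " 0")  # DIMACS 0-terminator
    return "\n".join(lines)

print(random_3cnf(3, 12).splitlines()[0])  # -> "p cnf 3 12"
```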

📊 Evaluation

# Run evaluation
uv run --group eval eval_model.py \
    --exp-name {YOUR_EXP_NAME} \
    --wandb-project "SATQuest-Eval" \
    --hf-dataset-name "sdpkjc/SATQuest" \
    --p-type-list SATSP \
    --q-type-list math \
    --llm-model "gpt-4o" \
    --max-tokens 16384 \
    --temperature 0.6 \
    --num-example 10 \
    --stream True \
    --cnf-shuffle False \
    --n-repeat 1

The evaluation results will be logged to Weights & Biases.
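Since each verifier check yields a 0/1 reward, per-configuration accuracy is just the mean reward over examples. A small aggregation sketch (assumed record shape, not the `eval_model.py` implementation):

```python
from collections import defaultdict

def accuracy_by_config(records):
    """records: iterable of (p_type, q_type, reward) with reward in {0, 1}.
    Returns mean accuracy per (problem type, question format) pair.
    Sketch only; the record shape is an assumption for illustration."""
    totals = defaultdict(lambda: [0, 0])  # (correct count, total count)
    for p_type, q_type, reward in records:
        totals[(p_type, q_type)][0] += reward
        totals[(p_type, q_type)][1] += 1
    return {key: correct / n for key, (correct, n) in totals.items()}

runs = [("SATSP", "math", 1), ("SATSP", "math", 0), ("SATSP", "dimacs", 1)]
print(accuracy_by_config(runs))
```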

🏄 Reinforcement Fine-Tuning (RFT)

# Run RFT training
CUDA_VISIBLE_DEVICES=0,1,2,3 nohup uv run --group rft trl vllm-serve \
    --model "Qwen/Qwen2.5-7B-Instruct" \
    --tensor_parallel_size 4 \
    --max_model_len 16384 \
    --gpu_memory_utilization 0.9 \
    --enable_prefix_caching True &

CUDA_VISIBLE_DEVICES=4,5,6,7 uv run --group rft accelerate launch --num-processes 4 --config-file zero3.yaml rft.py \
    --model-id "Qwen/Qwen2.5-7B-Instruct" \
    --p-list SATSP --q-list math \
    --server-ip "0.0.0.0" \
    --exp-name "test"
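During RFT, the verifier's binary check becomes the reward signal. A reward callable with roughly the shape TRL's GRPO trainer expects (a batch of completions in, a list of floats out) could be sketched as below; the answer extraction here is a simplified stand-in, since the actual `rft.py` scores via `problem.check`:

```python
import re

def binary_answer_reward(completions, references):
    """Sketch of a SATQuest-style reward: pull the last binary string of the
    right length out of each completion and score 1.0 on exact match with the
    reference assignment, else 0.0. Illustrative; not the rft.py implementation."""
    rewards = []
    for text, ref in zip(completions, references):
        # Find all binary substrings whose length matches the reference.
        candidates = re.findall(r"[01]{%d}" % len(ref), text)
        rewards.append(1.0 if candidates and candidates[-1] == ref else 0.0)
    return rewards

outs = ["The assignment is 110.", "I think 011 works."]
print(binary_answer_reward(outs, ["110", "110"]))  # -> [1.0, 0.0]
```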

📝 Citing SATQuest

@misc{zhao2025satquestverifierlogicalreasoning,
    title={SATQuest: A Verifier for Logical Reasoning Evaluation and Reinforcement Fine-Tuning of LLMs}, 
    author={Yanxiao Zhao and Yaqian Li and Zihao Bo and Rinyoichi Takezoe and Haojia Hui and Mo Guang and Lei Ren and Xiaolin Qin and Kaiwen Long},
    year={2025},
    eprint={2509.00930},
    archivePrefix={arXiv},
    primaryClass={cs.AI},
    url={https://arxiv.org/abs/2509.00930}, 
}

Project details


Download files

Download the file for your platform.

Source Distribution

satquest-0.2.0a0.tar.gz (2.9 MB)

Uploaded Source

Built Distribution

satquest-0.2.0a0-py3-none-any.whl (12.7 kB)

Uploaded Python 3

File details

Details for the file satquest-0.2.0a0.tar.gz.

File metadata

  • Download URL: satquest-0.2.0a0.tar.gz
  • Size: 2.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.8.22

File hashes

Hashes for satquest-0.2.0a0.tar.gz
Algorithm Hash digest
SHA256 8a133ba672056a26d569f5cf4a7fc5c8c81d7b9e57b9c5f2819734d4e22711f6
MD5 64f45b9316f16f7367b5440a6fc8c006
BLAKE2b-256 e3f57358b4d09a3ddf08a0da99f8650a75261122302febb38ba560f7b2039527

File details

Details for the file satquest-0.2.0a0-py3-none-any.whl.

File metadata

  • Download URL: satquest-0.2.0a0-py3-none-any.whl
  • Size: 12.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.8.22

File hashes

Hashes for satquest-0.2.0a0-py3-none-any.whl
Algorithm Hash digest
SHA256 ce42219cbdd6e27ffeab9721f4581c0da3339bbfe83e6e33ad5e7afcf23504aa
MD5 00212305b44d50bf90f8572ba0070855
BLAKE2b-256 7065f8e5f8d486902f0d7a9ed665bcaa9156cdd0ec3f9332f1425f4bb833ba6c
