A tiny synthetic QA LLM benchmark dataset generator using LiteLLM.
TQB++ Synthetic Data Generation Toolkit
This directory contains the Python script and utilities for generating synthetic Question-Answering (QA) micro-benchmarks for the Tiny QA Benchmark++ (TQB++) suite.
Overview
The generator toolkit is a core component of TQB++, enabling the creation of bespoke tiny QA datasets. It is implemented as a Python script (approximately 200 lines of core logic) that leverages the LiteLLM library for provider-agnostic calls to various Large Language Models (LLMs).
Reference: See Section 3 ("Synthetic Data Generation Toolkit") of the TQB++ paper for a detailed description.
Features
- Provider Agnostic: Uses LiteLLM to connect to a wide range of LLM providers (e.g., OpenAI, Anthropic, Cohere, Google, and any OpenAI-compatible API).
- Customizable Output: Users can specify parameters to tailor the generated datasets:
  - --num: Number of questions to generate.
  - --languages: Comma-separated list of language codes (e.g., en,fr,ja).
  - --categories: Topics/domains for the questions (e.g., history,math,science).
  - --difficulty: Desired difficulty level (e.g., easy,medium,hard).
  - --provider: The LLM endpoint/model to use for generation.
- Structured Output: The generator is prompted to produce JSON-formatted output adhering to the TQB schema (text, label, context, tags: {category, difficulty}).
- Few-Shot Prompting: Includes few-shot exemplars in the prompt to guide the LLM on the desired format and content style.
- Schema Validation: Performs basic validation of the generated JSON structure.
- Provenance Tracking: Stamps each generated item with a SHA-256 hash for reproducibility and provenance.
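The schema and provenance hashing described above can be illustrated with a short sketch. The item below is a made-up example following the listed TQB fields, and hashing the canonical JSON serialization is one plausible way to implement the SHA-256 stamp; the real script's serialization details may differ.

```python
import hashlib
import json

# Hypothetical item following the TQB schema fields listed above;
# the values are illustrative, not taken from a real pack.
item = {
    "text": "What is the boiling point of water at sea level in Celsius?",
    "label": "100",
    "context": "Water boils at 100 degrees Celsius at standard pressure.",
    "tags": {"category": "science", "difficulty": "easy"},
}

# One plausible provenance stamp: hash the canonical JSON serialization
# (sorted keys, no extra whitespace) so the digest is reproducible.
canonical = json.dumps(item, sort_keys=True, separators=(",", ":"))
item["sha256"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(item["sha256"])  # 64-character hex digest
```

Because the serialization is canonicalized, regenerating the same item always yields the same digest, which is what makes the stamp useful for provenance checks.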
How to Run
The generator can be run from the command line, either as a script (e.g., generator.py) or by invoking the package module (python -m tinyqabenchmarkpp.generate).
Important Note on Temperature: When generating with OpenAI reasoning models (e.g., openai/o4-mini), you must set temperature=1.0 explicitly, as the script has no special-case logic for these models. The default is 0 to encourage reproducibility, as detailed in the TQB++ paper (Appendix A.1).
Here are conceptual examples:
1. Using a specific OpenAI model:
python -m tinyqabenchmarkpp.generate \
--num 10 \
--languages "en" \
--categories "science" \
--difficulty "medium" \
--provider "openai/gpt-3.5-turbo-0125" \
--output_dir "./data/packs/science_en_10.jsonl"
# Add --temperature 1.0 here if the chosen model requires it (default is 0)
2. Using an OpenRouter model:
LiteLLM allows you to use models hosted on OpenRouter. You'll need to set your OPENROUTER_API_KEY environment variable.
# Ensure OPENROUTER_API_KEY is set in your environment
export OPENROUTER_API_KEY="your_openrouter_key_here"
python -m tinyqabenchmarkpp.generate \
--num 15 \
--languages "de" \
--categories "history" \
--difficulty "easy" \
--provider "openrouter/google/gemma-7b-it" \
--output_dir "./data/packs/history_de_15.jsonl"
3. Using a local Ollama model:
To use a model served locally via Ollama, ensure your Ollama server is running and the desired model is pulled (e.g., ollama pull llama3).
python -m tinyqabenchmarkpp.generate \
--num 5 \
--languages "es" \
--categories "literature" \
--difficulty "hard" \
--provider "ollama/llama3" \
--output_dir "./data/packs/literature_es_5_hard.jsonl" \
--base_url "http://localhost:11434" # Specify your Ollama API base URL
(Note: Actual script name, package invocation, and parameters might vary slightly. Refer to the script's help message (python -m tinyqabenchmarkpp.generate --help) for precise usage and all available options, including how to pass API keys if not using environment variables.)
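Once a pack has been generated, it can be read back for inspection. The sketch below assumes the generator writes one JSON object per line (JSONL), as the .jsonl output paths in the examples above suggest; the sample lines and field values are made up for illustration.

```python
import json

def parse_pack(lines):
    """Parse an iterable of JSONL lines into a list of item dicts,
    skipping blank lines."""
    items = []
    for line in lines:
        line = line.strip()
        if line:
            items.append(json.loads(line))
    return items

# Illustrative two-line pack (values are made up):
sample = [
    '{"text": "Capital of France?", "label": "Paris", "context": "", '
    '"tags": {"category": "geography", "difficulty": "easy"}}',
    '{"text": "2 + 2?", "label": "4", "context": "", '
    '"tags": {"category": "math", "difficulty": "easy"}}',
]
pack = parse_pack(sample)
print(len(pack))  # 2
```

To read a pack from disk, pass an open file handle, e.g. `parse_pack(open("./data/packs/science_en_10.jsonl", encoding="utf-8"))`.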
Generation Process
- System Prompt: Instructs the LLM to output structured JSON according to the TQB schema.
- Few-Shot Exemplars: Provides 2 examples to the LLM.
- User Prompt: Specifies the number, language, category, and difficulty of questions required.
- LLM Call: Sends the request to the chosen LLM via LiteLLM.
- Parsing & Validation: Parses the LLM response, validates the JSON structure, and includes a retry mechanism (up to 3 attempts) for malformed outputs.
- Hashing & Storage: Stores a SHA-256 hash of each item and saves the output.
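The parse-validate-retry-hash steps above can be sketched as follows. This is not the script's actual implementation: the function and field names are assumptions, and the model call is abstracted behind a `call_llm` callable standing in for the real LiteLLM request.

```python
import hashlib
import json

# Fields assumed from the TQB schema described earlier.
REQUIRED_KEYS = {"text", "label", "context", "tags"}

def validate_items(items):
    """Basic structural check of a parsed response against the schema."""
    if not isinstance(items, list):
        return False
    return all(
        isinstance(it, dict)
        and REQUIRED_KEYS <= it.keys()
        and isinstance(it.get("tags"), dict)
        and {"category", "difficulty"} <= it["tags"].keys()
        for it in items
    )

def generate_with_retries(call_llm, max_attempts=3):
    """Call the model, parse its JSON output, and retry malformed
    responses up to max_attempts times, mirroring the process above."""
    for _ in range(max_attempts):
        raw = call_llm()  # stands in for the LiteLLM completion call
        try:
            items = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if validate_items(items):
            # Stamp each item with a SHA-256 hash of its canonical JSON.
            for it in items:
                canonical = json.dumps(it, sort_keys=True, separators=(",", ":"))
                it["sha256"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
            return items
    raise RuntimeError("model returned malformed JSON on every attempt")
```

Abstracting the model call behind a callable keeps the retry and validation logic testable without any network access.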
License
This toolkit is part of the TQB++ project and is licensed under the Apache License 2.0. See the main LICENSE file in the root of the repository for details.
File details
Details for the file tinyqabenchmarkpp-1.2.3.tar.gz.
File metadata
- Download URL: tinyqabenchmarkpp-1.2.3.tar.gz
- Upload date:
- Size: 13.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 33e21a9d6492c29472d04fdabd63c53e0a29e4f88c56a4e1969d9097e618dfa8 |
| MD5 | 3403e7c9e482d7eb680b2701777f1792 |
| BLAKE2b-256 | 3bfc5b8e9f5d88d0e9f37eb0f934ed578e860e66090a438896ea3433e5ae7b09 |
Provenance
The following attestation bundles were made for tinyqabenchmarkpp-1.2.3.tar.gz:
Publisher: publish-pypi.yml on vincentkoc/tiny_qa_benchmark_pp
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: tinyqabenchmarkpp-1.2.3.tar.gz
- Subject digest: 33e21a9d6492c29472d04fdabd63c53e0a29e4f88c56a4e1969d9097e618dfa8
- Sigstore transparency entry: 215203942
- Sigstore integration time:
- Permalink: vincentkoc/tiny_qa_benchmark_pp@10d4f8bd24d4a451dc17843b5b251b4fe1b3cfaf
- Branch / Tag: refs/tags/v1.2.3
- Owner: https://github.com/vincentkoc
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@10d4f8bd24d4a451dc17843b5b251b4fe1b3cfaf
- Trigger Event: release
File details
Details for the file tinyqabenchmarkpp-1.2.3-py3-none-any.whl.
File metadata
- Download URL: tinyqabenchmarkpp-1.2.3-py3-none-any.whl
- Upload date:
- Size: 13.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | de93323945dcb823838035789f870861a575c5a076690d69fb0499a42c22d97e |
| MD5 | 065ee3f002974f802f570fb07fa69bf3 |
| BLAKE2b-256 | 1b165e13b531097fb14108b6ce6c11956138b923447496979b42232352da35b7 |
Provenance
The following attestation bundles were made for tinyqabenchmarkpp-1.2.3-py3-none-any.whl:
Publisher: publish-pypi.yml on vincentkoc/tiny_qa_benchmark_pp
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: tinyqabenchmarkpp-1.2.3-py3-none-any.whl
- Subject digest: de93323945dcb823838035789f870861a575c5a076690d69fb0499a42c22d97e
- Sigstore transparency entry: 215203943
- Sigstore integration time:
- Permalink: vincentkoc/tiny_qa_benchmark_pp@10d4f8bd24d4a451dc17843b5b251b4fe1b3cfaf
- Branch / Tag: refs/tags/v1.2.3
- Owner: https://github.com/vincentkoc
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@10d4f8bd24d4a451dc17843b5b251b4fe1b3cfaf
- Trigger Event: release