C2|Q>: Classical-to-Quantum Software Development Framework
Overview
C2|Q> is a modular framework for moving from classical problem specifications to quantum-ready problem representations, circuit generation, execution, and report generation.
This repository accompanies the article:
"C2|Q>: A Robust Framework for Bridging Classical and Quantum Software Development"
Accepted at ACM Transactions on Software Engineering and Methodology (TOSEM) (in press).
Preprint: arXiv:2510.02854
Artifact-review companion documents:
What To Run
Use these commands as the main entry points for the paper-backed artifact paths:
| Purpose | Command | Main output |
|---|---|---|
| Optional Docker image build | `make docker-build` | Docker image `c2q:latest` |
| Experiment 1: parser training assets | notebook/manual assets | `src/parser/` |
| Experiment 2: recommender multi-device variation | `make recommender-maxcut` | `artifacts/recommender_maxcut/` |
| Experiment 3: smoke reproduction | `make reproduce-smoke` | `artifacts/reproduce/smoke/` |
| Experiment 3: full paper reproduction | `make reproduce-paper` | `artifacts/reproduce/paper/` |
| Experiment 4: dataset validation | `make validate-dataset` | `artifacts/parser_validation/` |
All generated outputs from the make-based experiment paths are written under artifacts/.
Repository Layout
- `src/` – framework source code
- `src/parser/` – parser code, training notebook, checkpoints, model helpers
- `src/c2q-dataset/` – JSON inputs and dataset assets
- `tools/` – reproducibility and environment helpers
- `scripts/` – experiment orchestration scripts
- `artifacts/` – generated outputs from reproducibility commands
Primary Reviewer Path
The primary reviewer path is a normal source checkout with a local virtual environment.
```shell
git clone https://github.com/C2-Q/C2Q.git
cd C2Q
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
python -m pip install --upgrade pip
python -m pip install -e ".[dev]"
```
Environment sanity check:
make doctor
Parser Model Setup
The parser model is not bundled in the GitHub repository or the PyPI package because of its file size.
Published model archive:
Recommended installation path:
1. Download the archive in a browser from the Zenodo link above.
2. Install it with:
   `python tools/setup_model.py --archive /path/to/saved_models_2025_12.zip --model-path src/parser/saved_models_2025_12`
3. Verify it:
   `make model-check`
Optional helper:
make model-download
Use `make model-download` only as a convenience path; manually downloading the archive and installing it with `--archive` is the most robust route across environments.
Required files inside the model directory:
- `config.json`
- `tokenizer_config.json`
- one weight file: `model.safetensors` or `pytorch_model.bin`
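`make model-check` performs the actual verification; as a quick illustration of what such a check must establish, the sketch below scans a model directory for the files listed above. The `check_model_dir` helper is hypothetical and not part of the framework:

```python
from pathlib import Path

# Files the parser model directory must contain (per the list above).
REQUIRED = ["config.json", "tokenizer_config.json"]
WEIGHTS = ["model.safetensors", "pytorch_model.bin"]

def check_model_dir(model_dir):
    """Return a list of problems found in a parser model directory."""
    root = Path(model_dir)
    problems = [f"missing {name}" for name in REQUIRED if not (root / name).is_file()]
    # Either weight format is acceptable; at least one must be present.
    if not any((root / name).is_file() for name in WEIGHTS):
        problems.append("missing weight file (model.safetensors or pytorch_model.bin)")
    return problems

if __name__ == "__main__":
    print(check_model_dir("src/parser/saved_models_2025_12"))
```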
Optional Docker Path
Use Docker only if you want a clean path that does not touch your current .venv.
Build the Docker image:
make docker-build
Run the main artifact commands in Docker:
```shell
make docker-smoke
make docker-recommender-maxcut
make docker-validate-dataset
make docker-paper
```
Notes:
- Docker commands use a separate virtual environment at `/tmp/c2q-venv` inside the container
- Your existing `.venv` is not reused
- Outputs still appear in the repository under `artifacts/`
- The parser model is still required; see the next section
- `make docker-smoke` is the recommended first Docker check; longer Docker targets are available but slower
Experiments Used In The Paper
Experiment 1: Parser Training and Saved Model
This experiment is represented by the parser training notebook and its training outputs.
Main assets:
- notebook: `src/parser/parser_train_results_12_1.ipynb`
- intermediate checkpoints: `src/parser/results/`
- released trained model archive: Zenodo model zip
This experiment is notebook-driven rather than make-driven.
Experiment 2: Recommender Multi-Device Variation
Run:
make recommender-maxcut
Outputs:
- raw recommender CSVs and plots: `artifacts/recommender_maxcut/raw_csv/`
- post-processed Algorithm 1 outputs: `artifacts/recommender_maxcut/algorithm1/`
Key files:
- `artifacts/recommender_maxcut/raw_csv/errors_wide.csv`
- `artifacts/recommender_maxcut/raw_csv/times_wide.csv`
- `artifacts/recommender_maxcut/raw_csv/prices_wide.csv`
- `artifacts/recommender_maxcut/raw_csv/recommender_output_errors.pdf`
- `artifacts/recommender_maxcut/raw_csv/recommender_output_prices.pdf`
- `artifacts/recommender_maxcut/raw_csv/recommender_output_times.pdf`
- `artifacts/recommender_maxcut/algorithm1/winners.csv`
- `artifacts/recommender_maxcut/algorithm1/details.csv`
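After the run completes, these CSVs can be inspected with the standard library. A minimal sketch; the column names inside `winners.csv` are not documented here, so the helper only reports whatever columns it finds:

```python
import csv
from pathlib import Path

def load_rows(csv_path):
    """Read a recommender output CSV into (column names, list of row dicts)."""
    with open(csv_path, newline="") as fh:
        reader = csv.DictReader(fh)
        return reader.fieldnames, list(reader)

if __name__ == "__main__":
    path = Path("artifacts/recommender_maxcut/algorithm1/winners.csv")
    if path.exists():
        columns, rows = load_rows(path)
        print(f"{len(rows)} rows, columns: {columns}")
```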
Experiment 3: Report Reproduction
Smoke run:
make reproduce-smoke
Full paper run:
make reproduce-paper
The full paper run takes roughly 10 hours.
Outputs:
- smoke path: `artifacts/reproduce/smoke/`
- paper path: `artifacts/reproduce/paper/`
This path reproduces the artifacts corresponding to the C2Q data record used in paper evaluation.
Experiment 4: Dataset Validation
Run:
make validate-dataset
Outputs:
- implementation-level validation: `artifacts/parser_validation/implementation/`
- algorithmic/structural validation: `artifacts/parser_validation/diversity/`
Key files:
- `artifacts/parser_validation/implementation/snippet_metrics.csv`
- `artifacts/parser_validation/implementation/family_summary.csv`
- `artifacts/parser_validation/implementation/syntax_failures.csv`
- `artifacts/parser_validation/diversity/summary_by_tag.csv`
- `artifacts/parser_validation/diversity/algorithm_diversity_summary.csv`
- `artifacts/parser_validation/diversity/algorithm_signals_per_instance.csv`
Tests
Fast default tests:
PYTHONPATH=. pytest
Model-backed tests:
make verify-model
PyPI Installation
For lightweight CLI/API use without cloning the repo:
```shell
python -m pip install --upgrade pip
python -m pip install --upgrade c2q-framework
```
Optional extras:
```shell
python -m pip install --upgrade "c2q-framework[parser]"
python -m pip install --upgrade "c2q-framework[recommender]"
python -m pip install --upgrade "c2q-framework[artifact]"
python -m pip install --upgrade "c2q-framework[cloud]"
```
Use them as follows:
- `parser`: local parser model support
- `recommender`: CSV export and experiment helpers
- `artifact`: paper-backed local artifact path from a source checkout
- `cloud`: optional live-provider SDK integrations
Check the installed version:
python -m pip show c2q-framework
CLI help:
c2q-json -h
Programming Interface
The current import namespace is `src.*`.
JSON DSL from Python:
```python
from src.json_engine import load_input, normalise_task

task = load_input("min_add.json")
family, instance, params, goal = normalise_task(task)
print(family, instance)
```
Parser usage:
```python
from src.parser.parser import Parser

parser = Parser(model_path="/path/to/saved_models_2025_12")
family, data = parser.parse("def add(a,b):\n return a+b\n")
print(family, type(data).__name__)
```
The parser API requires the parser extra in PyPI installs.
Generate a report via Python API:
```python
from src.graph import Graph
from src.problems.maximal_independent_set import MIS

edges = [[0, 1], [1, 2], [2, 3], [0, 3], [0, 2]]
problem = MIS(Graph(edges).G)
problem.report_latex(output_path="API_demo_report")
```
JSON DSL CLI Example
Repository example:
c2q-json --input src/c2q-dataset/inputs/json/mis/mis_04.json
This command parses the JSON problem, generates the quantum workflow, and writes a PDF report.
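The general shape of such an input can be inferred from `normalise_task`, which yields a family, an instance, parameters, and a goal. The sketch below writes a made-up MIS-style file; the field names are assumptions, not the real C2|Q> schema, so treat the repository's own `mis_04.json` as authoritative:

```python
import json
from pathlib import Path

# Hypothetical input shape; field names are guesses, not the real C2|Q> schema.
task = {
    "family": "mis",
    "instance": {"edges": [[0, 1], [1, 2], [2, 3], [0, 3], [0, 2]]},
    "params": {},
    "goal": "maximize",
}

path = Path("my_mis_input.json")
path.write_text(json.dumps(task, indent=2))
print(json.loads(path.read_text())["family"])
```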
Regenerate the maintained JSON DSL example set under src/c2q-dataset/inputs/json_dsl/:
make json-dsl-examples
Generate PDF reports for a curated smoke subset of those JSON DSL examples:
make reproduce-json-smoke
The curated smoke subset currently includes one example each for ADD, Factor, MaxCut, and MIS.
Generate PDF reports for the full JSON DSL example set:
make reproduce-json-full
Outputs are written to:
- smoke: `artifacts/reproduce/json/smoke/`
- full: `artifacts/reproduce/json/full/`
The full JSON reproduction path is not run by default because it takes roughly 2 hours.
Architecture
Detailed component diagrams are available in src/assets/classiq_flow.pdf.
Contact
For research collaboration or substantial contributions:
- boshuai.ye@oulu.fi
- Teemu.Pihkakoski@oulu.fi
- arif.khan@oulu.fi (Principal Investigator)
- matti.silveri@oulu.fi (Principal Investigator)
- liangp@whu.edu.cn (Outside Collaborator, Peng Liang)
License
This project is licensed under the Apache 2.0 License.