

C2|Q>: Classical-to-Quantum Software Development Framework

License: Apache-2.0 · Python 3.10-3.12 · Status: Research Prototype

Overview

C2|Q> is a modular framework for moving from classical problem specifications to quantum-ready problem representations, circuit generation, execution, and report generation.

This repository accompanies the article:

"C2|Q>: A Robust Framework for Bridging Classical and Quantum Software Development"
Accepted at ACM Transactions on Software Engineering and Methodology (TOSEM) (in press).
Preprint: arXiv:2510.02854

Artifact-review companion documents:

What To Run

Use these commands as the main entry points for the paper-backed artifact paths:

Purpose                                           Command                  Main output
Optional Docker image build                       make docker-build        Docker image c2q:latest
Experiment 1: parser training assets              notebook/manual          src/parser/
Experiment 2: recommender multi-device variation  make recommender-maxcut  artifacts/recommender_maxcut/
Experiment 3: smoke reproduction                  make reproduce-smoke     artifacts/reproduce/smoke/
Experiment 3: full paper reproduction             make reproduce-paper     artifacts/reproduce/paper/
Experiment 4: dataset validation                  make validate-dataset    artifacts/parser_validation/

All generated outputs from the make-based experiment paths are written under artifacts/.

Repository Layout

  • src/ – framework source code
  • src/parser/ – parser code, training notebook, checkpoints, model helpers
  • src/c2q-dataset/ – JSON inputs and dataset assets
  • tools/ – reproducibility and environment helpers
  • scripts/ – experiment orchestration scripts
  • artifacts/ – generated outputs from reproducibility commands

Primary Reviewer Path

The primary reviewer path is a normal source checkout with a local virtual environment.

git clone https://github.com/C2-Q/C2Q.git
cd C2Q
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
python -m pip install --upgrade pip
python -m pip install -e ".[dev]"

Environment sanity check:

make doctor
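If make is unavailable on your system, a minimal hand-rolled check along these lines can confirm the interpreter falls in the supported 3.10-3.12 range (this snippet is illustrative and not part of the repository; make doctor is the supported check):

```python
import sys

def python_supported(version_info=sys.version_info):
    # True when the interpreter is in the supported 3.10, 3.11, or 3.12 series.
    major, minor = version_info[0], version_info[1]
    return major == 3 and 10 <= minor <= 12

print("Python", sys.version.split()[0], "supported:", python_supported())
```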

Parser Model Setup

The parser model is not bundled on GitHub or PyPI because of its file size.

Published model archive:

Recommended installation path:

  1. Download the archive in a browser from the Zenodo link above.
  2. Install it with:
python tools/setup_model.py --archive /path/to/saved_models_2025_12.zip --model-path src/parser/saved_models_2025_12
  3. Verify it:
make model-check

Optional helper:

make model-download

Use make model-download only as a convenience path. Manual archive download plus --archive is the most robust installation route across environments.

Required files inside the model directory:

  • config.json
  • tokenizer_config.json
  • one weight file: model.safetensors or pytorch_model.bin
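As a cross-check of those requirements, a directory can be tested with a small sketch like the following (model_dir_ok is a hypothetical helper, not part of the repository; make model-check remains the supported verification path):

```python
from pathlib import Path

REQUIRED_CONFIGS = ("config.json", "tokenizer_config.json")
WEIGHT_FILES = ("model.safetensors", "pytorch_model.bin")

def model_dir_ok(path):
    # A usable model directory has both config files and at least one weight file.
    d = Path(path)
    return (all((d / name).is_file() for name in REQUIRED_CONFIGS)
            and any((d / name).is_file() for name in WEIGHT_FILES))
```

After installation, model_dir_ok("src/parser/saved_models_2025_12") should return True.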

Optional Docker Path

Use Docker only if you want a clean path that does not touch your current .venv.

Build the Docker image:

make docker-build

Run the main artifact commands in Docker:

make docker-smoke
make docker-recommender-maxcut
make docker-validate-dataset
make docker-paper

Notes:

  • Docker commands use a separate virtual environment path: /tmp/c2q-venv inside the container
  • Your existing .venv is not reused
  • Outputs still appear in the repository under artifacts/
  • The parser model is still required; see the next section
  • make docker-smoke is the recommended first Docker check; longer Docker targets are available but slower

Experiments Used In The Paper

Experiment 1: Parser Training and Saved Model

This experiment is represented by the parser training notebook and its training outputs.

Main assets:

  • notebook: src/parser/parser_train_results_12_1.ipynb
  • intermediate checkpoints: src/parser/results/
  • released trained model archive: Zenodo model zip

This experiment is notebook-driven rather than make-driven.

Experiment 2: Recommender Multi-Device Variation

Run:

make recommender-maxcut

Outputs:

  • raw recommender CSVs and plots: artifacts/recommender_maxcut/raw_csv/
  • post-processed Algorithm 1 outputs: artifacts/recommender_maxcut/algorithm1/

Key files:

  • artifacts/recommender_maxcut/raw_csv/errors_wide.csv
  • artifacts/recommender_maxcut/raw_csv/times_wide.csv
  • artifacts/recommender_maxcut/raw_csv/prices_wide.csv
  • artifacts/recommender_maxcut/raw_csv/recommender_output_errors.pdf
  • artifacts/recommender_maxcut/raw_csv/recommender_output_prices.pdf
  • artifacts/recommender_maxcut/raw_csv/recommender_output_times.pdf
  • artifacts/recommender_maxcut/algorithm1/winners.csv
  • artifacts/recommender_maxcut/algorithm1/details.csv
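winners.csv records a best device per metric row. The selection can be pictured as a simple argmin over one row, as in the sketch below; the device names and error values are made up, and the actual Algorithm 1 output may combine error, time, and price rather than a single metric:

```python
def pick_winner(metrics):
    # Return the device with the smallest metric value (argmin over one CSV row).
    return min(metrics, key=metrics.get)

# Illustrative error values for one problem size; not real recommender output.
row = {"device_a": 0.12, "device_b": 0.09, "device_c": 0.27}
print(pick_winner(row))  # device_b
```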

Experiment 3: Report Reproduction

Smoke run:

make reproduce-smoke

Full paper run:

make reproduce-paper

The full paper run takes roughly 10 hours.

Outputs:

  • smoke path: artifacts/reproduce/smoke/
  • paper path: artifacts/reproduce/paper/

This path reproduces the artifacts corresponding to the C2Q data record used in the paper evaluation.

Experiment 4: Dataset Validation

Run:

make validate-dataset

Outputs:

  • implementation-level validation: artifacts/parser_validation/implementation/
  • algorithmic/structural validation: artifacts/parser_validation/diversity/

Key files:

  • artifacts/parser_validation/implementation/snippet_metrics.csv
  • artifacts/parser_validation/implementation/family_summary.csv
  • artifacts/parser_validation/implementation/syntax_failures.csv
  • artifacts/parser_validation/diversity/summary_by_tag.csv
  • artifacts/parser_validation/diversity/algorithm_diversity_summary.csv
  • artifacts/parser_validation/diversity/algorithm_signals_per_instance.csv

Tests

Fast default tests:

PYTHONPATH=. pytest

Model-backed tests:

make verify-model

PyPI Installation

For lightweight CLI/API use without cloning the repo:

python -m pip install --upgrade pip
python -m pip install --upgrade c2q-framework

Optional extras:

python -m pip install --upgrade "c2q-framework[parser]"
python -m pip install --upgrade "c2q-framework[recommender]"
python -m pip install --upgrade "c2q-framework[artifact]"
python -m pip install --upgrade "c2q-framework[cloud]"

Use them as follows:

  • parser: local parser model support
  • recommender: CSV export and experiment helpers
  • artifact: paper-backed local artifact path from a source checkout
  • cloud: optional live-provider SDK integrations

Check the installed version:

python -m pip show c2q-framework

CLI help:

c2q-json -h

Programming Interface

Current import namespace is src.*.

JSON DSL from Python:

from src.json_engine import load_input, normalise_task

task = load_input("min_add.json")
family, instance, params, goal = normalise_task(task)
print(family, instance)

Parser usage:

from src.parser.parser import Parser

parser = Parser(model_path="/path/to/saved_models_2025_12")
family, data = parser.parse("def add(a,b):\n    return a+b\n")
print(family, type(data).__name__)

The parser API requires the parser extra when installing from PyPI.

Generate a report via Python API:

from src.graph import Graph
from src.problems.maximal_independent_set import MIS

edges = [[0, 1], [1, 2], [2, 3], [0, 3], [0, 2]]
problem = MIS(Graph(edges).G)
problem.report_latex(output_path="API_demo_report")
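As a sanity check on that example, the maximum independent set of this 4-node graph can be brute-forced with the standard library alone. This is not the framework's quantum workflow, just an exhaustive classical cross-check that is only feasible for tiny graphs:

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
nodes = sorted({v for e in edges for v in e})
edge_set = {frozenset(e) for e in edges}

def is_independent(subset):
    # No two vertices of an independent set may share an edge.
    return all(frozenset(p) not in edge_set for p in combinations(subset, 2))

# Try subset sizes from largest to smallest; the first hit is a maximum independent set.
best = next(set(c)
            for r in range(len(nodes), 0, -1)
            for c in combinations(nodes, r)
            if is_independent(c))
print(best)  # {1, 3} for this edge list
```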

JSON DSL CLI Example

Repository example:

c2q-json --input src/c2q-dataset/inputs/json/mis/mis_04.json

This command parses the JSON problem, generates the quantum workflow, and writes a PDF report.

Regenerate the maintained JSON DSL example set under src/c2q-dataset/inputs/json_dsl/:

make json-dsl-examples

Generate PDF reports for a curated smoke subset of those JSON DSL examples:

make reproduce-json-smoke

The curated smoke subset currently includes one example each for ADD, Factor, MaxCut, and MIS.

Generate PDF reports for the full JSON DSL example set:

make reproduce-json-full

Outputs are written to:

  • smoke: artifacts/reproduce/json/smoke/
  • full: artifacts/reproduce/json/full/

The full JSON reproduction path is not run by default because it takes roughly 2 hours.

Architecture

Framework Overview

Detailed component diagrams are available in src/assets/classiq_flow.pdf.

Contact

For research collaboration or substantial contributions:

License

This project is licensed under the Apache 2.0 License.

Download files


Source Distribution

c2q_framework-0.1.2.tar.gz (211.4 kB)

Uploaded Source

Built Distribution


c2q_framework-0.1.2-py3-none-any.whl (228.0 kB)

Uploaded Python 3

File details

Details for the file c2q_framework-0.1.2.tar.gz.

File metadata

  • Download URL: c2q_framework-0.1.2.tar.gz
  • Size: 211.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.11

File hashes

Hashes for c2q_framework-0.1.2.tar.gz
Algorithm Hash digest
SHA256 0da0d6e0f7ff7a2c7542b9019b3c6bbab6f674087d530967ca7dadb4503b7c53
MD5 5c274d74b60488a5d819fc00eac6d5a2
BLAKE2b-256 f2be84ace84100cabcc1ffbfe0add91dd1a5111296b735d1b6fcb6a9bf1d493a


File details

Details for the file c2q_framework-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: c2q_framework-0.1.2-py3-none-any.whl
  • Size: 228.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.11

File hashes

Hashes for c2q_framework-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 5f452bcd6247ef3e693fad4c369214e260971299e9549e8f3f3e8b03cd9eae5c
MD5 535ae3f9ab83f33e4e1deb8d67b97cf0
BLAKE2b-256 e3cb3c3d390ca4013e5d5f433e1e430389953f4540a09058df6f2b145c697c1e

