
Qwen3-repo

Architecture-aware repo-to-context scaffold for Qwen3 and Qwen3.6. Ingests a GitHub repository into a dependency-ordered context pack, runs an agentic coding loop via any OpenAI-compatible endpoint, and includes NL2Repo / SWE-bench evaluation runners.

Why ordering matters

Qwen3.6's hybrid architecture (Gated DeltaNet + Gated Attention) processes three out of four layers with linear attention and a fixed-size recurrent state. Placing definitions before their dependents should help the recurrent state accumulate core types and interfaces before call sites reference them.

Standard Qwen3 models use full attention and can attend to any earlier position, so ordering matters less. Dependency-aware packing still pays off, though: it avoids wasting context on low-value files and keeps related code together.

Qwen's NL2Repo evaluations (score: 36.2) were run via Claude Code (temp=1.0, top_p=0.95, max_turns=900). This scaffold provides an open-source alternative with architecture-aware context formatting.

Installation

pip install qwen3-repo

Quick start

Ingest a repository

from qwen3_repo import ingest_repo

# Works with any supported Qwen3 or Qwen3.6 model
context_pack, files, budget = ingest_repo(
    "https://github.com/user/repo",
    model="Qwen3-32B",
)
print(f"{len(files)} files, ~{budget.pack_budget:,} token budget")

Rank files by importance

from qwen3_repo import rank_files
from qwen3_repo.ingester import discover_files
from pathlib import Path

files = discover_files(Path("/path/to/repo"))
ranked = rank_files(files)
for r in ranked[:10]:
    print(f"{r.score:.2f}  {r.path}")

Detect vision encoder needs

from qwen3_repo import detect_vision_needs
from pathlib import Path

# Vision encoder is only relevant for Qwen3.6 (Qwen3 has no vision)
result = detect_vision_needs(Path("/path/to/repo"))
print(result["recommendation"])
# "Consider --language-model-only. No high-relevance visual assets detected. Frees ~6 GB KV cache (~100K-150K additional context tokens)."

Run the agentic scaffold

# Start a vLLM server first:
# vllm serve Qwen/Qwen3-32B --port 8000

python -m qwen3_repo.scaffold \
    --repo-path /path/to/repo \
    --task "Fix the failing test in test_auth.py" \
    --max-turns 900 \
    --temperature 1.0 \
    --top-p 0.95
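Internally, each scaffold turn amounts to one request to the OpenAI-compatible chat completions API exposed by the vLLM server. A minimal sketch of what such a request body could look like (the helper name and system prompt here are illustrative assumptions, not the scaffold's actual internals):

```python
import json

def build_request(task, turn_history, temperature=1.0, top_p=0.95):
    """Assemble one chat-completions request body for an
    OpenAI-compatible endpoint such as the vLLM server above.
    turn_history holds prior assistant/tool messages."""
    messages = [
        # Hypothetical system prompt; the real scaffold's prompt differs.
        {"role": "system",
         "content": "You are a coding agent working inside a repository."},
        {"role": "user", "content": task},
    ]
    messages.extend(turn_history)
    return {
        "model": "Qwen/Qwen3-32B",
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
    }

body = build_request("Fix the failing test in test_auth.py", [])
print(json.dumps(body)[:80])
```

The defaults mirror the CLI flags above (temperature 1.0, top_p 0.95); POSTing this body to `http://localhost:8000/v1/chat/completions` yields one model turn.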

Run NL2Repo evaluation

python -m qwen3_repo.eval.nl2repo \
    --tasks nl2repo_tasks.json \
    --api-url http://localhost:8000/v1 \
    --model Qwen/Qwen3.6-27B \
    --output-dir nl2repo_results

Compare Claude Code vs qwen3-repo

python -m qwen3_repo.bench_compare \
    --tasks comparison_tasks.json \
    --repo-path /path/to/repo \
    --markdown

Context ordering strategy

  1. Role-based grouping: CONFIG -> TYPE_DEF -> CORE_LIB -> UTILITY -> FEATURE -> TEST -> DOC -> BUILD
  2. Dependency-aware ordering: Topological sort within each group (Kahn's algorithm, importance as tiebreaker)
  3. Budget trimming: Lowest-importance files dropped first; tests and docs trimmed before core code
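Step 2 can be sketched as a standard Kahn's-algorithm topological sort that uses an importance max-heap as the tiebreaker, so that among files whose dependencies are already placed, the most important comes first (function and variable names here are illustrative; the ingester's internals may differ):

```python
from collections import defaultdict
import heapq

def order_files(deps, importance):
    """Kahn's algorithm: emit each file only after everything it
    imports, breaking ties by importance (highest first).
    deps maps file -> set of files it imports."""
    dependents = defaultdict(set)
    indegree = {f: len(d) for f, d in deps.items()}
    for f, d in deps.items():
        for dep in d:
            dependents[dep].add(f)
    # Max-heap on importance via negated scores.
    ready = [(-importance[f], f) for f, n in indegree.items() if n == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, f = heapq.heappop(ready)
        order.append(f)
        for g in dependents[f]:
            indegree[g] -= 1
            if indegree[g] == 0:
                heapq.heappush(ready, (-importance[g], g))
    return order

deps = {
    "types.py": set(),
    "core.py": {"types.py"},
    "utils.py": set(),
    "app.py": {"core.py", "utils.py"},
}
importance = {"types.py": 3.0, "utils.py": 2.5, "core.py": 2.0, "app.py": 1.0}
print(order_files(deps, importance))
# ['types.py', 'utils.py', 'core.py', 'app.py']
```

Definitions (`types.py`) land first even though `utils.py` has no dependencies either, because the importance score breaks the tie.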

Importance scoring

The ingestion pipeline (ingest_repo) uses these signals to order files within each role group:

Signal          Weight  Description
Centrality      3.0     How many files import this file (linear-scaled)
Role weight     2.0     TYPE_DEF > CONFIG > CORE_LIB > FEATURE > TEST > DOC
Recency         1.0     Inverse linear decay from git last-modified (30-day scale)
Size penalty    0.8     Flat penalty for files over 50K tokens
Coverage bonus  0.5     Boost if corresponding test files exist
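As a rough sketch of how these weights might combine into a single score (the assumption that centrality, role weight, and recency are each pre-normalized to [0, 1] is mine, not documented):

```python
def importance_score(centrality, role_weight, recency, oversized, has_tests):
    """Combine the table's signals into one score.
    Signals are assumed normalized to [0, 1]; the size penalty
    is flat and the coverage bonus is binary."""
    score = 3.0 * centrality + 2.0 * role_weight + 1.0 * recency
    if oversized:   # file over ~50K tokens
        score -= 0.8
    if has_tests:   # corresponding test file exists
        score += 0.5
    return score

# A moderately imported type definition, somewhat recent, with tests:
print(importance_score(0.5, 1.0, 0.25, False, True))  # 4.25
```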

The standalone rank_files() utility uses a separate scoring system with six signals (centrality, recency, coverage, role weight, structural depth, size efficiency) and customizable weights. See ranker.py for details.

Vision encoder decision (Qwen3.6 only)

Qwen3 models have no vision encoder. For Qwen3.6:

Repo contents               Vision encoder                     Reason
No images                   Disabled (--language-model-only)   Frees ~6 GB KV cache
Design mockups/screenshots  Enabled                            Model needs to see design intent
SVG diagrams                Enabled                            Vision helps with SVG understanding
Canvas/WebGL code           Enabled                            Vision helps understand visual output
Only icons/favicons         Disabled                           Low relevance, not worth KV cost
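The table's decision logic could be approximated with a simple extension-and-filename heuristic (a sketch under assumed rules; `detect_vision_needs` may weigh assets quite differently):

```python
from pathlib import Path

HIGH_RELEVANCE = {".png", ".jpg", ".jpeg", ".svg", ".webp"}
LOW_RELEVANCE_HINTS = ("favicon", "icon")

def should_enable_vision(paths):
    """Return True if the repo contains visual assets that might
    justify the ~6 GB KV-cache cost of the vision encoder."""
    for p in map(Path, paths):
        if p.suffix.lower() in HIGH_RELEVANCE:
            name = p.stem.lower()
            # Icons and favicons alone don't justify the KV cost.
            if not any(h in name for h in LOW_RELEVANCE_HINTS):
                return True
    return False

print(should_enable_vision(["assets/favicon.ico", "src/app.py"]))       # False
print(should_enable_vision(["design/mockup-login.png", "src/app.py"]))  # True
```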

Supported models

Qwen3 (standard transformer, full attention)

Model            Context
Qwen3-235B-A22B  32K native, 128K extended
Qwen3-32B        32K native, 128K extended
Qwen3-30B-A3B    32K native, 128K extended
Qwen3-14B        32K native, 128K extended
Qwen3-8B         32K native, 128K extended
Qwen3-4B         32K native
Qwen3-1.7B       32K native
Qwen3-0.6B       32K native

Qwen3.6 (hybrid Gated DeltaNet + Gated Attention)

Model            Layers  Layout                        Context
Qwen3.6-27B      64      16 x (3 x GDN + 1 x GA)       262K native, 1M extended
Qwen3.6-35B-A3B  40      10 x (3 x GDN + 1 x GA), MoE  262K native

License

Apache 2.0

Download files

Download the file for your platform.

Source Distribution

qwen3_repo-0.1.2.tar.gz (37.6 kB)

Uploaded Source

Built Distribution

qwen3_repo-0.1.2-py3-none-any.whl (39.1 kB)

Uploaded Python 3

File details

Details for the file qwen3_repo-0.1.2.tar.gz.

File metadata

  • Download URL: qwen3_repo-0.1.2.tar.gz
  • Size: 37.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for qwen3_repo-0.1.2.tar.gz
Algorithm Hash digest
SHA256 260d84dfd8680ece6f9b2ee47794d9700c117826395acae16e21216cf9c6ca4d
MD5 99150fc66e74f03ecfa184de766d2fbf
BLAKE2b-256 e3a141d1ca644fd800e629a1627dc2e4be1de13d6d7fec2bd403a05422acfb5e

Provenance

The following attestation bundles were made for qwen3_repo-0.1.2.tar.gz:

Publisher: publish.yml on ArkaD171717/Qwen3-Repo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file qwen3_repo-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: qwen3_repo-0.1.2-py3-none-any.whl
  • Size: 39.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for qwen3_repo-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 cf0bf7ac25e58b793ef280df41a1776def6e60565d888b1b35e37e725955ee5e
MD5 9d91e941b6dfe5511dc81b6f1ecfb53a
BLAKE2b-256 b900c222b931cc079bfe25c0fcebec27e587e2c2e3382f9957f3b465c47e8194

Provenance

The following attestation bundles were made for qwen3_repo-0.1.2-py3-none-any.whl:

Publisher: publish.yml on ArkaD171717/Qwen3-Repo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
