
Quodeq

The quality code compass

Your guide to drive any codebase to excellence.

v1.0.0

Quodeq scans any codebase with AI and scores it across six quality dimensions -- Security, Reliability, Maintainability, Performance, Flexibility, and Usability -- based on ISO 25010. Get grades, find violations, fix what matters.

Watch the demo · Website · Releases


Why now? AI models can now autonomously find and exploit zero-day vulnerabilities across operating systems, browsers, and web applications. Thousands of previously unknown flaws have been uncovered in weeks, not years. The code you ship today will be read by models that can spot what humans miss. If your codebase carries security debt, reliability gaps, or maintenance shortcuts, the window to fix them is shrinking fast. Quodeq helps you prepare your software -- find what's wrong, enforce quality standards, and harden your code before the next generation of models is used against it.


Getting Started

pipx install quodeq    # Install quodeq
quodeq dashboard       # Launch the dashboard

That's it. The dashboard lets you point to any project and run evaluations from the UI.

Also available via pip install quodeq, or download the macOS DMG / Windows installer from Releases.

Requirements

  • Python 3.12+ -- runtime (brew install python or download)
  • Node.js 18+ -- dashboard UI (brew install node or download)

AI Providers

Quodeq works with local models and cloud AI CLIs. Choose what fits your workflow:

Local models (free, private, your code never leaves your machine)

  • Ollama -- ollama pull gemma3:27b, then select Ollama in Settings

Cloud CLI providers (faster, deeper analysis)

  • Claude Code -- npm i -g @anthropic-ai/claude-code
  • Codex CLI -- npm i -g @openai/codex
  • Gemini CLI -- npm i -g @google/gemini-cli

After installing a CLI provider, go to Settings in the dashboard and select it. Quodeq auto-detects installed providers.

Tip: For local models, gemma3:27b offers an excellent quality-to-cost ratio. For cloud, Claude Sonnet gives the best balance of speed, quality, and cost.


Dashboard

The Quodeq Dashboard is the main way to use Quodeq. Launch evaluations, browse results, and track quality over time.

quodeq dashboard

Quodeq Dashboard

Opens at http://localhost:4173 with:

  • Overall grade and score -- A-F letter grade, numeric score /10, trend across runs
  • Dimension breakdown -- individual scores per quality dimension with severity counts
  • Violations explorer -- drill into findings by file, principle, or CWE classification
  • Code map -- visual heatmap of your codebase showing where issues concentrate
  • Top offending files -- ranked list of where to focus remediation
  • Run history -- track how your codebase evolves over time
  • Custom standards -- create your own evaluation dimensions or import from the library

Click any dimension, file, or principle to explore the details. Dismiss false positives directly from the UI. Dismissed findings are excluded from future evaluations.

QuodeqBar (macOS)

A native menu bar companion app to manage the dashboard. Start/stop the server, see evaluation status at a glance, and open the dashboard in one click.

Download the DMG from Releases. The app is not yet signed, so on first launch right-click it and choose Open in the dialog.

CLI usage

You can also run evaluations directly from the terminal:

quodeq evaluate /path/to/project
quodeq evaluate /path/to/project --scope src/api    # Scoped to a subdirectory
quodeq evaluate /path/to/project -d security        # Single dimension

Run quodeq evaluate --help and quodeq dashboard --help for all available options.


How It Works

  1. Detect -- identifies the languages and structure of the codebase
  2. Analyze -- sends an AI agent with read-only tools to explore the code
  3. Collect -- findings stream as structured JSONL via tool calls
  4. Score -- maps findings to ISO 25010 principles with CWE classifications
  5. Report -- produces per-dimension reports with grades, violations, and compliance

Results are stored in ~/.quodeq/evaluations/ and persist across sessions.
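The JSONL stream from the Collect step is one JSON object per line, so it can be consumed with a few lines of Python. This is a minimal sketch; the field names used in the comments (e.g. "file", "severity") are illustrative assumptions, not Quodeq's documented schema:

```python
import json
from pathlib import Path

def load_findings(path):
    """Parse a findings file where each line is one JSON object (JSONL).

    Field names such as "file" or "severity" are assumptions for
    illustration; inspect an actual run under ~/.quodeq/evaluations/
    for the real schema.
    """
    findings = []
    for line in Path(path).read_text().splitlines():
        if line.strip():                      # skip blank lines between records
            findings.append(json.loads(line))
    return findings
```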

Standards

By default, Quodeq evaluates six quality dimensions based on ISO 25010: Security, Reliability, Maintainability, Performance, Flexibility, and Usability.

It also ships with additional built-in standards:

  • Clean Architecture -- dependency rules, layer boundaries, separation of concerns
  • Domain-Driven Design -- bounded contexts, aggregates, ubiquitous language

You can create your own standards from the dashboard, or ask any AI to generate one as a .json file and import it. See the Help tab in the dashboard for the full schema.
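As a rough illustration of what an importable standard file might contain, here is a hypothetical sketch -- every field name below is invented for illustration, and the authoritative schema is the one in the dashboard's Help tab:

```json
{
  "name": "Team API Guidelines",
  "description": "House rules for service endpoints",
  "principles": [
    {
      "id": "no-unversioned-endpoints",
      "statement": "Every public endpoint is versioned (e.g. /v1/...)",
      "severity": "major"
    }
  ]
}
```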

Supported Languages

Quodeq can evaluate any codebase in any language. The AI analysis engine reads and understands code regardless of the tech stack.


The Q² Scoring Formula

Quodeq scores each principle on a 0-10 scale using four independent constraints:

  1. Violation Base -- hyperbolic curve where the first violations hurt most (10 / (1 + K * weighted_violations))
  2. Compliance Lift -- evidence of good practices fills the gap between the base and 10
  3. Violation Ceiling -- log-based cap prevents compliance from overriding significant violations
  4. Severity Grade Floor -- grade labels match reality (only critical violations can produce a "Critical" grade)

The final score: max(floor, min(ceiling, base + (10 - base) * lift))

Full details in src/quodeq/core/scoring/README.md.
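The four constraints above combine into the single expression shown. A minimal sketch, assuming a constant K and treating the lift, ceiling, and floor as precomputed inputs (how they are derived is specified in the scoring README, not here):

```python
def q2_score(weighted_violations: float, lift: float,
             ceiling: float = 10.0, floor: float = 0.0, k: float = 0.5) -> float:
    """Combine the Q^2 constraints into a 0-10 principle score.

    The value of `k` and the derivation of `lift`, `ceiling`, and
    `floor` are assumptions for illustration; only the combining
    formula comes from the formula above.
    """
    base = 10.0 / (1.0 + k * weighted_violations)   # violation base: first violations hurt most
    lifted = base + (10.0 - base) * lift            # compliance lift fills the gap toward 10
    return max(floor, min(ceiling, lifted))         # ceiling caps the lift, floor sets a minimum
```

With no violations the base is already 10, so lift has nothing to add; as weighted violations grow, the hyperbolic base decays and only the floor limits the drop.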

Development

git clone https://github.com/quodeq/quodeq.git && cd quodeq
uv sync
uv run pytest

Built with Claude Code

Development powered by Claude Code from Anthropic.

Changelog

See CHANGELOG.md for release history.

License

See LICENSE.

Project details


Download files

Download the file for your platform.

Source Distribution

quodeq-1.0.0.tar.gz (590.2 kB)

Uploaded Source

Built Distribution


quodeq-1.0.0-py3-none-any.whl (828.1 kB)

Uploaded Python 3

File details

Details for the file quodeq-1.0.0.tar.gz.

File metadata

  • Download URL: quodeq-1.0.0.tar.gz
  • Upload date:
  • Size: 590.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for quodeq-1.0.0.tar.gz

  • SHA256: 984ef22a4459b83752a9c2dba3438906faf8e28f88391d62da95c7d1754cd939
  • MD5: 545730f62cca31503f1c7573f5bda2e7
  • BLAKE2b-256: 83a9ae15dddb42d6dbbaefa5d1a1ca982654d048e0d525e582ecd6735bb929d9


Provenance

The following attestation bundles were made for quodeq-1.0.0.tar.gz:

Publisher: publish.yml on quodeq/quodeq

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file quodeq-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: quodeq-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 828.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for quodeq-1.0.0-py3-none-any.whl

  • SHA256: a21845c8008146a65424f1cf182f737aaea1d56530de35f5e6d73b927c7a766d
  • MD5: b65ccccbd9f130568a17215fb8bf9385
  • BLAKE2b-256: 3da5f824f10c33eb7891a5a08a57aa69370c5af782eebf640d68bad367869dd4


Provenance

The following attestation bundles were made for quodeq-1.0.0-py3-none-any.whl:

Publisher: publish.yml on quodeq/quodeq

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
