
Source code quality evaluation platform powered by AI


Quodeq

The quality code compass

Your guide to drive any codebase to excellence.

v1.0.0

Quodeq scans any codebase with AI and scores it across six quality dimensions -- Security, Reliability, Maintainability, Performance, Flexibility, and Usability -- based on ISO 25010. Get grades, find violations, fix what matters.

Watch the demo · Website · Releases


Why now? AI models can now autonomously find and exploit zero-day vulnerabilities across operating systems, browsers, and web applications. Thousands of previously unknown flaws have been uncovered in weeks, not years. The code you ship today will be read by models that can spot what humans miss. If your codebase carries security debt, reliability gaps, or maintenance shortcuts, the window to fix them is shrinking fast. Quodeq helps you prepare your software -- find what's wrong, enforce quality standards, and harden your code before the next generation of models is used against it.


Getting Started

pipx install quodeq    # Install quodeq
quodeq dashboard       # Launch the dashboard

That's it. The dashboard lets you point to any project and run evaluations from the UI.

Also available via pip install quodeq, or download the macOS DMG / Windows installer from Releases.

Requirements

Dependency   Version   Purpose
Python       3.12+     Runtime (brew install python or download)
Node.js      18+       Dashboard UI (brew install node or download)

AI Providers

Quodeq works with local models and cloud AI CLIs. Choose what fits your workflow:

Local models (free, private, your code never leaves your machine)

Provider   Setup
Ollama     ollama pull gemma3:27b, then select Ollama in Settings

Cloud CLI providers (faster, deeper analysis)

Provider      Setup
Claude Code   npm i -g @anthropic-ai/claude-code
Codex CLI     npm i -g @openai/codex
Gemini CLI    npm i -g @google/gemini-cli

After installing a CLI provider, go to Settings in the dashboard and select it. Quodeq auto-detects installed providers.

Tip: For local models, gemma3:27b offers an excellent quality-to-cost ratio. For cloud, Claude Sonnet gives the best balance of speed, quality, and cost.
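Provider auto-detection can be as simple as checking which CLI binaries are on PATH. The sketch below is illustrative only: the binary names it probes for are assumptions, not Quodeq's actual detection logic.

```python
import shutil

# Hypothetical mapping of provider names to the CLI binaries they install.
# The actual binary names Quodeq probes for are an assumption here.
PROVIDER_BINARIES = {
    "Ollama": "ollama",
    "Claude Code": "claude",
    "Codex CLI": "codex",
    "Gemini CLI": "gemini",
}

def detect_providers() -> list[str]:
    """Return the providers whose CLI binary is found on PATH."""
    return [name for name, binary in PROVIDER_BINARIES.items()
            if shutil.which(binary) is not None]
```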


Dashboard

The Quodeq Dashboard is the main way to use Quodeq. Launch evaluations, browse results, and track quality over time.

quodeq dashboard

Quodeq Dashboard

Opens at http://localhost:4173 with:

  • Overall grade and score -- A-F letter grade, numeric score /10, trend across runs
  • Dimension breakdown -- individual scores per quality dimension with severity counts
  • Violations explorer -- drill into findings by file, principle, or CWE classification
  • Code map -- visual heatmap of your codebase showing where issues concentrate
  • Top offending files -- ranked list of where to focus remediation
  • Run history -- track how your codebase evolves over time
  • Custom standards -- create your own evaluation dimensions or import from the library

Click any dimension, file, or principle to explore the details. Dismiss false positives directly from the UI. Dismissed findings are excluded from future evaluations.

QuodeqBar (macOS)

A native menu bar companion app to manage the dashboard. Start/stop the server, see evaluation status at a glance, and open the dashboard in one click.

Download the DMG from Releases. Because the app is not yet signed, right-click it on first launch and choose Open in the dialog.

CLI usage

You can also run evaluations directly from the terminal:

quodeq evaluate /path/to/project
quodeq evaluate /path/to/project --scope src/api    # Scoped to a subdirectory
quodeq evaluate /path/to/project -d security        # Single dimension

Run quodeq evaluate --help and quodeq dashboard --help for all available options.


How It Works

  1. Detect -- identifies the languages and structure of the codebase
  2. Analyze -- sends an AI agent with read-only tools to explore the code
  3. Collect -- findings stream as structured JSONL via tool calls
  4. Score -- maps findings to ISO 25010 principles with CWE classifications
  5. Report -- produces per-dimension reports with grades, violations, and compliance

Results are stored in ~/.quodeq/evaluations/ and persist across sessions.
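Step 3 describes findings streaming as structured JSONL (one JSON object per line). A minimal sketch of reading such a file follows; the record shape (file, severity, principle) is hypothetical, and the real Quodeq schema may differ.

```python
import json
import os
import tempfile
from pathlib import Path

def load_findings(path: Path) -> list[dict]:
    """Parse a JSONL findings file: one JSON object per non-empty line."""
    return [json.loads(line)
            for line in path.read_text().splitlines() if line.strip()]

# Hypothetical record shape; the real Quodeq schema may differ.
fd, name = tempfile.mkstemp(suffix=".jsonl")
os.close(fd)
sample = Path(name)
sample.write_text(
    '{"file": "app.py", "severity": "high", "principle": "input-validation"}\n'
    '{"file": "db.py", "severity": "low", "principle": "error-handling"}\n'
)
findings = load_findings(sample)
print(len(findings))  # 2
```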

Standards

By default, Quodeq evaluates six quality dimensions based on ISO 25010: Security, Reliability, Maintainability, Performance, Flexibility, and Usability.

It also ships with additional built-in standards:

  • Clean Architecture -- dependency rules, layer boundaries, separation of concerns
  • Domain-Driven Design -- bounded contexts, aggregates, ubiquitous language

You can create your own standards from the dashboard, or ask any AI to generate one as a .json file and import it. See the Help tab in the dashboard for the full schema.
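A custom standard is imported as a .json file. The structure below is purely illustrative, written to show the general idea of naming a dimension and listing principles; the authoritative schema lives in the dashboard's Help tab, and every field name here is an assumption.

```python
import json
import os
import tempfile

# Purely illustrative structure; see the Help tab for the authoritative schema.
custom_standard = {
    "name": "Team API Guidelines",
    "dimension": "api-design",
    "principles": [
        {"id": "versioned-endpoints", "severity": "major",
         "description": "Public endpoints must be versioned."},
        {"id": "consistent-errors", "severity": "minor",
         "description": "Error payloads share one shape."},
    ],
}

path = os.path.join(tempfile.gettempdir(), "team-api-guidelines.json")
with open(path, "w") as f:
    json.dump(custom_standard, f, indent=2)
```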

Supported Languages

Quodeq can evaluate any codebase in any language. The AI analysis engine reads and understands code regardless of the tech stack.


The Q² Scoring Formula

Quodeq scores each principle on a 0-10 scale using four independent constraints:

  1. Violation Base -- hyperbolic curve where the first violations hurt most (10 / (1 + K * weighted_violations))
  2. Compliance Lift -- evidence of good practices fills the gap between the base and 10
  3. Violation Ceiling -- log-based cap prevents compliance from overriding significant violations
  4. Severity Grade Floor -- grade labels match reality (only critical violations can produce a "Critical" grade)

The final score: max(floor, min(ceiling, base + (10 - base) * lift))
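The four constraints combine as sketched below. Only the base curve and the combining formula come from the text above; the constant K, how violations are weighted, and how lift, ceiling, and floor are derived are internal to Quodeq and assumed here.

```python
def q2_score(weighted_violations: float, lift: float,
             ceiling: float, floor: float, k: float = 0.5) -> float:
    """Combine the four Q² constraints into one 0-10 score.

    k, and how lift/ceiling/floor are computed, are assumptions;
    only the combining formula comes from the README.
    """
    base = 10 / (1 + k * weighted_violations)  # 1. violation base
    lifted = base + (10 - base) * lift         # 2. compliance lift
    return max(floor, min(ceiling, lifted))    # 3. ceiling, then 4. floor
```

With no violations and no lift the base alone gives a 10; two weighted violations at k=0.5 give a base of 5.0, and a lift of 0.5 recovers half the gap to 7.5.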

Full details in src/quodeq/core/scoring/README.md.

Development

git clone https://github.com/quodeq/quodeq.git && cd quodeq
uv sync
uv run pytest

Built with Claude Code

Development powered by Claude Code from Anthropic.

Changelog

See CHANGELOG.md for release history.

License

See LICENSE.



Download files

Download the file for your platform.

Source Distribution

quodeq-1.0.3.tar.gz (26.2 MB, source)

Built Distribution

quodeq-1.0.3-py3-none-any.whl (30.0 MB, Python 3)

File details

Details for the file quodeq-1.0.3.tar.gz.

File metadata

  • Download URL: quodeq-1.0.3.tar.gz
  • Upload date:
  • Size: 26.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for quodeq-1.0.3.tar.gz
Algorithm Hash digest
SHA256 4152dc07aa8c5ba0bddf7892df772c422d39e03fb48a306971dd69cc5b74d1fc
MD5 5c7c48523b663c0d4226d380f51deac4
BLAKE2b-256 6bec9527c1342efc1be7809baf34b537904284e8c3cd0394f59f55d6e74148c4
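A downloaded file can be checked against the published SHA256 with Python's standard library. The helper below is a generic sketch; the commented assertion assumes the sdist filename from this page.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "4152dc07aa8c5ba0bddf7892df772c422d39e03fb48a306971dd69cc5b74d1fc"
# assert sha256_of("quodeq-1.0.3.tar.gz") == expected  # after downloading
```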


Provenance

The following attestation bundles were made for quodeq-1.0.3.tar.gz:

Publisher: publish.yml on quodeq/quodeq

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file quodeq-1.0.3-py3-none-any.whl.

File metadata

  • Download URL: quodeq-1.0.3-py3-none-any.whl
  • Upload date:
  • Size: 30.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for quodeq-1.0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 4fc567b689371ca43dab5ebd14717fb89092117a1f3f50dff887086e04c21174
MD5 1dba540c8c68a954da70d8e759b844d6
BLAKE2b-256 53ef646d45539d36aa2fec4cf013a31a12f6b5bad89641e014590bf86cf420b8


Provenance

The following attestation bundles were made for quodeq-1.0.3-py3-none-any.whl:

Publisher: publish.yml on quodeq/quodeq

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
