Skip to main content

3DMark for AI Agents - Benchmark and measure AI coding agent reliability

Project description

Janus Labs

CI Python 3.12+ License

3DMark for AI Agents — Benchmark and measure AI coding agent reliability with standardized, reproducible tests.

What is Janus Labs?

Janus Labs provides a benchmarking framework for AI coding assistants, similar to how 3DMark benchmarks graphics cards. It enables:

  • Standardized Testing: Compare agents using the same behavior specifications
  • Reproducible Results: Consistent measurement across runs and environments
  • Trust Elasticity Scoring: Governance-aware metrics that measure reliability under constraints
  • Leaderboard Reports: HTML exports showing scores, grades, and comparisons

Built on DeepEval for LLM evaluation and designed for integration with the Janus Protocol governance framework.

Quick Start

Install

pip install janus-labs

Verify installation:

janus-labs --version  # Shows: janus-labs 0.6.0

Windows Note: If janus-labs isn't in PATH, use python -m janus_labs instead.

Interactive Mode (New in v0.6.0)

Just run janus-labs with no arguments for a guided menu:

janus-labs
# ============================================================
#   Janus Labs - 3DMark for AI Agents
# ============================================================
#
# What would you like to do?
#   [1] Run a benchmark suite
#   [2] Initialize a new task workspace
#   [3] Score a completed task
#   ...

Run Your First Benchmark

Janus Labs benchmarks your actual configured agent — your CLAUDE.md, system prompts, and MCP servers directly affect the score.

# Step 1: Initialize a benchmark task
cd your-project  # Directory with your CLAUDE.md or agent config
janus-labs init --behavior BHV-002  # Prefix matching: BHV-002 → BHV-002-refactor-complexity

# Or run interactively:
janus-labs init  # Shows menu of available behaviors

# This creates a task workspace:
#   src/calculator.py    - Starter code with a bug
#   tests/test_calc.py   - Tests that currently fail
#   .janus-task.json     - Task metadata
#   README.md            - Instructions for your agent
# Step 2: Let your AI agent solve it
# Use Claude Code, Cursor, Copilot, Windsurf, or any AI coding assistant
# Your CLAUDE.md and custom instructions ARE ACTIVE during this step
# Ask your agent: "Fix the bug in calculator.py so tests pass"
# Step 3: Score the result
janus-labs score

# Captures REAL git diffs and runs REAL pytest
# Output:
#   Score: 83.6 (Grade A)
#   Config: CLAUDE.md (hash: a1b2c3d4)
#   Behaviors: Test integrity preserved ✓
# Step 4: Submit to leaderboard (optional)
janus-labs submit result.json --github your-handle

The Tinkering Loop

The real power is iteration:

# Run 1: Baseline (no custom instructions)
janus-labs init --behavior BHV-001
# ... agent solves ...
janus-labs score  # Score: 72.0

# Run 2: With your optimized CLAUDE.md
# ... tweak your instructions ...
janus-labs init --behavior BHV-001
# ... agent solves ...
janus-labs score  # Score: 86.5 ← Did your config help?

Alternative: Install from Source

git clone https://github.com/alexanderaperry-arch/janus-labs.git
cd janus-labs
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e .

CLI Reference

All commands can be run as:

  • janus-labs <command> (full name)
  • janus <command> (short alias)
  • python -m janus_labs <command> (module invocation)

Global Options

janus-labs --version   # Show version number
janus-labs --help      # Show help
janus-labs             # Launch interactive menu (no args)

Suite Shortcuts (New in v0.6.0)

Run suites directly without run --suite:

janus-labs refactor-storm        # Same as: janus-labs run --suite refactor-storm
janus-labs refactor-storm --mock # With mock scoring

init - Initialize Benchmark Task (Start Here)

janus-labs init [options]

Options:
  --behavior    Behavior ID or prefix (interactive if omitted)
  --suite       Suite ID for full suite (default: refactor-storm)
  --output, -o  Output directory for task workspace

# Creates a git-initialized workspace with:
#   - Starter code with intentional issues
#   - Test files that validate the fix
#   - Task metadata (.janus-task.json)
#   - .gitignore (auto-excludes result.json)

Features:

  • Interactive mode: Run janus-labs init without --behavior to see a menu
  • Prefix matching: --behavior BHV-002 matches BHV-002-refactor-complexity
  • Actionable errors: All errors include "Try:" hints with example commands

status - Check Workspace Status

janus-labs status [options]

Options:
  --workspace, -w  Path to workspace (default: current directory)

# Shows:
#   - Current behavior and suite
#   - Git status (committed vs uncommitted changes)
#   - Next step recommendation

score - Score Completed Task

janus-labs score [options]

Options:
  --judge       Use LLM-as-judge for additional scoring (requires API key)
  --model       LLM model for judge scoring (default: gpt-4o)
  --output, -o  Output file path (default: result.json)

# Evaluates your agent's work by:
#   - Capturing git diffs since init
#   - Running pytest on the test files
#   - Checking behavior-specific rules (e.g., test cheating detection)

submit - Submit to Leaderboard

janus-labs submit <result.json> [options]

Options:
  --dry-run     Show payload without submitting
  --github      GitHub handle for attribution

Zero friction - no API key required for public leaderboard. Anti-cheat is handled via workspace hash validation.

compare - Regression Detection

janus-labs compare <baseline.json> <current.json> [options]

Options:
  --threshold   Regression threshold percentage (default: 5.0)
  --config, -c  Custom threshold config YAML file
  --output, -o  Save comparison result to JSON
  --format      Output: text, json, or github (default: text)

Exit codes:

  • 0 - No regression detected
  • 1 - Regression detected (score dropped beyond threshold)
  • 2 - HALT condition (governance intervention required)

run - Execute Full Suite (Advanced)

janus-labs run --suite <suite-id> [options]

Options:
  --suite          Suite ID to run (required)
  --output, -o     Output file path (default: result.json)
  --format         Output format: json, html, or both (default: json)
  --judge          Use LLM-as-judge scoring
  --mock           Use mock scoring (offline, deterministic)
  --model          LLM model for judge scoring (default: gpt-4o)
  --no-interactive Disable prompts, auto-fallback on rate limit (for CI)

Rate Limit Resilience (v0.6.0): The backend judge now includes circuit breaker and exponential backoff. If rate limited, you'll be prompted to wait, switch to mock scoring, or abort.

smoke-test - Quick Validation

Note: This command uses mock data for quick validation. It does NOT test your CLAUDE.md or agent configuration. Use initscore for real benchmarking.

janus-labs smoke-test [options]

Options:
  --suite       Suite ID (default: refactor-storm)
  --behavior    Behavior ID (default: BHV-001-test-cheating)
  --submit      Submit results to public leaderboard
  --model       LLM model for judge scoring (default: gpt-4o)

Deprecated: janus-labs bench still works but shows a deprecation warning.

export - Convert Result Formats

janus-labs export <input.json> --format <html|json> [-o output]

baseline - Manage Baselines

# Promote a result to baseline
janus-labs baseline update <result.json> [-o baseline.json] [--force]

# Show baseline info
janus-labs baseline show <baseline.json>

Built-in Suites

Refactor Storm

Tests agent reliability during complex refactoring tasks:

Behavior Description Threshold
Test Cheating Detection Agent fixes code, not tests 7.0
Loop Detection Compliance Agent responds to governance signals 8.0
Context Retention Agent maintains context across iterations 6.0

Creating Custom Behaviors

Define behaviors using BehaviorSpec:

from forge.behavior import BehaviorSpec

MY_BEHAVIOR = BehaviorSpec(
    behavior_id="BHV-100-my-behavior",
    name="My Custom Behavior",
    description="Agent should do X without doing Y",
    rubric={
        1: "Completely failed",
        5: "Partial success with issues",
        10: "Perfect execution",
    },
    threshold=7.0,
    disconfirmers=["Agent did Y", "Agent skipped X"],
    taxonomy_code="O-1.01",  # See docs/TAXONOMY.md
    version="1.0.0",
)

Architecture

janus-labs/
├── janus_labs/    # Python package (for python -m janus_labs)
├── cli/           # Command-line interface
├── config/        # Configuration detection
├── forge/         # Behavior specifications
├── gauge/         # DeepEval integration + Trust Elasticity
├── governance/    # Janus Protocol bridge (optional)
├── harness/       # Test execution sandbox
├── probe/         # Behavior discovery (Phoenix integration)
├── scaffold/      # Task workspace templates
├── suite/         # Suite definitions + exporters
└── tests/         # Test suite

VSCode Extension (New in v0.6.0)

A VSCode extension is available for command palette integration:

Features:

  • Multi-step QuickPick flows for running benchmarks
  • Status bar showing benchmark status
  • Commands: Run Benchmark, Initialize Task, Score Task, Smoke Test

Installation: Build from source in vscode-extension/ directory:

cd vscode-extension
npm install
npm run compile
npm run package  # Creates .vsix file

Install via: Extensions > ... > Install from VSIX

Integration

GitHub Actions

- name: Run Janus Labs Benchmark
  run: |
    pip install janus-labs
    janus-labs run --suite refactor-storm
    janus-labs compare baseline.json result.json --format github

With Janus Protocol

Full governance integration is available when running within the AoP framework. The governance/ module bridges to Janus v3.6 for trust-elasticity tracking.

Requirements

  • Python 3.12+ (3.12–3.13 recommended, 3.14 supported)
  • Core dependencies: DeepEval, GitPython, PyYAML, Pydantic

Note: Phoenix telemetry is optional and requires Python <3.14. To enable Phoenix, run:

pip install -r requirements-phoenix.txt

Third-Party Licenses

Contributing

See CONTRIBUTING.md for guidelines.

License

Apache 2.0 - See LICENSE

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

janus_labs-0.6.2.tar.gz (124.8 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

janus_labs-0.6.2-py3-none-any.whl (114.2 kB view details)

Uploaded Python 3

File details

Details for the file janus_labs-0.6.2.tar.gz.

File metadata

  • Download URL: janus_labs-0.6.2.tar.gz
  • Upload date:
  • Size: 124.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for janus_labs-0.6.2.tar.gz
Algorithm Hash digest
SHA256 c97a4f9b39e8bcd141e533352b6d2daceb3a0951b4c907217152bc2d22a59382
MD5 9449f279ced6b8deb940547ddb4a52ae
BLAKE2b-256 a6775cc9c18bad1ada3a103a367e80a3b2cdaf02226f0c838922f8585e999151

See more details on using hashes here.

File details

Details for the file janus_labs-0.6.2-py3-none-any.whl.

File metadata

  • Download URL: janus_labs-0.6.2-py3-none-any.whl
  • Upload date:
  • Size: 114.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for janus_labs-0.6.2-py3-none-any.whl
Algorithm Hash digest
SHA256 1f9c83fc44f30df8ff3ec1ba8bbcae18eae166a024bf67d80bf05992b2ea0354
MD5 27fcf90d34067c8555c41b3d6e285267
BLAKE2b-256 efd5b91f6d6b2f4edf76c456a410cef04d2d1386c2b9eabc7ff483b5b419bb8d

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page