widget2code-bench

Benchmark evaluation for widget code generation — 12 quality metrics across layout, legibility, perceptual, style, and geometry.

Installation

# 1. Install PyTorch with CUDA support first (skip if CPU-only)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126

# 2. Install widget2code-bench
pip install widget2code-bench

Note: PyPI only ships CPU-only PyTorch. To use --cuda, you must install PyTorch from the official index before installing this package.
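To verify that a CUDA-enabled build is active before passing --cuda, a quick check with the standard PyTorch API:

import torch

print(torch.version.cuda)          # e.g. "12.6" for the cu126 wheel; None on a CPU-only build
print(torch.cuda.is_available())   # must be True for --cuda to have any effect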

Usage

Single image mode

Evaluates a single GT/prediction pair and prints JSON results to stdout; no files are saved.

widget2code-bench \
  --gt_image /path/to/gt.png \
  --pred_image /path/to/pred.png \
  --cuda
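Because results go to stdout as JSON, single-image mode is easy to drive from a script. A minimal sketch, assuming stdout carries only the JSON payload (paths here are hypothetical):

import json
import subprocess

result = subprocess.run(
    ["widget2code-bench",
     "--gt_image", "gt.png",
     "--pred_image", "pred.png"],
    capture_output=True, text=True, check=True,
)
metrics = json.loads(result.stdout)   # assumes a flat metric-name -> score object
for name, score in metrics.items():
    print(f"{name}: {score}")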

Batch mode

Evaluates every matched GT/prediction pair found in the given directories.

widget2code-bench \
  --gt_dir /path/to/GT \
  --pred_dir /path/to/predictions \
  --pred_name output.png \
  --cuda

Directory structure (batch mode)

  • GT dir: flat image files with 4-digit IDs in their filenames (e.g. gt_0001.png)
  • Pred dir: subfolders with 4-digit IDs in their names, each containing the --pred_name file (see the pairing sketch below)
gt_dir/                     pred_dir/
  gt_0001.png                 image_0001/
  gt_0002.png                   output.png
  ...                         image_0002/
                                output.png
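Since pairing is driven entirely by the 4-digit IDs, a quick pre-flight check can catch layout mistakes before a long batch run. A sketch of that pairing logic (an illustration of the rule above, not the evaluator's own code; directory names are hypothetical):

import re
from pathlib import Path

def four_digit_ids(names):
    # Collect the first 4-digit ID found in each name
    found = set()
    for name in names:
        m = re.search(r"\d{4}", name)
        if m:
            found.add(m.group())
    return found

gt_ids = four_digit_ids(p.name for p in Path("GT").glob("*.png"))
pred_ids = four_digit_ids(p.name for p in Path("predictions").iterdir() if p.is_dir())

print("GT images without a prediction:", sorted(gt_ids - pred_ids))
print("prediction folders without a GT:", sorted(pred_ids - gt_ids))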

Options

Flag          Default                Description
--gt_image    –                      Single GT image path
--pred_image  –                      Single prediction image path
--gt_dir      –                      GT directory (flat image files)
--pred_dir    –                      Prediction directory (subfolders)
--pred_name   output.png             Prediction filename inside each subfolder
--output_dir  {pred_dir}/.analysis   Statistics output directory
--workers     4                      Parallel threads
--cuda        off                    Enable GPU
--skip_eval   off                    Skip evaluation, only regenerate statistics xlsx files from existing evaluation.json
--minimal     off                    Skip per-metric visualization PNGs (default: verbose with viz)

Output (batch mode)

Per-sample outputs

Every matched pair writes one evaluation.json plus (by default) a full per-metric visualization set into its sample folder:

<pred_dir>/
  image_0001/
    output.png
    evaluation/
      evaluation.json                 # 12 metrics
      viz/
        MarginAsymmetry.png
        ContentAspectDiff.png
        AreaRatioDiff.png
        TextJaccard.png
        ContrastDiff.png
        ContrastLocalDiff.png
        PaletteDistance.png
        Vibrancy.png
        PolarityConsistency.png
        ssim.png
        lp.png
        geo_score.png

Each viz PNG shows the GT and Pred intermediates in the left and middle panels, with the formula, intermediate values, and final score on the right, so you can see exactly how each metric was computed.

Pass --minimal to skip the viz/ directory entirely (much faster, and roughly 10× less disk usage).
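The per-sample evaluation.json files also lend themselves to custom analysis beyond the shipped xlsx summaries. A sketch that gathers them into one pandas DataFrame, assuming each file is a flat metric-name -> score object (verify against your own output):

import json
from pathlib import Path
import pandas as pd

rows = {}
for f in Path("predictions").glob("*/evaluation/evaluation.json"):
    sample_id = f.parts[-3]        # e.g. "image_0001"
    rows[sample_id] = json.loads(f.read_text())

df = pd.DataFrame.from_dict(rows, orient="index")
print(df.describe().round(2))      # quick per-metric distribution across samples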

Missing-prediction handling

The evaluator always produces all four fill modes. When a GT image has no matching prediction:

  • Existing subfolder, pred missing → fill results go in the same folder's evaluation/
  • No subfolder at all → evaluator creates pred_dir/fill_<id>/evaluation/

In either case it writes:

evaluation/
  evaluation_black.json   # GT vs all-black image
  evaluation_white.json   # GT vs all-white image

Zero fill isn't a per-sample file; it's a worst-case contribution (LPIPS = 1.0, others = 0) used only when aggregating the combined summary.
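In effect, zero mode penalizes missing predictions directly in the aggregate. A toy illustration of that arithmetic (illustrative only; the real summary code may aggregate differently):

# 8 matched pairs scoring ssim = 0.9 each, plus 2 GT images with no prediction
matched_ssim = [0.9] * 8
n_missing = 2

raw_mean  = sum(matched_ssim) / len(matched_ssim)                  # 0.90 (missing skipped)
zero_mean = sum(matched_ssim) / (len(matched_ssim) + n_missing)    # 0.72 (missing count as 0)

# For the lp (LPIPS) distance the worst case is 1.0, not 0:
matched_lp = [0.2] * 8
zero_lp = (sum(matched_lp) + 1.0 * n_missing) / (len(matched_lp) + n_missing)  # 0.36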

Aggregate outputs (.analysis/)

<pred_dir>/.analysis/
  metrics_stats.json                 # per-metric quartiles/mean/std over matched pairs
  metrics.xlsx                       # 4-row combined summary (raw/black/white/zero)
  raw/<run>-raw-<ver>.xlsx           # single-row summary per mode
  black/<run>-black-<ver>.xlsx
  white/<run>-white-<ver>.xlsx
  zero/<run>-zero-<ver>.xlsx

Mode    Description
raw     Matched pairs only (missing skipped)
black   Missing preds scored against an all-black image
white   Missing preds scored against an all-white image
zero    Missing preds contribute the worst-case value (LPIPS = 1.0, others = 0)

All numeric values are rounded to 2 decimals. The combined metrics.xlsx has a two-level header that groups metrics by category (Layout / Legibility / Style / Perceptual / Geometry), plus SuccessRate (ratio, count). The per-mode xlsx files use flat, single-level headers.

All metrics are higher-is-better except lp (LPIPS), which is a distance (lower-is-better).
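For post-processing, the two-level header of the combined summary maps naturally onto a pandas MultiIndex. A sketch, assuming the header occupies the first two rows of the sheet (category names as listed above):

import pandas as pd

# 4-row combined summary: one row per fill mode (raw / black / white / zero)
df = pd.read_excel("predictions/.analysis/metrics.xlsx", header=[0, 1])
print(df["Perceptual"])   # columns grouped under the Perceptual category, e.g. ssim / lp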

License

Apache-2.0
