
modverif



Documentation: https://mullenkamp.github.io/modverif/

Source Code: https://github.com/mullenkamp/modverif


A Python package for evaluating multidimensional model output, following MET/METplus standards for meteorological verification. All data I/O uses the cfdb format.

Features

Grid-to-Grid Evaluation (Evaluator)

Compare two gridded model runs (e.g., WRF outputs):

  • Cell-level metrics: NE, ANE, RSE, Bias, MAE, POD, FAR, CSI, GSS, Frequency Bias
  • Domain-aggregated metrics: NE, ANE, RMSE, Bias, Pearson correlation, POD, FAR, CSI, GSS, Frequency Bias
  • Fractions Skill Score (FSS): Multi-scale spatial verification for precipitation and other threshold-based fields
  • Vector wind metrics: Vector RMSE, wind speed bias, wind direction bias from U/V components
  • Diurnal cycle analysis: Metrics grouped by hour-of-day
  • Spatial subsetting: Bounding box or 2D boolean mask
  • Time filtering: Start/end time bounds
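The categorical scores listed above (POD, FAR, CSI, GSS, Frequency Bias) follow the standard MET/METplus contingency-table definitions. As a rough illustration of what they mean — a plain-NumPy sketch, not modverif's internal code — they can be computed from hit/false-alarm/miss counts after thresholding both fields:

```python
import numpy as np

def categorical_scores(model, obs, threshold):
    """Contingency-table scores; an illustrative sketch, not modverif's implementation."""
    f = np.asarray(model) >= threshold   # forecast event
    o = np.asarray(obs) >= threshold     # observed event
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    n = f.size
    # Hits expected by chance, used by the Gilbert Skill Score (GSS, a.k.a. ETS)
    hits_random = (hits + false_alarms) * (hits + misses) / n
    return {
        "pod": hits / (hits + misses),                        # Probability of Detection
        "far": false_alarms / (hits + false_alarms),          # False Alarm Ratio
        "csi": hits / (hits + false_alarms + misses),         # Critical Success Index
        "gss": (hits - hits_random) / (hits + false_alarms + misses - hits_random),
        "freq_bias": (hits + false_alarms) / (hits + misses), # Frequency Bias
    }
```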

Grid-to-Point Evaluation (StationEvaluator)

Compare gridded model output to weather station observations:

  • Automatic grid-to-point interpolation via cfdb's GridInterp.to_points()
  • Per-station, per-timestep metrics: Bias, MAE, NE, ANE
  • Per-station aggregated metrics: RMSE, Pearson correlation
  • Station-aggregated summary statistics
  • Height level matching (single-level and multi-level observations)
  • Vector wind evaluation at station locations
  • Diurnal cycle analysis per station

Cyclone Evaluation

Track cyclones independently in two datasets and compare:

  • Cyclone tracking via sea-level pressure (SLP) minimum
  • Track position, pressure, and radius differences
  • Per-variable metrics within the cyclone region
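The core of minimum-pressure tracking is locating the SLP minimum at each timestep and mapping it back to coordinates; comparing two tracks is then a matter of differencing positions and central pressures. A hedged sketch under that assumption (`track_slp_minimum` is a hypothetical helper, not modverif's API):

```python
import numpy as np

def track_slp_minimum(slp, lats, lons):
    """Return (lat, lon, min_pressure) per timestep for a (time, lat, lon) SLP cube.

    Illustrative only: modverif's tracker may apply smoothing or search-radius
    constraints that this sketch omits.
    """
    track = []
    for field in slp:
        j, i = np.unravel_index(np.argmin(field), field.shape)
        track.append((lats[j], lons[i], field[j, i]))
    return track
```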

Verification Plots

Publication-quality plots following MET/METplus conventions:

  • Scatter plot: Model vs observed with 1:1 line, statistics box, density option
  • Station map: Geographic map of station metric values (cartopy optional)
  • Time series: Model/observation comparison over time
  • Performance diagram: POD vs Success Ratio with CSI contours and bias lines (Roebber 2009)
  • Taylor diagram: Standard deviation, correlation, and centered RMSE (Taylor 2001)
  • Diurnal cycle: Hour-of-day metric comparison
  • FSS scale plot: Skill vs neighborhood size
  • Wind rose comparison: Side-by-side model/observed wind roses
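The performance diagram works because, with Success Ratio SR = 1 − FAR on the x-axis and POD on the y-axis, CSI is a function of the two — CSI = 1 / (1/SR + 1/POD − 1) — and frequency-bias reference lines are simply POD = bias · SR. A minimal matplotlib sketch of that geometry (independent of modverif's `plot_performance_diagram`; the example points are arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

sr = np.linspace(0.01, 1.0, 200)
pod = np.linspace(0.01, 1.0, 200)
SR, POD = np.meshgrid(sr, pod)
CSI = 1.0 / (1.0 / SR + 1.0 / POD - 1.0)  # CSI as a function of SR and POD

fig, ax = plt.subplots(figsize=(5, 5))
cs = ax.contour(SR, POD, CSI, levels=np.arange(0.1, 1.0, 0.1), colors="gray")
ax.clabel(cs, fmt="%.1f")
for bias in (0.5, 1.0, 2.0):  # frequency-bias reference lines POD = bias * SR
    ax.plot(sr, np.clip(bias * sr, 0.0, 1.0), ls="--", lw=0.8, color="black")
ax.plot([0.85, 0.72], [0.85, 0.72], "o")  # example (SR, POD) points
ax.set_xlabel("Success Ratio (1 - FAR)")
ax.set_ylabel("POD")
fig.savefig("performance_diagram.png", dpi=150)
```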

Quick Start

from modverif import Evaluator, StationEvaluator

# Grid-to-grid evaluation
evaluator = Evaluator('source.cfdb', 'test.cfdb')
evaluator.evaluate_domain('output.cfdb', variables=['air_temperature'], metrics=['bias', 'rmse', 'pearson'])

# Grid-to-point evaluation
station_eval = StationEvaluator(
    'model.cfdb', 'stations.cfdb',
    variable_heights={'air_temperature': 2.0, 'wind_speed': 10.0},
)
station_eval.evaluate('station_output.cfdb', variables=['air_temperature'], metrics=['bias', 'rmse'])

# FSS evaluation
evaluator.evaluate_fss('fss_output.cfdb', variables=['precipitation'], threshold=1.0)

# Vector wind evaluation
evaluator.evaluate_wind('wind_output.cfdb', metrics=['vector_rmse', 'speed_bias'])
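For orientation, the Fractions Skill Score computed by evaluate_fss follows Roberts & Lean (2008): threshold both fields, take neighbourhood event fractions at a given window size, and compare the fraction fields. A sketch using scipy (a listed dependency), not modverif's internal code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(model, obs, threshold, window):
    """Fractions Skill Score at one square neighbourhood size; illustrative only."""
    f = (np.asarray(model) >= threshold).astype(float)
    o = (np.asarray(obs) >= threshold).astype(float)
    # Fraction of event cells within each window-sized neighbourhood
    f_frac = uniform_filter(f, size=window, mode="constant")
    o_frac = uniform_filter(o, size=window, mode="constant")
    num = np.mean((f_frac - o_frac) ** 2)
    den = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - num / den if den > 0 else np.nan
```

FSS is 1 for a perfect match, 0 when the fields share no skill at that scale, and typically rises with neighbourhood size — which is what the FSS scale plot below visualises.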

Convenience functions are also available:

from modverif.evaluate import (
    evaluate_models_cell,
    evaluate_models_domain,
    evaluate_stations,
    evaluate_fss,
    evaluate_wind,
)
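The vector wind metrics behind evaluate_wind reduce to standard formulas on U/V components. A hedged NumPy sketch of those formulas (not the package's implementation; `wind_metrics` is a hypothetical helper):

```python
import numpy as np

def wind_metrics(u_m, v_m, u_o, v_o):
    """Vector RMSE, speed bias, and circular direction bias from U/V components."""
    u_m, v_m, u_o, v_o = (np.asarray(a, float) for a in (u_m, v_m, u_o, v_o))
    vector_rmse = np.sqrt(np.mean((u_m - u_o) ** 2 + (v_m - v_o) ** 2))
    speed_bias = np.mean(np.hypot(u_m, v_m) - np.hypot(u_o, v_o))
    # Meteorological direction: degrees the wind blows FROM, clockwise from north
    dir_m = (270.0 - np.degrees(np.arctan2(v_m, u_m))) % 360.0
    dir_o = (270.0 - np.degrees(np.arctan2(v_o, u_o))) % 360.0
    # Wrap differences into [-180, 180) before averaging so 359° vs 1° counts as 2°
    dir_bias = np.mean((dir_m - dir_o + 180.0) % 360.0 - 180.0)
    return vector_rmse, speed_bias, dir_bias
```

The circular wrapping matters: averaging raw direction differences across the 0°/360° seam would badly distort the bias.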

Plotting

from modverif.plots import plot_scatter, plot_station_map, plot_performance_diagram

plot_scatter(model_values, obs_values, save_path='scatter.png', variable_name='Temperature', units='K')
plot_station_map(lons, lats, bias_values, save_path='map.png', metric_name='Bias')
plot_performance_diagram([0.85, 0.72], [0.15, 0.28], labels=['WRF-A', 'WRF-B'])

Installation

pip install modverif

Or with uv:

uv add modverif

Dependencies

  • Python >= 3.10
  • cfdb, numpy, scipy, matplotlib, pyproj
  • cartopy (optional, for geographic map projections)

License

This project is licensed under the terms of the Apache Software License 2.0.
