Compare results from simulations with observations.
Project description
ModelSkill: Flexible model skill evaluation.
ModelSkill is a Python package for scoring MIKE models (other models can be evaluated as well).
Read more about the vision and scope. Contribute with new ideas in the discussion, report an issue, or browse the API documentation. Access observational data (e.g. altimetry data) from the sister library WatObs.
Use cases
ModelSkill aims to be your companion throughout the different phases of a MIKE modelling workflow.
- Model setup - exploratory phase
- Model calibration
- Model validation and reporting - communicate your final results
Installation
From pypi:
> pip install modelskill
Or the development version:
> pip install https://github.com/DHI/modelskill/archive/main.zip
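After installation, you can verify that the package imports correctly. This assumes the package exposes a standard __version__ attribute; the exact string depends on the release you installed:
>>> import modelskill
>>> modelskill.__version__
'1.0a0'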
Example notebooks
- Quick_and_dirty_compare.ipynb
- SW_DutchCoast.ipynb
- Multi_model_comparison.ipynb
- Multi_variable_comparison.ipynb
- Track_comparison.ipynb (including global wave model example)
- Spatial_skill.ipynb (satellite tracks, skill aggregated on spatial bins)
- NetCDF_ModelResult.ipynb
- Combine_comparers.ipynb
Workflow
- Define ModelResults
- Define Observations
- Connect Observations and ModelResults
- Extract ModelResults at Observation positions
- Do plotting, statistics, reporting using a Comparer
Read more about the workflow in the getting started guide.
Example of use
Start by defining model results and observations:
>>> from modelskill import ModelResult
>>> from modelskill import PointObservation, TrackObservation
>>> mr = ModelResult("HKZN_local_2017_DutchCoast.dfsu", name="HKZN_local", item=0)
>>> HKNA = PointObservation("HKNA_Hm0.dfs0", item=0, x=4.2420, y=52.6887, name="HKNA")
>>> EPL = PointObservation("eur_Hm0.dfs0", item=0, x=3.2760, y=51.9990, name="EPL")
>>> c2 = TrackObservation("Alti_c2_Dutch.dfs0", item=3, name="c2")
Then, connect observations and model results, and extract data at observation points:
>>> from modelskill import Connector
>>> con = Connector([HKNA, EPL, c2], mr)
>>> comparer = con.extract()
With the comparer, all sorts of skill assessments and plots can be made:
>>> comparer.skill().round(2)
               n   bias  rmse  urmse   mae    cc    si    r2
observation
HKNA         385  -0.20  0.35   0.29  0.25  0.97  0.09  0.99
EPL           66  -0.08  0.22   0.20  0.18  0.97  0.07  0.99
c2           113  -0.00  0.35   0.35  0.29  0.97  0.12  0.99
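If a single aggregated number is needed, e.g. for automated calibration, the skill can also be averaged over all observations. The sketch below assumes the comparer exposes mean_skill() and score() methods, as in earlier fmskill releases; consult the API documentation for the exact signatures:
>>> comparer.mean_skill().round(2)  # skill table averaged over all observations (assumed API)
>>> comparer.score()                # single weighted score, rmse by default (assumed API)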
Overview of observation locations
>>> con.plot_observation_positions(figsize=(7, 7))
Scatter plot
>>> comparer.scatter()
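A scatter plot for a single observation can be obtained by first selecting it from the comparer by name (the same indexing used in the timeseries example below); it is assumed that the per-observation comparer exposes the same scatter method:
>>> comparer["HKNA"].scatter()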
Timeseries plot
Timeseries plots can either be static and report-friendly (matplotlib) or interactive with zoom functionality (plotly).
comparer["HKNA"].plot_timeseries(width=1000, backend="plotly")
Automated reporting
With a few lines of code, it will be possible to generate an automated report.
>>> from modelskill.report import Reporter
>>> rep = Reporter(mr)
>>> rep.to_markdown()