
Unified CLI for reproducible and auditable Agentic SWMM workflows.

Project description

Agentic SWMM Workflow

Agentic SWMM logo with agentic robot, stormwater system, and SWMM wordmark

Badges: CI status · latest release · PyPI version · Codecov coverage · install options · Docker reproducible environment · EPA SWMM 5.2 · Codex, OpenClaw, or Hermes ready · MIT license

Agentic SWMM for reproducible stormwater modeling
Codex, OpenClaw, or Hermes Agent + Skills + MCP + SWMM + verification-first workflow + Obsidian-compatible audit

A five-minute EPA SWMM workflow that is auditable, memory-informed, and agent-ready.

Agentic SWMM Workflow is an open-source, verification-first framework for reproducible stormwater modeling with EPA SWMM. It supports automated execution, QA checks, provenance tracking, calibration support, documentation, and modeling memory, while keeping human modelers in control.

The project is designed to work with agent runtimes such as Codex, OpenClaw, or Hermes. Users can describe a modeling goal in natural language, while SWMM execution remains deterministic, inspectable, and artifact-based.

This is not a simple chat-to-SWMM wrapper. The agent can help coordinate the workflow, but model files, SWMM runs, QA checks, plots, provenance records, audit notes, and modeling memory remain visible as reusable artifacts. Modeling memory can summarize repeated problems and propose skill refinements, but accepted changes still require human review and benchmark verification.

Authors: Zhonghao Zhang & Caterina Valeo
License: MIT

Video: Agentic SWMM workflow: introduction and workflow explanation

Paper: Agentic Modelling Pipeline: Reproducible Rapid Stormwater Modelling Management System with OpenClaw

Why this project exists

Stormwater modelling is rarely one command. A typical SWMM project can involve GIS preprocessing, rainfall formatting, parameter assignment, network assembly, INP construction, model execution, QA checks, plots, calibration, uncertainty analysis, and reporting.

Agentic SWMM provides a middle path between hand-scripted pipelines and opaque chat-to-SWMM automation: natural-language orchestration with deterministic SWMM execution, explicit provenance, project memory, and verification-first modelling.

The goal is not to replace SWMM or the modeller, but to make SWMM-based modelling easier to rerun, inspect, remember, and trust.

What makes it different

  • Quick onboarding: start from an explicit Docker run, or use local bootstrap scripts after reviewing them.
  • Agent-guided, SWMM-grounded: agents can coordinate tasks, while model execution stays deterministic, inspectable, and CLI-runnable.
  • Modular skill layer: GIS, climate, building, running, plotting, calibration, uncertainty, audit, and orchestration are separated into reusable modules with MCP interfaces where available.
  • Verification-first provenance: build, run, audit, and comparison stages emit traceable artifacts before outputs are treated as evidence.
  • Supervised skill evolution: audited runs can surface recurring workflow patterns and propose updates to existing skills or new skills, while staying coupled to the current skill-driven framework.

Try it in one command

Recommended Docker path:

mkdir -p agentic-swmm-runs && docker run --rm -v "$PWD/agentic-swmm-runs:/app/runs" ghcr.io/zhonghao1995/agentic-swmm-workflow:v0.5.0 acceptance

Docker writes artifacts to agentic-swmm-runs. Local installation is also available for macOS, Linux, and Windows, but review the install script before running it. Details: Installation and CLI guide.
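After the acceptance run, the artifacts should sit under agentic-swmm-runs. A minimal post-run sanity check could look like the sketch below; the artifact names are assumptions taken from this README's examples, not a stable contract of the CLI:

```python
from pathlib import Path

# Assumed artifact names (adjust to whatever your run's manifest records).
EXPECTED = ("model.inp", "model.rpt", "experiment_provenance.json")

def missing_artifacts(run_dir, expected=EXPECTED):
    """Return the expected artifact names not found anywhere under run_dir."""
    found = {p.name for p in Path(run_dir).rglob("*") if p.is_file()}
    return [name for name in expected if name not in found]
```

Calling `missing_artifacts("agentic-swmm-runs")` should return an empty list when the run completed and wrote its outputs.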

Workflow

Agentic SWMM modeling memory and controlled skill evolution loop

The workflow has three connected layers: execution, modeling memory, and controlled skill evolution. Natural-language requests can trigger reproducible SWMM actions; audited artifacts update human-readable and machine-readable memory; repeated patterns can produce skill-refinement proposals that still require human review and benchmark verification.

What a run can produce

  • generated or supplied SWMM input files such as model.inp
  • SWMM report and binary outputs such as .rpt and .out
  • manifests, command traces, QA summaries, and parsed peak-flow metrics
  • rainfall-runoff figures, calibration summaries, and fuzzy uncertainty summaries
  • audit records: experiment_provenance.json, comparison.json, and experiment_note.md
  • Obsidian-ready modelling notes and modelling-memory summaries
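The parsed peak-flow metrics in that list come from re-reading the SWMM report rather than trusting recorded numbers. The idea can be sketched over a simplified Outfall Loading Summary excerpt; real .rpt layouts and units vary, so this is an illustration of the re-parsing step, not a production parser:

```python
# A simplified excerpt in the spirit of a SWMM 5 Outfall Loading Summary.
SAMPLE_RPT = """\
  Outfall Loading Summary
  -------------------------------------------------------
                         Flow       Avg       Max       Total
                         Freq      Flow      Flow      Volume
  Outfall Node           Pcnt       CMS       CMS    10^6 ltr
  -------------------------------------------------------
  O1                    99.62     0.107     1.823       9.253
"""

def parse_outfall_peak_flows(rpt_text):
    """Collect {outfall name: max flow} rows from the summary excerpt.

    Heuristic: inside the Outfall Loading Summary, a data row is one name
    followed by four numeric columns; every other line is ignored.
    """
    peaks, in_section = {}, False
    for line in rpt_text.splitlines():
        parts = line.split()
        if "Outfall Loading Summary" in line:
            in_section = True
        elif in_section and len(parts) == 5:
            try:
                _freq, _avg, peak, _vol = (float(x) for x in parts[1:])
            except ValueError:
                continue  # header line that happens to have five tokens
            peaks[parts[0]] = peak
    return peaks
```

Here `parse_outfall_peak_flows(SAMPLE_RPT)` yields `{"O1": 1.823}`, the Max Flow column for outfall O1.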

Validation snapshot

The repository includes runnable benchmarks and research previews with different evidence boundaries. The README keeps only the index; figures, commands, and boundary notes live in Validation evidence.

  • Information-loss-guided subcatchment partition: QGIS-to-Agentic SWMM preprocessing using entropy and fuzzy-similarity concepts from Zhang & Valeo's Journal of Hydrology paper. Evidence boundary: GIS preprocessing concept, not a calibrated SWMM performance claim.
  • Raw GeoPackage-to-INP benchmark: public TUFLOW GeoPackage layers converted into SWMM-ready artifacts, QA, and audit. Evidence boundary: structured raw GIS path, not arbitrary CAD/GIS recognition.
  • Prepared-input SWMM benchmark: external 40-subcatchment Tecnopolo model execution, plotting, and direct swmm5 comparison. Evidence boundary: prepared INP validation path.
  • Prior Monte Carlo uncertainty smoke: Tecnopolo HORTON parameter perturbation and hydrograph envelope preview. Evidence boundary: prior uncertainty smoke, not calibration.
  • Optional INP-derived raw adapter benchmark: raw-like inputs extracted from a public SWMM fixture and rebuilt through the modular path. Evidence boundary: adapter handoff check, not greenfield watershed generation.

Examples: TUFLOW and Tecnopolo.

Audit and research memory

The audit layer consolidates artifacts, QA checks, and metric provenance into an Obsidian-compatible experiment note. This example catches a recorded peak-flow value that does not match the value re-parsed from the SWMM report source section.

Experiment audit comparison showing a peak-flow provenance mismatch
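The comparison stage behind that mismatch can be sketched as a tolerance check between recorded metrics and values re-parsed from the report. The field names and tolerance below are illustrative, not the actual comparison.json schema:

```python
def compare_metrics(recorded, reparsed, rel_tol=1e-3):
    """Flag metrics whose recorded value disagrees with the re-parsed one.

    `recorded` and `reparsed` map metric names to floats. The finding
    records mimic the spirit of comparison.json; the real schema may differ.
    """
    findings = []
    for metric, rec in recorded.items():
        rep = reparsed.get(metric)
        if rep is None:
            findings.append({"metric": metric, "status": "missing_in_report"})
        elif abs(rec - rep) > rel_tol * max(abs(rec), abs(rep)):
            findings.append({"metric": metric, "status": "mismatch",
                             "recorded": rec, "reparsed": rep})
    return findings
```

For example, `compare_metrics({"peak_flow_cms": 1.90}, {"peak_flow_cms": 1.823})` reports one mismatch finding, the kind of discrepancy the audit note surfaces for human review.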

The downstream modelling-memory layer can summarize audited run histories into recurring failure patterns, assumptions, missing evidence, QA issues, lessons learned, and controlled proposals for updating existing skills or creating new skills. Because skills drive the workflow, these proposals stay coupled to the current Agentic SWMM framework and still require human review and benchmark verification before acceptance.

More details: Experiment audit framework and Modeling memory and skill evolution.

Codex / OpenClaw / Hermes ready

Codex can serve as the primary local development runtime for this repository: it can inspect the checkout, run scripts, edit skills, generate audit records, update the local Obsidian vault, and review evidence before claims are accepted.

OpenClaw and Hermes remain compatible orchestration targets, especially for MCP-centered agent runs outside the Codex development environment.

For agent-orchestrated runs, preload the Agentic AI memory package and then use the top-level end-to-end skill:

agentic-ai/memory/
skills/swmm-end-to-end/SKILL.md

The top-level skill defines when to use the full modular path, when to use the prepared-input path, which QA gates must pass, and when to stop instead of inventing missing inputs.

For common prepared-input execution, audit, plotting, and memory summarization, the skill should prefer the unified agentic-swmm CLI. MCP tools remain available for the modular stages and for agent runtimes that need fine-grained tool calls.

More details: Codex runtime path, OpenClaw execution path, Skill installation, and MCP runtime integration.

Documentation map

Where collaborators can help

Contributions are welcome in additional SWMM case studies, stronger calibration and validation workflows, DEM / land-use / soil / drainage-asset workflows, new MCP tools, QA testing, tutorials, and interoperability with GIS, ML, and hydrologic toolchains.

Contact:

Citation

GitHub citation metadata is provided in CITATION.cff.

APA repository

Zhang, Z., & Valeo, C. (2026). agentic-swmm-workflow [Computer software]. GitHub. https://github.com/Zhonghao1995/agentic-swmm-workflow

APA manuscript / preprint

Zhang, Z., & Valeo, C. (2026). Agentic Modelling Pipeline: Reproducible Rapid Stormwater Modelling Management System with OpenClaw. https://doi.org/10.31223/X5F47G



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

agentic_swmm_workflow-0.5.1.tar.gz (305.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

agentic_swmm_workflow-0.5.1-py3-none-any.whl (350.9 kB)

Uploaded Python 3

File details

Details for the file agentic_swmm_workflow-0.5.1.tar.gz.

File metadata

  • Download URL: agentic_swmm_workflow-0.5.1.tar.gz
  • Upload date:
  • Size: 305.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for agentic_swmm_workflow-0.5.1.tar.gz:

  • SHA256: e63c5ce22ffcf087f6307fed496a2fe1944ff274c3974ce6b21d523ae233bbf5
  • MD5: 98256f19d494b4705d1c90ab4a931d97
  • BLAKE2b-256: a6648867800938d65b3277b1001eca7ade5f86898fb5391148f9b8c2cefdf004

See more details on using hashes here.

Provenance

The following attestation bundles were made for agentic_swmm_workflow-0.5.1.tar.gz:

Publisher: publish-pypi.yml on Zhonghao1995/agentic-swmm-workflow

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file agentic_swmm_workflow-0.5.1-py3-none-any.whl.

File metadata

File hashes

Hashes for agentic_swmm_workflow-0.5.1-py3-none-any.whl:

  • SHA256: 1719a6d7a8292c02ba6bbc718fd02ca1871cffe624a936b02f131453ecf863c4
  • MD5: bbefeb31ce59e7d1c5664d004153ac02
  • BLAKE2b-256: aff52a5290ff51c520d70897b602758732ac40533ce58a78b1fc41eeee48b8ba

See more details on using hashes here.

Provenance

The following attestation bundles were made for agentic_swmm_workflow-0.5.1-py3-none-any.whl:

Publisher: publish-pypi.yml on Zhonghao1995/agentic-swmm-workflow

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
