hydroframe tools and utilities

hf_hydrodata

The hf_hydrodata Python package is a product of the HydroFrame project and is designed to provide easy access to national hydrologic simulations generated using the National ParFlow model (ParFlow-CONUS1 and ParFlow-CONUS2), as well as a variety of other gridded model input datasets and point observations. Some of the datasets provided here are direct observations (e.g., USGS streamflow observations), while others are model outputs (e.g., ParFlow-CONUS2) or data products (e.g., remote sensing products).

Installation

The best way to install hf_hydrodata is using pip. This installs our latest stable release with fully-supported features:

pip install hf_hydrodata

Users must create a HydroFrame API account and register their PIN before using the hf_hydrodata package. Please see Creating a HydroFrame API Account for detailed instructions.

Documentation

You can view the full package documentation on Read the Docs. Please see our Python API Reference for detail on each core method.

Usage

You can use hf_hydrodata to access both gridded data and point observation data from various datasets.

You can view the available datasets and variables in the documentation, or retrieve the lists of datasets and variables programmatically:

import hf_hydrodata as hf

datasets = hf.get_datasets()
variables = hf.get_variables({"dataset": "NLDAS2", "grid": "conus1"})

You can get gridded data using the get_gridded_data() function.

import hf_hydrodata as hf

options = {
  "dataset": "NLDAS2", "variable": "precipitation", "period": "hourly",
  "start_time": "2005-10-1", "end_time": "2005-10-2",
  # grid_bounds selects a subgrid as [x_min, y_min, x_max, y_max] in grid cell indices
  "grid_bounds": [100, 100, 200, 200]
}
data = hf.get_gridded_data(options)
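Once retrieved, the gridded data can be manipulated like any NumPy array. The sketch below is illustrative only: it assumes the hourly precipitation for a one-day request comes back as an array of shape (time, y, x), and uses a random array as a stand-in for a real download.

```python
import numpy as np

# Stand-in for the array returned by hf.get_gridded_data(options):
# 24 hourly time steps over a 100 x 100 cell subgrid.
data = np.random.default_rng(0).random((24, 100, 100))

# Total precipitation over the day at every grid cell.
daily_total = data.sum(axis=0)          # shape (100, 100)

# Domain-averaged hourly time series.
hourly_mean = data.mean(axis=(1, 2))    # shape (24,)

print(daily_total.shape, hourly_mean.shape)
```

The same axis-based reductions apply whatever the requested time range, since the time dimension is always first.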

hf_hydrodata supports access to a collection of site-level data from a variety of sources using the get_point_data() function.

The call below returns daily mean USGS streamflow data from January 1, 2022 through January 5, 2022 for sites within the bounding box with latitude bounds of (45, 50) and longitude bounds of (-75, -50).

from hf_hydrodata import get_point_data, get_point_metadata

data_df = get_point_data(
    dataset="usgs_nwis",
    variable="streamflow",
    temporal_resolution="daily",
    aggregation="mean",
    date_start="2022-01-01",
    date_end="2022-01-05",
    latitude_range=(45, 50),
    longitude_range=(-75, -50),
)
data_df.head(5)

# Get the metadata about the sites with returned data
metadata_df = get_point_metadata(
    dataset="usgs_nwis",
    variable="streamflow",
    temporal_resolution="daily",
    aggregation="mean",
    date_start="2022-01-01",
    date_end="2022-01-05",
    latitude_range=(45, 50),
    longitude_range=(-75, -50),
)
metadata_df.head(5)
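The observations and site metadata are straightforward to combine once both frames are in hand. The sketch below uses small hand-made DataFrames as stand-ins for the ones returned above; the shapes and column names (such as site_id and site_name) are illustrative assumptions, not the package's actual schema.

```python
import pandas as pd

# Stand-ins for the frames above: assume get_point_data() returns one
# column per site keyed by site id, and get_point_metadata() returns
# one row per site.
data_df = pd.DataFrame(
    {"01010000": [310.0, 295.0], "01010500": [120.0, 118.0]},
    index=pd.to_datetime(["2022-01-01", "2022-01-02"]),
)
metadata_df = pd.DataFrame(
    {"site_id": ["01010000", "01010500"],
     "site_name": ["St. John River", "Fish River"]}
)

# Reshape the observations to long form and attach the site names.
long_df = (
    data_df.rename_axis("date")
    .reset_index()
    .melt(id_vars="date", var_name="site_id", value_name="streamflow")
    .merge(metadata_df, on="site_id")
)
print(long_df.head())
```

A long-form table like this is convenient for grouping by site or plotting multiple sites at once.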

Please see the How To section of our documentation for in-depth examples using the point module functions. Additionally, our team has developed the subsettools Python package, which uses hf_hydrodata to access data and subsequently run a ParFlow simulation. Please see the subsettools documentation for full walk-through examples of extracting data for a domain and running a simulation on it.

State of the Field

The hf_hydrodata package spans multiple agencies and includes both site-level observations and national gridded datasets. This allows users to interact with data from many sources through a single API. Existing packages such as the dataRetrieval R package provide some similar capabilities, allowing users to access a breadth of hydrologic site-level surface water and groundwater observations from the USGS. However, the dataRetrieval package is limited to USGS sources and is designed for R users. Our package goes beyond this to provide access to data from multiple agencies (for example, the SNOTEL and FluxNet observation networks). The hf_hydrodata package provides a common syntax for acquiring such observations, so that users need not spend valuable research time learning multiple syntaxes to get all the data relevant to their watershed. Additionally, the hf_hydrodata package gives users access to a wide selection of gridded data products. Many of these data products are not publicly available by other means, including inputs and outputs from the national ParFlow model and multiple gridded atmospheric forcing datasets.

Build Instructions

To build the package you must have a Python virtual environment containing the required dependencies. Install them with:

pip install -r requirements.txt

The Python source lives in src/hf_hydrodata, the unit tests in tests/hf_hydrodata, and the data catalog model CSV files in src/hf_hydrodata/model. Use Excel to edit the CSV files so that they are saved in standard CSV format.

Generate the documentation with:

cd docs
make html

This validates the model CSV files and generates the Read the Docs HTML into the html folder.

Testing

Our tests are located within the tests/hf_hydrodata directory of this repository. The full test suite is run automatically via Jenkins with each new Pull Request and subsequent commits. Jenkins executes the tests using pytest from the root directory.

License

Copyright © 2024 The Trustees of Princeton University and The Arizona Board of Regents on behalf of The University of Arizona, College of Science Hydrology & Atmospheric Sciences. All rights reserved.

hf_hydrodata was created by William M. Hasling, Laura Condon, Reed Maxwell, George Artavanis, Will Lytle, Amy M. Johnson, Amy C. Defnet. It is licensed under the terms of the MIT license. For details, see the LICENSE file.

Data Use Policy

The software is licensed under the MIT license, but the data is governed by a Data Use Policy.

Report an Issue

If you have a question about our code or find an issue, please create a GitHub Issue with enough information for us to reproduce what you are seeing.

Contribute

If you would like to contribute to hf_hydrodata, please open a GitHub Issue describing your plan, to initiate a conversation with our development team. Detailed implementation review will then be done via a pull request.
