Retrieve recent and historical forecasts from the High-Resolution Rapid Refresh (HRRR) model.
Brian's High-Resolution Rapid Refresh code: HRRR-B
HRRR Archive Website: http://hrrr.chpc.utah.edu/
HRRR-B, or "Herbie," is a Python package for downloading recent and archived High-Resolution Rapid Refresh (HRRR) model forecasts and opening HRRR data as an xarray.Dataset. I created most of this during my PhD and decided to organize that work into a more coherent package. It will continue to evolve at my leisure.
About HRRR-B
HRRR model output is archived by the MesoWest group at the University of Utah on the CHPC Pando Archive System. The GRIB2 files are copied from NCEP every couple of hours. Google also has a growing HRRR archive. Between these three data sources, there is a lot of archived HRRR data available. This Python package helps download those archived HRRR files.
- Download full or partial HRRR GRIB2 files. Partial files are downloaded by GRIB message.
- Three different data sources: NCEP-NOMADS, Pando (University of Utah), and Google Cloud.
  - Note: The Pando HRRR archive is in the process of moving to Amazon Web Services as a public Earth dataset. It will be available in zarr format, which will allow more flexibility for chunked data requests. See Issue #2.
- Open HRRR data as an xarray.Dataset.
- Other useful tools (in development), like indexing nearest neighbor points and getting a cartopy crs object.
🌹 What's in a name?
How do you pick the right name? For now, I settled on the name HRRR-B, pronounced "Herbie," because this package helps you get data from the High-Resolution Rapid Refresh (HRRR) model, and the B is for Brian. Is it a little pretentious to attach your own name to a package? Maybe the B will stand for something else someday. I'm also thinking about just naming this "Herbie," but that name is already taken on PyPI.
Contributing Guidelines (and disclaimer)
The intent of this package is to serve as an example of how you can download HRRR data from the Pando HRRR archive, but it might be useful to you as well. Since this package is a work in progress, it is distributed "as is." I do not make any guarantee it will work for you out of the box. Any revisions I make are purely for my benefit. Sorry if I break something, but I usually only push updates to GitHub if the code is in a reasonably functional state (at least, in the way I use it).
With that said, I am happy to share this project with you. You are welcome to open issues and submit pull requests, but know that I may or may not get around to doing anything about it. If this is helpful to you in any way, I'm glad.
🐍 Installation and Conda Environment
Option 1: pip
Install the latest published version from PyPI.
pip install hrrrb
Requires: xarray, cfgrib, pandas, requests, curl
Optional: matplotlib, cartopy
Option 2: conda
If conda environments are new to you, I suggest you become familiar with managing conda environments.
I have provided a sample Anaconda environment.yml file that lists the minimum required packages plus some extras that might be useful when working with other types of weather data. Look at the bottom lines of that yaml file... there are two ways to install hrrrb with pip. Comment out the lines you don't want.
For the latest development code:
  - pip:
      - git+https://github.com/blaylockbk/HRRR_archive_download.git
For the latest published version:
  - pip:
      - hrrrb
First, create the virtual environment with
conda env create -f environment.yml
Then, activate the hrrrb environment. Don't confuse this environment name with the package name.
conda activate hrrrb
Occasionally, you might want to update all the packages in the environment.
conda env update -f environment.yml
Alternative "Install" Method
There are several other ways to "install" a Python package so you can import it. One alternative is to
git clone https://github.com/blaylockbk/HRRR_archive_download.git
this repository to any directory. To import the package, you will need to update your PYTHONPATH environment variable to include the directory where you put this package, or add the line sys.path.append("/path/to/hrrrb") at the top of your Python script.
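For example, a minimal sketch of the sys.path approach (the path below is a placeholder for wherever you cloned the repository):

```python
import sys

# Placeholder path: point this at the directory that contains the hrrrb package.
sys.path.append("/path/to/hrrrb")

import hrrrb.archive as ha
```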
📝 Jupyter Notebooks
These notebooks show practical use cases of the hrrrb package:
These notebooks offer a deeper discussion on how the download process works. These are not intended for practical use, but should help illustrate my thought process when I created this package.
- Part 1: How to download a bunch of HRRR grib2 files (full file)
- Part 2: How to download a subset of variables from a HRRR file
- Part 3: A function that can download many full files, or subset of files
- Part 4: Opening GRIB2 files in Python with xarray and cfgrib
These are additional notebooks with useful tips and tricks.
👨🏻💻 hrrrb.archive
If you are looking for a no-fuss method to download the HRRR data you want, use the hrrrb.archive module.
import hrrrb.archive as ha
or
from hrrrb.archive import download_hrrr, xhrrr
| Main Functions | What it will do for you... |
|---|---|
| `download_hrrr` | Downloads full or partial HRRR GRIB2 files to local disk. |
| `xhrrr` | Downloads a single HRRR file and returns it as an `xarray.Dataset` or a list of Datasets. |
Function arguments
# Download full GRIB2 files to local disk
download_hrrr(DATES, searchString=None, fxx=range(0, 1),
              model='hrrr', field='sfc',
              save_dir='./', dryrun=False, verbose=True)

# Download file and open as xarray
xhrrr(DATE, searchString, fxx=0, DATE_is_valid_time=False,
      remove_grib2=True, add_crs=True, **download_kwargs)
- `DATES`: Datetime or list of datetimes representing the model initialization time.
- `searchString`: See note below.
- `fxx`: Range or list of forecast hours, e.g., `range(0, 19)` for F00-F18. Default is the model analysis (F00).
- `model`: The type of model. Options are `hrrr`, `alaska`, `hrrrX`.
- `field`: The type of field file. Options are `sfc` and `prs` (`nat` and `subh` are only available for today and yesterday).
- `save_dir`: The directory path the files will be saved in. Default downloads files into the user's home directory, `~/data/hrrr`.
- `download_source_priority`: The default source priority is `['pando', 'google', 'nomads']`, but you might want to try downloading a file from Google before trying Pando. In that case, set this to `['google', 'pando', 'nomads']`.
- `dryrun`: If `True`, the function will tell you what it would download but not actually download anything.
- `verbose`: If `True`, prints lots of info to the screen.
Specific to `xhrrr`:

- `DATE_is_valid_time`: If `True`, the input DATE represents the valid time. If `False`, DATE represents the model run time.
- `remove_grib2`: The downloaded GRIB2 file will be removed after the data is read into an xarray Dataset.
- `add_crs`: Creates a cartopy coordinate reference system object and appends it as a Dataset attribute.
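To show how these arguments fit together, here is a minimal usage sketch; the date, forecast hours, and search string are placeholders, and the calls follow the signatures and defaults listed above.

```python
from datetime import datetime
from hrrrb.archive import download_hrrr, xhrrr

# Placeholder model initialization time (00z, 14 Dec 2020).
run = datetime(2020, 12, 14, 0)

# Dry run: list the full surface-field files for F00-F06 without downloading anything.
download_hrrr([run], fxx=range(0, 7), model='hrrr', field='sfc', dryrun=True)

# Download one forecast hour, keep only 2-m temperature, and open it as an xarray.Dataset.
ds = xhrrr(run, searchString=':TMP:2 m', fxx=12)
```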
The `searchString` argument

`searchString` is used to specify the select variables you want to download. For example, instead of downloading the full GRIB2 file, you could download just the wind or precipitation variables. Read the functions' docstrings or look at notebook #2 for more details.

`searchString` uses regular expressions to search for GRIB message lines in the file's .idx file. There must be a .idx file for the GRIB2 file for the search to work.
For reference, here are some useful examples to give you some ideas...
| searchString= | GRIB fields that will be downloaded |
|---|---|
| `':TMP:2 m'` | Temperature at 2 m |
| `':TMP:'` | Temperature fields at all levels |
| `':UGRD:.* mb'` | U wind at all pressure levels |
| `':500 mb:'` | All variables on the 500 mb level |
| `':APCP:'` | All accumulated precipitation fields |
| `':APCP:surface:0-[1-9]*'` | Accumulated since initialization time |
| `':APCP:surface:[1-9]*-[1-9]*'` | Accumulated over the last hour |
| `':UGRD:10 m'` | U wind component at 10 meters |
| `':(U\|V)GRD:'` | U and V wind components at all levels |
| `':.GRD:'` | (Same as above) |
| `'(WIND\|GUST\|UGRD\|VGRD):(surface\|10 m)'` | Surface wind, surface gusts, and 10-m u- and v-components |
| `':(TMP\|DPT):'` | Temperature and dew point at all levels |
| `':(TMP\|DPT\|RH):'` | Temperature, dew point, and relative humidity at all levels |
| `':REFC:'` | Composite reflectivity |
| `':(APCP\|REFC):'` | Precipitation and reflectivity |
| `':surface:'` | All variables at the surface |
| `'((U\|V)GRD:10 m\|TMP:2 m\|APCP)'` | 10-m wind, 2-m temperature, and accumulated precipitation |
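As a sketch of how one of these search strings gets used (the date and forecast-hour range are placeholders):

```python
from datetime import datetime
from hrrrb.archive import download_hrrr

# Download only the 10-m U and V wind components for F00-F18 of a single run.
download_hrrr([datetime(2021, 1, 1, 12)],
              searchString=':(U|V)GRD:10 m',
              fxx=range(0, 19))
```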
Are you working with precipitation fields? (e.g., APCP)
A lot of users have asked why the precipitation accumulation fields are all zero for the model analysis (F00). That is because it is an accumulation variable over a period of time. At the model analysis time, there has been no precipitation because no time has passed.
- F00 only has a 0-0 hour accumulation (always zero)
- F01 only has a 0-1 hour accumulation
- F02 has a 0-2 hour accumulation and a 1-2 hour accumulation.
- F03 has a 0-3 hour accumulation and a 2-3 hour accumulation.
- etc.
NOTE: When cfgrib reads a GRIB file with more than one accumulated precipitation field, it will not read all of the fields. I think this is an issue with cfgrib (see issue here). The way around this is to key in on a single APCP field. See the `searchString` examples above for keying in on a single APCP field.
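For instance, a minimal sketch (the date and forecast hour are placeholders) that keys in on only the run-total accumulation so cfgrib sees a single APCP field:

```python
from datetime import datetime
from hrrrb.archive import xhrrr

# Only the 0-12 hour accumulation is requested from the F12 file.
ds = xhrrr(datetime(2020, 12, 14, 0), searchString=':APCP:surface:0-[1-9]*', fxx=12)
```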
Quickly look at GRIB files in the command line
There are two tools for looking at GRIB file contents in the command line.
- `wgrib2`: Can be installed via conda-forge in your environment. A product from NOAA.
- `grib_ls`: A dependency of cfgrib; included when you install cfgrib in your environment. A product from ECMWF.
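Since wgrib2 is distributed on conda-forge, a typical way to add it to a conda environment is `conda install -c conda-forge wgrib2`.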
For the sample precipitation data, below is the output from both tools:
$ wgrib2 subset_20201214_hrrr.t00z.wrfsfcf12.grib2
1:0:d=2020121400:APCP:surface:0-12 hour acc fcst:
2:887244:d=2020121400:APCP:surface:11-12 hour acc fcst:
$ grib_ls subset_20201214_hrrr.t00z.wrfsfcf12.grib2
subset_20201214_hrrr.t00z.wrfsfcf12.grib2
edition centre date dataType gridType typeOfLevel level stepRange shortName packingType
2 kwbc 20201214 fc lambert surface 0 0-12 tp grid_complex_spatial_differencing
2 kwbc 20201214 fc lambert surface 0 11-12 tp grid_complex_spatial_differencing
2 of 2 messages in subset_20201214_hrrr.t00z.wrfsfcf12.grib2
2 of 2 total messages in 1 files
I hope some of these tips are helpful to you.
Best of luck 🍀
- Brian
🌐 HRRR Archive Website: http://hrrr.chpc.utah.edu/
🚑 Support: GitHub Issues or atmos-mesowest@lists.utah.edu
📧 Brian Blaylock: blaylockbk@gmail.com
✒ Pando HRRR Archive citation details:
Blaylock B., J. Horel and S. Liston, 2017: Cloud Archiving and Data Mining of High Resolution Rapid Refresh Model Output. Computers and Geosciences. 109, 43-50. https://doi.org/10.1016/j.cageo.2017.08.005
Thanks for using HRRR-B