# uscrn

Easily load U.S. Climate Reference Network (CRN) data.
With `uscrn`, fetching and loading years of data for all CRN sites[^a] takes just one line of code[^b].

Example:

```python
import uscrn as crn

df = crn.get_data(2019, "hourly", n_jobs=6)  # pandas.DataFrame
ds = crn.to_xarray(df)  # xarray.Dataset, with soil depth dimension if applicable (hourly, daily)
```
Both `df` (pandas) and `ds` (xarray) include dataset and variable metadata. For `df`, these are stored in `df.attrs` and can be preserved by writing to Parquet with the PyArrow engine (pandas v2.1+):

```python
df.to_parquet("crn_2019_hourly.parquet", engine="pyarrow")
```
Conda install example[^c]:

```shell
conda create -n crn -c conda-forge python=3.10 joblib numpy pandas pyyaml requests xarray pyarrow netcdf4
conda activate crn
pip install --no-deps uscrn
```
[^a]: Use `uscrn.load_meta()` to load the site metadata table.

[^b]: Not counting the `import` statement...

[^c]: `uscrn` is not yet on conda-forge.
Source distribution: `uscrn-0.1.0b3.tar.gz` (15.0 kB)

Built distribution: `uscrn-0.1.0b3-py3-none-any.whl` (14.8 kB)