# Welcome to climdata

This project automates the fetching and extraction of weather data from multiple sources (MSWX, DWD HYRAS, ERA5-Land, NASA-NEX-GDDP, and more) for a given location and time range.

## Quickstart & Overview
ClimData provides a unified interface for extracting climate data from multiple providers (MSWX, CMIP, POWER, DWD, HYRAS), computing extreme indices, and converting results to tabular form. The ClimData (or ClimateExtractor) class is central: it manages configuration, extraction, index computation, and common I/O.
## Key features
- Provider-agnostic extraction (point / region / shapefile)
- Unit normalization via xclim
- Compute extreme indices using package indices
- Convert xarray Datasets → long-form pandas DataFrames
- Simple workflow runner for chained actions
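To illustrate the "long-form" conversion idea, here is a hedged pandas-only sketch: a wide table (one column per variable) is melted into one row per (time, variable, value) triple. climdata performs the analogous step from an `xarray.Dataset` via its `to_dataframe` method; the data below is made up for illustration.

```python
# Wide -> long conversion in plain pandas, mirroring what a
# Dataset-to-long-form step produces: one row per (time, variable, value).
import pandas as pd

wide = pd.DataFrame({
    "time": ["2014-01-01", "2014-01-02"],
    "tasmax": [275.1, 276.3],
    "tasmin": [268.4, 269.0],
})

# melt keeps "time" as an identifier and stacks the variable columns
long = wide.melt(id_vars="time", var_name="variable", value_name="value")
print(long.shape)  # (4, 3): 2 dates x 2 variables, 3 columns
```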
## Installation

Create and activate a conda environment:

```bash
# create
conda create -n climdata python=3.11 -y
# activate
conda activate climdata
```

Install via pip (PyPI, if available) or from source:

```bash
# from PyPI
pip install climdata

# or from local source (editable)
git clone <repo-url>
cd climdata
pip install -e .
```

Install optional extras as needed (e.g., xclim, shapely, hydra, dask):

```bash
pip install xarray xclim shapely hydra-core dask "pandas>=1.5"
```
## Quick example

```python
from climdata import ClimData  # or: from climdata.utils.wrapper_workflow import ClimateExtractor

overrides = [
    "dataset=mswx",
    "lat=52.5",
    "lon=13.4",
    "time_range.start_date=2014-01-01",
    "time_range.end_date=2014-12-31",
    "variables=[tasmin,tasmax,pr]",
    "data_dir=/path/to/data",
    "index=tn10p",
]

# initialize
extractor = ClimData(overrides=overrides)

# extract data (returns an xarray.Dataset and updates internal state)
ds = extractor.extract()

# compute the configured index (uses cfg.index)
ds_index = extractor.calc_index(ds)

# convert to a long-form DataFrame and save
df = extractor.to_dataframe(ds_index)
extractor.to_csv(df, filename="index.csv")
```
## Workflow runner

Use `run_workflow` for multi-step sequences:

```python
result = extractor.run_workflow(actions=["extract", "calc_index", "to_dataframe", "to_csv"])
```

The returned `WorkflowResult` contains the produced dataset(s), dataframe(s), and filenames.
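The chaining idea behind such a runner can be sketched in a few lines of plain Python. This is an illustrative toy, not climdata's actual implementation: the action names match the ones above, but the callables and data are invented stand-ins.

```python
# Minimal action-chaining runner: each named action receives the previous
# action's result, so "extract" -> "calc_index" -> "to_dataframe" forms a
# pipeline.
class MiniRunner:
    def __init__(self):
        # map action names to callables (toy stand-ins for real steps)
        self._actions = {
            "extract": lambda _: {"tasmax": [301.2, 299.8]},  # pretend dataset
            "calc_index": lambda ds: {k: max(v) for k, v in ds.items()},
            "to_dataframe": lambda idx: [{"variable": k, "value": v}
                                         for k, v in idx.items()],
        }

    def run_workflow(self, actions):
        result = None
        for name in actions:
            result = self._actions[name](result)  # feed result forward
        return result

rows = MiniRunner().run_workflow(["extract", "calc_index", "to_dataframe"])
print(rows)  # [{'variable': 'tasmax', 'value': 301.2}]
```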
## Documentation & API

- See the API docs under `docs/api/` for detailed descriptions of the ClimData/ClimateExtractor methods.
- Examples and notebooks are under `examples/`.
## Contributing

- Run tests and lint locally.
- Follow project coding and documentation conventions; submit PRs with tests.

## License

Refer to the repository LICENSE file for terms.
## ⚡️ Tip

- Make sure `yq` is installed:

  ```bash
  brew install yq   # macOS
  # or
  pip install yq
  ```

- To see available variables for a specific dataset (for example `mswx`), run:

  ```bash
  python download_location.py --cfg job | yq '.mappings.mswx.variables | keys'
  ```
## ⚙️ Key Features

- Supports multiple weather data providers
- Uses `xarray` for robust gridded data extraction
- Handles curvilinear and rectilinear grids
- Uses a Google Drive Service Account for secure downloads
- Easily reproducible runs using Hydra
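The Hydra-style overrides shown in the quick example (`"time_range.start_date=2014-01-01"` and friends) resolve dotted keys into a nested configuration. The sketch below shows that resolution in plain Python for intuition only; Hydra itself does far more (config composition, interpolation, CLI handling).

```python
# Illustrative only: map Hydra-style "key=value" overrides onto a nested
# dict, where dots in the key create nested sections.
def apply_overrides(overrides):
    cfg = {}
    for item in overrides:
        key, value = item.split("=", 1)
        node = cfg
        *parents, leaf = key.split(".")   # dotted path -> nested dicts
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return cfg

cfg = apply_overrides(["dataset=mswx", "time_range.start_date=2014-01-01"])
print(cfg)  # {'dataset': 'mswx', 'time_range': {'start_date': '2014-01-01'}}
```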
## 📡 Google Drive API Setup
This project uses the Google Drive API with a Service Account to securely download weather data files from a shared Google Drive folder.
Follow these steps to set it up correctly:
### ✅ 1. Create a Google Cloud Project

- Go to the Google Cloud Console.
- Click “Select Project” → “New Project”.
- Enter a project name (e.g. `WeatherDataDownloader`).
- Click “Create”.
### ✅ 2. Enable the Google Drive API
- In the left sidebar, go to APIs & Services → Library.
- Search for “Google Drive API”.
- Click it, then click “Enable”.
### ✅ 3. Create a Service Account

- Go to IAM & Admin → Service Accounts.
- Click “Create Service Account”.
- Enter a name (e.g. `weather-downloader-sa`).
- Click “Create and Continue”. You can skip assigning roles for read-only Drive access.
- Click “Done” to finish.
### ✅ 4. Create and Download a JSON Key

- After creating the Service Account, click on its email address to open its details.
- Go to the “Keys” tab.
- Click “Add Key” → “Create new key” → choose `JSON` → click “Create”.
- A `.json` key file will download automatically. Store it securely!

### ✅ 5. Store the JSON Key Securely

- Place the downloaded `.json` key in the `conf` folder with the name `service.json`.
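For orientation, a Google service-account key file typically contains fields like the ones below. All values here are placeholders; use the file exactly as downloaded, without editing it.

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "…",
  "private_key": "-----BEGIN PRIVATE KEY-----\n…\n-----END PRIVATE KEY-----\n",
  "client_email": "weather-downloader-sa@your-project-id.iam.gserviceaccount.com",
  "client_id": "…",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```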
## Setup Instructions for the ERA5 API

### 1. CDS API Key Setup

- Create a free account on the Copernicus Climate Data Store.
- Once logged in, go to your user profile.
- Click the “Show API key” button.
- Create the file `~/.cdsapirc` with the following content:

  ```
  url: https://cds.climate.copernicus.eu/api/v2
  key: <your-api-key-here>
  ```

- Make sure the file has the correct permissions:

  ```bash
  chmod 600 ~/.cdsapirc
  ```
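If you want to verify the permission step programmatically, the check below (stdlib only) confirms a file carries the owner-only `0600` mode. It is demonstrated on a temporary file rather than the real `~/.cdsapirc`.

```python
# Verify that a credentials file is readable/writable by its owner alone,
# i.e. has the 0600 permission bits that `chmod 600` sets.
import os
import stat
import tempfile

def has_owner_only_perms(path):
    mode = stat.S_IMODE(os.stat(path).st_mode)  # strip file-type bits
    return mode == 0o600

# demo on a throwaway temp file standing in for ~/.cdsapirc
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)              # same effect as `chmod 600 <file>`
ok = has_owner_only_perms(path)
print(ok)  # True
os.remove(path)
```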