stemflow :bird:
A Python Package for Adaptive Spatio-Temporal Exploratory Model (AdaSTEM)
Documentation :book:
Installation :wrench:
```shell
pip install stemflow
```
Or using conda:
```shell
conda install -c conda-forge stemflow
```
Mini Test :test_tube:
To run an auto-mini test, call:
```python
from stemflow.mini_test import run_mini_test
run_mini_test(delet_tmp_files=False, speed_up_times=2)
```
Or, if you have cloned the package from the GitHub repo, you can run the Python script:
```shell
git clone https://github.com/chenyangkang/stemflow.git
cd stemflow
pip install -r requirements.txt  # install dependencies
chmod 755 setup.py
python setup.py  # installation
chmod 755 mini_test.py
python mini_test.py  # run the test
```
See section Mini Test for further illustrations of the mini test.
Brief introduction :information_source:
stemflow is a toolkit for the Adaptive Spatio-Temporal Exploratory Model (AdaSTEM [1, 2]) in Python. A typical use case is daily abundance estimation using eBird citizen science data. It leverages the "adjacency" information of surrounding target values in space and time to predict the classes/continuous values of target spatio-temporal points.
stemflow is positioned as a user-friendly Python package that meets the needs of general applications modeling large spatio-temporal datasets. Its scikit-learn-style object-oriented modeling pipeline enables concise model construction with compact parameterization at the user end, while the rest of the modeling procedure is carried out under the hood. Once the fitting method is called, the model class recursively splits the input training data into smaller spatio-temporal grids (called stixels) using the QuadTree algorithm. For each stixel, a base model is trained only on the data that falls into that stixel. Stixels are then aggregated to constitute an ensemble. In the prediction phase, stemflow queries stixels for the input data according to their spatial and temporal indices, followed by the corresponding base-model prediction. Finally, prediction results are aggregated across ensembles to generate robust estimations (see Fink et al., 2013 and the stemflow documentation for details).
In the demo, we use a two-step hurdle model as the "base model", with XGBClassifier for binary occurrence modeling and XGBRegressor for abundance modeling. If the task is to predict abundance, there are two ways to leverage the hurdle model. First, hurdle in AdaSTEM: use a hurdle model within each AdaSTEM (regressor) stixel. Second, AdaSTEM in hurdle: use AdaSTEMClassifier as the classifier of the hurdle model and AdaSTEMRegressor as its regressor. In the first case, the classifier and regressor "talk" to each other within each separate stixel (hereafter, "hurdle in Ada"); in the second case, the classifiers and regressors form two "unions" separately, and these two unions only "talk" to each other at the final combination, instead of in each stixel (hereafter, "Ada in hurdle"). Johnston (2015) used the first method. See section [Hurdle in AdaSTEM or AdaSTEM in hurdle?] for further comparisons.
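For contrast with the "hurdle in Ada" example shown later, the second pattern ("Ada in hurdle") can be sketched as follows. This is a minimal sketch: the base-model hyperparameters shown here mirror the demo settings and are assumptions, not requirements.

```python
from stemflow.model.AdaSTEM import AdaSTEMClassifier, AdaSTEMRegressor
from stemflow.model.Hurdle import Hurdle_for_AdaSTEM
from xgboost import XGBClassifier, XGBRegressor

## "Ada in hurdle": the classifiers form one AdaSTEM "union" and the
## regressors another; the two only meet at the final hurdle combination.
model = Hurdle_for_AdaSTEM(
    classifier=AdaSTEMClassifier(
        base_model=XGBClassifier(tree_method='hist', random_state=42, verbosity=0, n_jobs=1)
    ),
    regressor=AdaSTEMRegressor(
        base_model=XGBRegressor(tree_method='hist', random_state=42, verbosity=0, n_jobs=1)
    )
)
```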
Users can define the size of the stixels (spatio-temporal grids) in terms of space and time. A larger stixel promotes generalizability but loses precision at fine resolution; a smaller stixel may predict better within its exact area but has a reduced ability to extrapolate to points outside the stixel.
In the demo, we first split the training data using temporal sliding windows with a size of 50 days of year (DOY) and a step of 20 DOY (`temporal_start=1`, `temporal_end=366`, `temporal_step=20`, `temporal_bin_interval=50`). For each temporal slice, spatial gridding is applied: a stixel is forced to split into four smaller pieces if its edge is longer than 25 units (measured in longitude and latitude; `grid_len_lon_upper_threshold=25`, `grid_len_lat_upper_threshold=25`), and splitting stops to prevent the edge length from being chunked below 5 units (`grid_len_lon_lower_threshold=5`, `grid_len_lat_lower_threshold=5`) or the stixel containing fewer than 50 checklists (`points_lower_threshold=50`). See section [Optimizing Stixel Size] for a discussion of selecting gridding parameters. Model fitting is run on 4 cores (`njobs=4`).
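To make the sliding-window parameters concrete, here is an illustrative sketch (not part of stemflow's API) of how windows with those settings are laid out: each window spans `temporal_bin_interval` DOY and the next window starts `temporal_step` DOY later, so consecutive windows overlap by 30 DOY.

```python
# Illustrative only: lay out overlapping temporal windows the way the
# demo parameters describe (start=1, end=366, step=20, interval=50).
def temporal_windows(start=1, end=366, step=20, interval=50):
    """Return (window_start, window_end) pairs covering [start, end)."""
    windows = []
    t = start
    while t < end:
        windows.append((t, min(t + interval, end)))
        t += step
    return windows

windows = temporal_windows()
print(windows[:3])  # [(1, 51), (21, 71), (41, 91)]
```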
This process is executed 10 times (`ensemble_fold=10`), each time with a random jitter and random rotation of the gridding, generating 10 ensembles. In the prediction phase, only spatio-temporal points with at least 7 usable ensembles (`min_ensemble_required=7`) are predicted; otherwise, the prediction is set to `np.nan`.
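The masking rule can be illustrated with a small NumPy sketch (not stemflow internals): average each point's predictions across ensembles, and mask points that too few ensembles cover. The function name `aggregate_ensembles` is hypothetical.

```python
import numpy as np

# Illustrative sketch of the aggregation rule: mean across ensembles,
# np.nan wherever fewer than `min_ensemble_required` ensembles predicted.
def aggregate_ensembles(preds, min_ensemble_required=7):
    """preds: (n_ensembles, n_points) array; np.nan marks 'no prediction'."""
    preds = np.asarray(preds, dtype=float)
    usable = (~np.isnan(preds)).sum(axis=0)   # ensembles covering each point
    mean = np.nanmean(preds, axis=0)          # mean over available ensembles
    return np.where(usable >= min_ensemble_required, mean, np.nan)

# Point 0 is covered by all 10 ensembles and keeps its mean; point 1 is
# covered by only 5 ensembles (< 7) and is masked as np.nan.
preds = np.full((10, 2), np.nan)
preds[:, 0] = 2.0
preds[:5, 1] = 3.0
result = aggregate_ensembles(preds)  # point 0 -> 2.0, point 1 -> nan
```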
Usage :star:
Use Hurdle model as the base model of AdaSTEMRegressor:
```python
from stemflow.model.AdaSTEM import AdaSTEM, AdaSTEMClassifier, AdaSTEMRegressor
from stemflow.model.Hurdle import Hurdle
from xgboost import XGBClassifier, XGBRegressor

## "hurdle in Ada"
model = AdaSTEMRegressor(
    base_model=Hurdle(
        classifier=XGBClassifier(tree_method='hist', random_state=42, verbosity=0, n_jobs=1),
        regressor=XGBRegressor(tree_method='hist', random_state=42, verbosity=0, n_jobs=1)
    ),
    save_gridding_plot=True,
    ensemble_fold=10,
    min_ensemble_required=7,
    grid_len_lon_upper_threshold=25,
    grid_len_lon_lower_threshold=5,
    grid_len_lat_upper_threshold=25,
    grid_len_lat_lower_threshold=5,
    temporal_start=1,
    temporal_end=366,
    temporal_step=20,
    temporal_bin_interval=50,
    points_lower_threshold=50,
    Spatio1='longitude',
    Spatio2='latitude',
    Temporal1='DOY',
    use_temporal_to_train=True,
    njobs=1
)
```
Fitting and prediction methods follow the style of the sklearn BaseEstimator class:
```python
import numpy as np

## fit
model.fit(X_train.reset_index(drop=True), y_train)

## predict
pred = model.predict(X_test)
pred = np.where(pred < 0, 0, pred)
eval_metrics = AdaSTEM.eval_STEM_res('hurdle', y_test, pred)
print(eval_metrics)
```
Here, `pred` is the mean of the predicted values across ensembles.
See AdaSTEM demo for further functionality.
Plot QuadTree ensembles :evergreen_tree:
```python
model.gridding_plot
# Here, the model is an AdaSTEM class, not a hurdle class
```
Here, each color shows an ensemble generated during model fitting. In each of the 10 ensembles, regions (in terms of space and time) with more training samples were gridded at finer resolution, while sparse regions remained coarse. Prediction results were aggregated across the ensembles (that is, in this example, each data point passes through the model 10 times).
Example of visualization :world_map:
Daily Abundance Map of Barn Swallow
See section Prediction and Visualization for how to generate this GIF.
Contribute to stemflow :purple_heart:
Pull requests are welcome! Open an issue so that we can discuss the detailed implementation.
Application-level cooperation is also welcome! My domain knowledge is in avian ecology and evolution.
You can contact me at chenyangkang24@outlook.com
References: