
stemflow :bird:

stemflow logo

A package for Adaptive Spatio-Temporal Model (AdaSTEM) in python.


Installation :wrench:

pip install stemflow

Mini Test :test_tube:

To run an automatic mini test, simply call:

from stemflow.mini_test import run_mini_test

run_mini_test(delet_tmp_files=True)

Or, if you cloned the package from the GitHub repo, you can run the Python script directly:

git clone https://github.com/chenyangkang/stemflow.git
cd stemflow

pip install -r requirements.txt  # install dependencies
pip install .                    # install stemflow

python mini_test.py              # run the test

See the Mini Test section for further details.

Brief introduction :information_source:

stemflow is a toolkit for the Adaptive Spatio-Temporal Exploratory Model (AdaSTEM [1,2]) in Python. A typical use case is daily abundance estimation using eBird citizen science data. It leverages the "adjacency" information of surrounding target values in space and time to predict the classes/continuous values of the target spatio-temporal points. In the demo, we use a two-step hurdle model as the "base model", with XGBClassifier for occurrence modeling and XGBRegressor for abundance modeling.
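
As a minimal, conceptual sketch of the hurdle idea only (not stemflow's Hurdle_for_AdaSTEM implementation; the helper hurdle_fit_predict below is made up for illustration), the two steps look roughly like this:

from xgboost import XGBClassifier, XGBRegressor

def hurdle_fit_predict(X_train, y_train, X_test):
    # Step 1: classify presence/absence.
    clf = XGBClassifier(tree_method='hist', random_state=42)
    clf.fit(X_train, (y_train > 0).astype(int))

    # Step 2: regress abundance using only the positive samples.
    reg = XGBRegressor(tree_method='hist', random_state=42)
    reg.fit(X_train[y_train > 0], y_train[y_train > 0])

    # Predicted abundance is zeroed wherever occurrence is predicted absent.
    return clf.predict(X_test) * reg.predict(X_test)

stemflow's actual hurdle wrapper is shown in the Usage section below.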

Users can define the size of a stixel (spatio-temporal pixel) in terms of space and time. Larger stixels promote generalizability but lose precision at fine resolutions; smaller stixels may predict better within their own area but extrapolate poorly to points outside the stixel.

In the demo, we first split the training data using temporal sliding windows with a size of 50 days of year (DOY) and a step of 20 DOY (temporal_start=1, temporal_end=366, temporal_step=20, temporal_bin_interval=50). For each temporal slice, spatial gridding is applied: a stixel is split into four smaller pieces whenever its edge exceeds 25 units (measured in longitude and latitude; grid_len_lon_upper_threshold=25, grid_len_lat_upper_threshold=25), and splitting stops before an edge shrinks below 5 units (grid_len_lon_lower_threshold=5, grid_len_lat_lower_threshold=5) or a stixel contains fewer than 50 checklists (points_lower_threshold=50). Model fitting runs on 4 cores (njobs=4).
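
As a quick sanity check of these temporal settings (an illustration only; stemflow's internal boundary handling may differ), the sliding windows they imply can be enumerated directly:

# Enumerate the DOY sliding windows implied by the demo settings.
temporal_start, temporal_end = 1, 366
temporal_step, temporal_bin_interval = 20, 50

windows = [(start, start + temporal_bin_interval)
           for start in range(temporal_start, temporal_end, temporal_step)]

print(windows[:3])   # [(1, 51), (21, 71), (41, 91)]
print(len(windows))  # 19 overlapping windows covering the year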

This process is executed 10 times (ensemble_fold=10), each time with random jitter and random rotation of the gridding, generating 10 ensembles. In the prediction phase, only spatio-temporal points covered by at least 7 usable ensembles (min_ensemble_required=7) are predicted; the rest are set to np.nan.
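
The masking rule can be illustrated with plain NumPy (a sketch of the behavior described above, not stemflow internals; the exact comparison the package uses may differ):

import numpy as np

min_ensemble_required = 7

# Hypothetical predictions: rows = spatio-temporal points, columns = 10 ensembles,
# with np.nan marking points that an ensemble does not cover.
ensemble_preds = np.random.rand(5, 10)
ensemble_preds[0, :5] = np.nan  # the first point is covered by only 5 ensembles

usable = np.sum(~np.isnan(ensemble_preds), axis=1)
mean_pred = np.where(usable >= min_ensemble_required,
                     np.nanmean(ensemble_preds, axis=1),
                     np.nan)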

Usage :star:

from stemflow.model.AdaSTEM import AdaSTEM, AdaSTEMClassifier, AdaSTEMRegressor
from stemflow.model.Hurdle import Hurdle_for_AdaSTEM
from xgboost import XGBClassifier, XGBRegressor

SAVE_DIR = './'

# By using a hurdle model, we first run a classification based on presence/absence information,
# then run a regression using only the positive samples.

model = Hurdle_for_AdaSTEM(
    classifier=AdaSTEMClassifier(base_model=XGBClassifier(tree_method='hist',random_state=42, verbosity = 0, n_jobs=1),
                                save_gridding_plot = True,
                                ensemble_fold=10, 
                                min_ensemble_required=7,
                                grid_len_lon_upper_threshold=25,
                                grid_len_lon_lower_threshold=5,
                                grid_len_lat_upper_threshold=25,
                                grid_len_lat_lower_threshold=5,
                                points_lower_threshold=50,
                                Spatio1='longitude',
                                Spatio2 = 'latitude', 
                                Temporal1 = 'DOY',
                                use_temporal_to_train=True,
                                njobs=4),
    regressor=AdaSTEMRegressor(base_model=XGBRegressor(tree_method='hist',random_state=42, verbosity = 0, n_jobs=1),
                                save_gridding_plot = True,
                                ensemble_fold=10, 
                                min_ensemble_required=7,
                                grid_len_lon_upper_threshold=25,
                                grid_len_lon_lower_threshold=5,
                                grid_len_lat_upper_threshold=25,
                                grid_len_lat_lower_threshold=5,
                                points_lower_threshold=50,
                                Spatio1='longitude',
                                Spatio2 = 'latitude', 
                                Temporal1 = 'DOY',
                                use_temporal_to_train=True,
                                njobs=4)
)

Fitting and prediction methods follow the style of sklearn estimators:

import numpy as np

## fit
model.fit(X_train.reset_index(drop=True), y_train)

## predict
pred = model.predict(X_test)
pred = np.where(pred < 0, 0, pred)  # clip negative abundance predictions to zero
eval_metrics = AdaSTEM.eval_STEM_res('hurdle', y_test, pred)
print(eval_metrics)

Here, pred is the mean of the predicted values across ensembles.

See AdaSTEM demo for further functionality.

Plot QuadTree ensembles :evergreen_tree:

model.classifier.gridding_plot
# or model.regressor.gridding_plot

QuadTree example

Here, each color shows an ensemble generated during model fitting. Within each of the 10 ensembles, regions (in space and time) with more training samples were gridded at finer resolution, while sparser regions remained coarse. Prediction results were aggregated across the ensembles (that is, in this example, the data were passed through 10 times).


Example of visualization :world_map:

GIF visualization

See section Prediction and Visualization for how to generate this GIF.
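
The documentation covers the stemflow-specific workflow; as a generic sketch (assuming you already hold per-point predictions in a DataFrame pred_df with 'longitude', 'latitude', 'DOY', and 'pred' columns, all names hypothetical), one frame per DOY can be assembled into a GIF with matplotlib:

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

fig, ax = plt.subplots(figsize=(6, 4))

def draw_frame(doy):
    # Redraw the scatter of predicted abundance for one day of year.
    ax.clear()
    frame = pred_df[pred_df['DOY'] == doy]
    ax.scatter(frame['longitude'], frame['latitude'], c=frame['pred'], s=3)
    ax.set_title(f'Predicted abundance, DOY {doy}')

anim = FuncAnimation(fig, draw_frame, frames=sorted(pred_df['DOY'].unique()))
anim.save('abundance.gif', writer=PillowWriter(fps=10))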


Documentation :book:

stemflow Documentation

Contribute to stemflow :purple_heart:

Pull requests are welcome! Open an issue so that we can discuss the detailed implementation.

Application-level collaboration is also welcome! My domain knowledge is in avian ecology and evolution.

You can contact me at chenyangkang24@outlook.com


References:

  1. Fink, D., Damoulas, T., & Dave, J. (2013, June). Adaptive Spatio-Temporal Exploratory Models: Hemisphere-wide species distributions from massively crowdsourced eBird data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 27, No. 1, pp. 1284-1290).

  2. Fink, D., Auer, T., Johnston, A., Ruiz‐Gutierrez, V., Hochachka, W. M., & Kelling, S. (2020). Modeling avian full annual cycle distribution and population trends with citizen science data. Ecological Applications, 30(3), e02056.

