
CropHarvest

CropHarvest is an open source remote sensing dataset for agriculture with benchmarks. It collects data from a variety of agricultural land use datasets and remote sensing products.

The dataset consists of 95,186 datapoints, of which 33,205 (35%) have multiclass labels. All other datapoints only have binary crop / non-crop labels.

70,213 (74%) of these labels are paired with remote sensing and climatology data, specifically Sentinel-2, Sentinel-1, the SRTM Digital Elevation Model, and ERA5 climatology data.
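
The band ordering of the processed satellite arrays is exposed via cropharvest.bands.BANDS (it appears again in the FAQ below), so a quick way to see which bands accompany each labelled point is:

from cropharvest.bands import BANDS

# The processed satellite arrays have one column per band, in this order.
print(len(BANDS))  # 18 bands, drawn from Sentinel-2, Sentinel-1, SRTM and ERA5
print(BANDS)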

21 datasets are aggregated into CropHarvest; these are documented here.

More details about CropHarvest and the benchmarks are available in this paper.

Pipeline

The code in this repository

  1. combines the constituent datasets into a single GeoJSON file,
  2. exports the associated satellite data from Earth Engine,
  3. combines both datasets to create (X, y) training tuples, and
  4. exposes those tuples via a Dataset object.

The pipeline through which this happens is shown in the diagram below:

[Pipeline diagram]

All blue boxes are associated with code in this repository; anything green is data accessible via Zenodo. By default, the Zenodo data is downloaded to the data folder, and the data folder's README describes the exact structure of the data.

There are some cases where you may need to use the EarthEngineExporter directly; these use cases are demonstrated in the demo_exporting_data.ipynb notebook and sketched below.
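
For orientation, here is a rough sketch of what direct use might look like. This is not the authoritative API: the dest_bucket argument and the export_for_labels method are assumptions on my part, so defer to the notebook for the real signatures.

from cropharvest.datasets import CropHarvestLabels
from cropharvest.eo import EarthEngineExporter

# Load the labels whose satellite data we want to export.
labels = CropHarvestLabels("data").as_geojson()

# Assumed API: export satellite data for each labelled row to a cloud bucket.
EarthEngineExporter(dest_bucket="my-bucket").export_for_labels(labels=labels)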

Installation

Linux and MacOS users can install the latest version of CropHarvest with the following command:

pip install cropharvest

Windows users must install CropHarvest within a conda environment to ensure all dependencies are installed correctly:

conda install 'fiona>=1.5' 'rasterio>=1.2.6'
pip install cropharvest

In addition, it may be necessary to install h5py using conda as well (conda install 'h5py>3.7.0') prior to pip-installing cropharvest.
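
Whichever route you take, a quick way to confirm the install succeeded is an import plus a version check via the standard-library importlib.metadata (the version printed here is just an example):

# Sanity-check the installation.
from importlib.metadata import version

import cropharvest  # fails here if a dependency is broken

print(version("cropharvest"))  # e.g. 0.7.0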

Getting started

See the demo.ipynb notebook for an example of how to download the data from Zenodo and train a random forest on it.

For more examples of models trained against this dataset, see the benchmarks.

Contributing

If you would like to contribute a dataset, please see the contributing readme.

FAQ (questions asked at least once)

How do I use CropHarvest for a specific geography?

All the data is accessible through the cropharvest.datasets.CropHarvest object. The main parameters you might want to manipulate are controlled through a cropharvest.datasets.Task, which takes the following inputs:

  • A bounding box, which defines the spatial boundaries of the labels retrieved
  • A target label, which defines the class of the positive labels (if this is left as None, then the positive class will be crops and the negative class will be non-crops)
  • A boolean defining whether or not to balance the crops and non-crops in the negative class
  • A test_identifier string, which tells the dataset whether or not to retrieve a file from the test_features folder and return it as the test data.

So if I wanted to use this to train a model to identify crop vs. non-crop in France, I might do it like this:

from sklearn.ensemble import RandomForestClassifier

from cropharvest.datasets import Task, CropHarvest
from cropharvest.countries import get_country_bbox

# folder into which the data will be downloaded (or where it already exists)
data_dir = "data"

# get_country_bbox returns a list of bounding boxes.
# the one representing Metropolitan France is the 2nd box
metropolitan_france_bbox = get_country_bbox("France")[1]

task = Task(bounding_box=metropolitan_france_bbox, normalize=True)

my_dataset = CropHarvest(data_dir, task)

X, y = my_dataset.as_array(flatten_x=True)
model = RandomForestClassifier(random_state=0)
model.fit(X, y)
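
Building on this example, here is a hedged sketch of a crop-type task rather than a binary crop vs. non-crop one. target_label comes straight from the parameter list above, but balance_negative_crops is an assumed name for the balancing boolean, so check the Task signature for the exact keyword:

maize_task = Task(
    bounding_box=metropolitan_france_bbox,
    target_label="maize",         # positive class; other labelled crops form the negatives
    balance_negative_crops=True,  # assumed keyword for the balancing boolean described above
    normalize=True,
)
maize_dataset = CropHarvest(data_dir, maize_task)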

How do I load a specific pixel timeseries?

The specific use case here is to retrieve NDVI values for a specific row in the labels.geojson. Here is how you might go about doing that:

First, I will load the geojson. I'll do it through a CropHarvestLabels object, which is just a wrapper around the geojson but provides some nice utility functions.

>>> from cropharvest.datasets import CropHarvestLabels
>>>
>>> labels = CropHarvestLabels("cropharvest/data")
>>> labels_geojson = labels.as_geojson()
>>> labels_geojson.head()
  harvest_date planting_date  ... is_test                                           geometry
0         None          None  ...   False  POLYGON ((37.08252 10.71274, 37.08348 10.71291...
1         None          None  ...   False  POLYGON ((37.08721 10.72398, 37.08714 10.72429...
2         None          None  ...   False  POLYGON ((37.08498 10.71371, 37.08481 10.71393...
3         None          None  ...   False  POLYGON ((37.09021 10.71320, 37.09014 10.71341...
4         None          None  ...   False  POLYGON ((37.08307 10.72160, 37.08281 10.72197...

[5 rows x 13 columns]

Now, I can use the labels object to retrieve the filepath of the processed satellite data for each row in the labels geojson:

>>> path_to_file = labels._path_from_row(labels_geojson.iloc[0])

This processed satellite data is stored as HDF5 files, so I can load it with h5py as follows:

>>> import h5py
>>> h5py_file = h5py.File(path_to_file, "r")
>>> x = h5py_file.get("array")[:]
>>> x.shape
(12, 18)

The shape of x represents 12 timesteps and 18 bands. To retrieve the band I am interested in:

>>> from cropharvest.bands import BANDS
>>> x[:, BANDS.index("NDVI")]
array([0.28992072, 0.28838343, 0.26833579, 0.22577633, 0.27138986,
       0.06584114, 0.498998  , 0.50147203, 0.50437743, 0.44326343,
       0.33735849, 0.28375967])

These are 12 NDVI values, corresponding to the 12 months captured in this timeseries. To find out exactly which month each timestep represents, I can do

>>> labels_geojson.iloc[0].export_end_date
'2021-02-01T00:00:00'

This tells me that the last timestep represents January 2021; I can work backwards from there.
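
A minimal sketch of that backwards walk using only the standard library, assuming monthly timesteps ending the month before export_end_date:

from datetime import datetime

end = datetime.fromisoformat("2021-02-01T00:00:00")

# Step back one month at a time, starting from the month before export_end_date.
months = []
year, month = end.year, end.month
for _ in range(12):
    month -= 1
    if month == 0:
        year, month = year - 1, 12
    months.append(f"{year}-{month:02d}")
months.reverse()

print(months)  # ['2020-02', '2020-03', ..., '2021-01']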

What is the data format?

The structure of the different data files is described in depth in the data folder's [README](https://github.com/nasaharvest/cropharvest/blob/main/data/README.md).

License

CropHarvest has a Creative Commons Attribution-ShareAlike 4.0 International license.

Citation

If you use CropHarvest in your research, please use the following citation:

@inproceedings{
    tseng2021cropharvest,
    title={CropHarvest: A global dataset for crop-type classification},
    author={Gabriel Tseng and Ivan Zvonkov and Catherine Lilian Nakalembe and Hannah Kerner},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=JtjzUXPEaCu}
}
