Earth Engine + Apache Beam
Project description
GeeBeam
Google Earth Engine + Apache Beam for building geospatial training datasets
Purpose:
GeeBeam is a lightweight library for building and executing Apache Beam pipelines that download data "chips" from Google Earth Engine and write them to TensorFlow records for model training.
The user defines the Earth Engine images they want to download chips from using the Python earthengine-api. GeeBeam then serializes the graph definitions of the images so they can be passed to the Beam workers.
The pipelines can be run locally or on Google Cloud Dataflow. (Note: local jobs are currently limited to short-running tasks due to a grpc "Deadline Exceeded" error.)
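Because Beam workers run in separate processes (possibly on remote machines), the client-side ee.Image objects cannot be shipped directly; only their serialized expression graphs are. The earthengine-api supports this via Image.serialize() and ee.deserializer.fromJSON(); the round trip can be sketched with a plain dict standing in for an expression graph (illustrative only, since exercising the real API requires an authenticated Earth Engine session):

```python
import json

# Hypothetical stand-in for an Earth Engine expression graph: a nested
# dict describing operations (no pixel data is ever shipped to workers).
graph = {"op": "gt", "args": [{"op": "min", "src": "MODIS/061/MCD64A1"}, 0]}

# Client side: serialize the graph definition to a string.
payload = json.dumps(graph)

# Worker side: rebuild an equivalent graph definition from the string.
rebuilt = json.loads(payload)
assert rebuilt == graph
```

Each worker rebuilds the image from the string and issues its own download requests, so no shared client state is needed.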
Install:
pip install geebeam
Examples:
Running locally:
Here we'll create a burned area mask for 2024 using the MCD64A1 product. For example, this could be the target variable for a burn risk model.
import ee
import geebeam
import google.auth
# Get default project id from environment (or specify PROJECT_ID manually)
PROJECT_ID = google.auth.default()[1]
# Initialize ee client, replace with your GCP project ID
ee.Initialize(project=PROJECT_ID)
# Build image for download
burned_2024 = (
    ee.ImageCollection('MODIS/061/MCD64A1')
    .select('BurnDate')
    .filter(ee.Filter.calendarRange(2024, 2024, 'year'))
    .min()
    .gt(0)
    .rename(['Burn'])
)
# Building and triggering the pipeline is done with a single command:
geebeam.run_pipeline(
    image_list=[burned_2024],
    project=PROJECT_ID,
    patch_size=128,        # Pixel dimensions in each direction
    scale=500,             # Final export resolution in meters
    n_sample=10,           # Number of tiles to sample
    validation_ratio=0.2,  # Fraction to select as validation data
    output_path='./test_tf_data/',
    sampling_region=ee.Geometry.Rectangle(-63.0, -9.0, -56.0, -4.0)
)
Now let's add another dataset: MapBiomas Amazonia forest fraction
# MB Land-use/land-cover forest fraction
# Note that LULC codes less than 10 are forest in MapBiomas Amazon Collection 6
mb_amz_lulc = (
    ee.Image('projects/mapbiomas-public/assets/amazon/lulc/collection6/mapbiomas_collection60_integration_v1')
    .lt(10)
    .reduceResolution(ee.Reducer.mean(), maxPixels=500)
)
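Here reduceResolution with a mean reducer turns the binary forest mask produced by .lt(10) into a forest *fraction* at the coarser export scale: each coarse pixel becomes the mean of the 0/1 fine pixels it covers. A stand-in on plain lists (not the Earth Engine implementation):

```python
def coarse_fraction(fine_pixels):
    """Mean of binary forest (1) / non-forest (0) fine-pixel values."""
    return sum(fine_pixels) / len(fine_pixels)

# A coarse pixel covering 3 forest and 1 non-forest fine pixels:
print(coarse_fraction([1, 1, 1, 0]))  # 0.75
```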
# Exporting both together is as simple as this:
geebeam.run_pipeline(
    image_list=[burned_2024, mb_amz_lulc],
    project=PROJECT_ID,
    patch_size=128,
    scale=500,
    n_sample=10,
    validation_ratio=0.2,
    output_path='./test_tf_data/',
    sampling_region=ee.Geometry.Rectangle(-63.0, -9.0, -56.0, -4.0),
    num_workers=1
)
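The validation_ratio=0.2 setting routes roughly one in five sampled chips to a validation split. One common way to make such a split deterministic across workers and re-runs (a sketch of the general technique, not GeeBeam's internal logic; chip_id is a hypothetical stable identifier) is to hash each chip's identifier:

```python
import hashlib

def is_validation(chip_id: str, validation_ratio: float = 0.2) -> bool:
    """Deterministically assign a chip to the validation split."""
    digest = hashlib.md5(chip_id.encode()).digest()
    # Map the first hash byte to [0, 1) and compare against the ratio.
    return digest[0] / 256 < validation_ratio

# The assignment is stable across runs and workers:
assert is_validation("chip_42") == is_validation("chip_42")
splits = ["val" if is_validation(f"chip_{i}") else "train" for i in range(10)]
```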
Scaling up with Dataflow:
The export process can be scaled to many workers via Google Cloud Dataflow. First, write a script containing your geebeam.run_pipeline() call, then execute it with the Beam Dataflow runner:
python examples/geebeam_run.py \
--region=us-east1 \
--worker_zone=us-east1-b \
--runner=DataflowRunner \
--max_num_workers=8 \
--experiments=use_runner_v2 \
--temp_location=gs://[your-bucket]/[path_to_temp_dir] \
--machine_type=n2-highmem-2 \
--sdk_container_image=us-docker.pkg.dev/mmacedo-reservoirid/geebeam-public/geebeam:latest
Note that in this case your output_path in run_pipeline() should be a Google Cloud Storage path. If you're running an older version of geebeam, replace "latest" in the sdk_container_image URI with that version number (e.g. v0.1.2). You can also build your own Docker image to run on; see the Dataflow docs for more info.
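To build your own image, the usual pattern from the Apache Beam custom-container documentation is to start from a Beam Python SDK base image and install geebeam on top (the Python and Beam versions below are examples; match them to your local environment):

```dockerfile
# Example Dockerfile for a custom Dataflow worker image (versions illustrative).
FROM apache/beam_python3.10_sdk:2.57.0
RUN pip install --no-cache-dir geebeam
```

Push the image to Artifact Registry and point --sdk_container_image at it.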
See the Apache Beam and Google Cloud Dataflow docs for full documentation, e.g. pipeline command-line options.
Common Dataflow gotchas
- Before running, you must enable the Dataflow API in the Google Cloud Console.
- If you get an error stating "Subnetwork ''... does not have Private Google Access...", you may need to activate it for your subnetwork (replace us-east1 with your region):
gcloud compute networks subnets update default \
--region=us-east1 \
--enable-private-ip-google-access
- You can test your pipeline script (e.g. geebeam_run.py) and Beam options using the DirectRunner before submitting to Dataflow:
python examples/geebeam_run.py \
--runner=DirectRunner
See the Dataflow documentation on specifying a network and subnetwork for Dataflow jobs.
- For more common errors, see the Google Cloud Dataflow troubleshooting guide.
Alternatives:
- GeeFlow: Google DeepMind's GeeFlow fulfills a similar purpose. It is more flexible, allowing more user control over data processing, reprojection, and writing, but it is slower and no longer actively maintained. With the goal of meeting most users' needs, GeeBeam is designed to be easier and quicker to use, but allows for more limited data transformations.
- Export training data to Google Cloud Storage then download chips from there: This works, but if you need to get data from many different datasets it's slow to export all that data to Cloud Storage and can be expensive to store it there if you don't delete it quickly. This also uses a lot of Earth Engine compute hours, which are now subject to stricter monthly limits.
- Xee: Xee allows accessing Earth Engine objects as xarray.Datasets. You could use this to define an xarray.Dataset and download "chips" from it, but geebeam interfaces with Beam to automatically parallelize this task and export to TensorFlow records.
Project details
File details
Details for the file geebeam-0.2.2.tar.gz.
File metadata
- Download URL: geebeam-0.2.2.tar.gz
- Size: 14.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | aafb1900ccfe02cb391299b1a08b23a6f7a131b0ffd6cab203dfb32bc9578663 |
| MD5 | 013a94a09a7905acde8ba28135635b8e |
| BLAKE2b-256 | 1a5c79f2351a18377ee78dc1b23d88129da94e249a89313150ec25c1796586b3 |
Provenance
The following attestation bundles were made for geebeam-0.2.2.tar.gz:
Publisher: release.yml on kysolvik/geebeam
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: geebeam-0.2.2.tar.gz
- Subject digest: aafb1900ccfe02cb391299b1a08b23a6f7a131b0ffd6cab203dfb32bc9578663
- Sigstore transparency entry: 1189032796
- Permalink: kysolvik/geebeam@5325a795b11218da958ca396be84c1fa382b7169
- Branch / Tag: refs/tags/v0.2.2
- Owner: https://github.com/kysolvik
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@5325a795b11218da958ca396be84c1fa382b7169
- Trigger Event: release
File details
Details for the file geebeam-0.2.2-py3-none-any.whl.
File metadata
- Download URL: geebeam-0.2.2-py3-none-any.whl
- Size: 12.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b942118e4570d5732805a12706daaed4decb57ea988f8c0f8007fc6e72ef0bdd |
| MD5 | 38d12a430c24288fd3c8e7e4ddca9ecf |
| BLAKE2b-256 | ec1b36d0090f820178497af54a813ef50c9ee4f1ab525a7bf8fb227cd4ddf20a |
Provenance
The following attestation bundles were made for geebeam-0.2.2-py3-none-any.whl:
Publisher: release.yml on kysolvik/geebeam
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: geebeam-0.2.2-py3-none-any.whl
- Subject digest: b942118e4570d5732805a12706daaed4decb57ea988f8c0f8007fc6e72ef0bdd
- Sigstore transparency entry: 1189032821
- Permalink: kysolvik/geebeam@5325a795b11218da958ca396be84c1fa382b7169
- Branch / Tag: refs/tags/v0.2.2
- Owner: https://github.com/kysolvik
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@5325a795b11218da958ca396be84c1fa382b7169
- Trigger Event: release