
Project description

Cog Worker

Scalable geospatial analysis on Cloud Optimized GeoTIFFs.

cog_worker is a simple library to help write scripts that conduct scalable analysis of gridded data. It's intended to be useful for moderate- to large-scale GIS, remote sensing, and machine learning applications.

Installation

pip install cog_worker

Examples

See docs/examples for Jupyter notebook examples

Quick start

  1. A simple cog_worker script
from rasterio.plot import show
from cog_worker import Manager

def my_analysis(worker):
    arr = worker.read('roads_cog.tif')
    return arr

manager = Manager(proj='wgs84', scale=0.083333)
arr, bbox = manager.preview(my_analysis)
show(arr)
  2. Define an analysis function that receives a cog_worker.Worker as its first parameter.
from cog_worker import Worker, Manager
import numpy as np

# Define an analysis function to read and process COG data sources
def MyAnalysis(worker: Worker) -> np.ndarray:

    # 1. Read a COG (reprojecting, resampling and clipping as necessary)
    array: np.ndarray = worker.read('roads_cog.tif')

    # 2. Work on the array
    # ...

    # 3. Return (or post to blob storage etc.)
    return array
  3. Run your analysis at different scales and in different projections
import rasterio as rio

# Run your analysis using a cog_worker.Manager which handles chunking
manager = Manager(
    proj = 'wgs84',       # any pyproj string
    scale = 0.083333,  # in projection units (degrees or meters)
    bounds = (-180, -90, 180, 90),
    buffer = 128          # buffer pixels when chunking analysis
)

# preview analysis
arr, bbox = manager.preview(MyAnalysis, max_size=1024)
rio.plot.show(arr)

# preview analysis chunks
for bbox in manager.chunks(chunksize=1500):
    print(bbox)

# execute analysis chunks sequentially
for arr, bbox in manager.chunk_execute(MyAnalysis, chunksize=1500):
    rio.plot.show(arr)

# generate job execution parameters
for params in manager.chunk_params(chunksize=1500):
    print(params)
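
chunk_params is useful when each chunk should run as its own batch job. A minimal sketch of that pattern (not part of cog_worker itself), assuming the yielded parameter objects are plain, JSON-serializable values:

import json

# Write one record per chunk so that independent jobs can each pick up
# a single record and run the analysis for that chunk.
with open('chunk_params.jsonl', 'w') as f:
    for params in manager.chunk_params(chunksize=1500):
        f.write(json.dumps(params) + '\n')
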
  4. Write scale-dependent functions
import scipy.ndimage

def focal_mean(
    worker: Worker,
    kernel_radius: float = 1000 # radius in projection units (meters)
) -> np.ndarray:

    array: np.ndarray = worker.read('sample-geotiff.tif')

    # worker.scale is the pixel size in projection units; convert the
    # radius to an integer kernel size in pixels for uniform_filter
    kernel_size = int(kernel_radius * 2 / worker.scale)
    array = scipy.ndimage.uniform_filter(array, kernel_size)

    return array
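
To run a scale-dependent function through a Manager, bind its extra arguments with functools.partial so the callable takes only the Worker. A sketch reusing the WGS84 manager from step 3, so the radius is given in degrees here rather than meters:

import functools

# Bind kernel_radius (0.5 degrees, since the manager above works in WGS84
# degrees) so the callable matches the single-Worker signature preview expects.
focal_mean_half_degree = functools.partial(focal_mean, kernel_radius=0.5)

arr, bbox = manager.preview(focal_mean_half_degree, max_size=1024)
rio.plot.show(arr)
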
  5. Chunk your analysis and run it on a Dask cluster
from cog_worker.distributed import DaskManager
from dask.distributed import LocalCluster, Client

# Set up a Manager that connects to a Dask cluster
cluster = LocalCluster()
client = Client(cluster)
distributed_manager = DaskManager(
    client,
    proj = 'wgs84',
    scale = 0.083333,
    bounds = (-180, -90, 180, 90),
    buffer = 128
)

# Execute in worker pool and save chunks to disk as they complete.
distributed_manager.chunk_save('output.tif', MyAnalysis, chunksize=2048)
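
The same pattern works against a remote cluster; only the Client construction changes. A sketch with a placeholder scheduler address:

# Connect to an existing Dask scheduler instead of starting a LocalCluster
# ('tcp://scheduler:8786' is a placeholder address).
client = Client('tcp://scheduler:8786')
distributed_manager = DaskManager(
    client, proj='wgs84', scale=0.083333, bounds=(-180, -90, 180, 90), buffer=128
)
distributed_manager.chunk_save('output.tif', MyAnalysis, chunksize=2048)
client.close()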

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cog_worker-0.1.2.tar.gz (13.4 kB)

File details

Details for the file cog_worker-0.1.2.tar.gz.

File metadata

  • Download URL: cog_worker-0.1.2.tar.gz
  • Upload date:
  • Size: 13.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/57.0.0 requests-toolbelt/0.9.1 tqdm/4.49.0 CPython/3.8.11

File hashes

Hashes for cog_worker-0.1.2.tar.gz:

  • SHA256: 8dc39fde274811a9e707e63a1b4dc3de748cae9e77e00f7372ad4ec06be3f2cc
  • MD5: e2868925f38946e5b902659b5ff3eafa
  • BLAKE2b-256: 9b0718e7cb07d8405da9b366f62cc763fe0fc7b13d9d978a68194eb6c67e12ff

See more details on using hashes here.
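
For example, the SHA256 digest above can be checked against a local copy of the sdist with Python's hashlib (this assumes the file was downloaded to the current directory):

import hashlib

expected = '8dc39fde274811a9e707e63a1b4dc3de748cae9e77e00f7372ad4ec06be3f2cc'
with open('cog_worker-0.1.2.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print('OK' if digest == expected else 'hash mismatch')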
