
A library to capture job outputs to persistent storage

Project description

CaptureJob

This library provides the ability to copy a job's stdout and stderr files to cloud storage.

Usage

Programmatically, it is used like this:

from capturejob import CaptureJob

CaptureJob()

but it is mainly intended to be used in a Dockerfile command script, following execution of a batch command:

echo "Running batch"

cd /work && poetry run python src/etl_noop/batchrun.py

TASK_ID="..." \
JOB_DATE="..." \
CAPTURE_CONNECTION_STRING="..." \
CAPTURE_CONTAINER_NAME="..." \
poetry run python -m capturejob

echo "Done"

Configuration

The following environment variables need to be set:

  • TASK_ID: The name of the job, used in the name of the storage folder that is created
  • JOB_DATE: The date of the job, also used in the name of the storage folder that is created
  • CAPTURE_CONNECTION_STRING: The Azure Storage connection string
  • CAPTURE_CONTAINER_NAME: The Azure Storage container name
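Because capturejob reads all of its configuration from the environment, a wrapper script can check for missing variables and fail fast before the job runs. A minimal sketch of such a pre-flight check, using only the standard library (the `missing_env` helper is illustrative, not part of the library):

```python
import os

# The four variables capturejob requires, per the list above.
REQUIRED_VARS = (
    "TASK_ID",
    "JOB_DATE",
    "CAPTURE_CONNECTION_STRING",
    "CAPTURE_CONTAINER_NAME",
)

def missing_env(environ=os.environ):
    """Return the required capturejob variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

if __name__ == "__main__":
    missing = missing_env()
    if missing:
        # Exit with a non-zero status so the container run fails visibly.
        raise SystemExit(f"Missing required variables: {', '.join(missing)}")
```

Running this check just before `poetry run python -m capturejob` turns a silent misconfiguration into an immediate, readable error.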

