PipeLogger Library
What does the PipeLogger library do?
PipeLogger is a library that establishes a standard format for execution logs in pipelines, mainly those related to data ingestion. The standard format is:
{
    "PipelineLogs": {
        "PipelineID": "Pipeline-Example",
        "Timestamp": "MM-DD-YY-THH:MM:SS",
        "Status": "Success",
        "Message": "Data uploaded successfully",
        "ExecutionTime": 20.5075738430023
    },
    "BigQueryLogs": [
        {
            "BigQueryID": "project.pipeline-example.table_1",
            "Size": 1555
        },
        {
            "BigQueryID": "project.pipeline-example.table_2",
            "Size": 3596
        }
    ],
    "Details": [
        {
            "additional_info": [
                "Data downloaded successfully",
                "Data processed successfully",
                "Data uploaded successfully"
            ]
        }
    ]
}
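The structure above can be reproduced in Python as a plain dictionary and serialized with the standard `json` module. This is only an illustrative sketch of the log payload itself, using the placeholder values from the example; it does not use the pipelogger package's own API, which is not documented here.

```python
import json
from datetime import datetime

# Build the standard PipeLogger payload as a plain dictionary.
# Field values are illustrative placeholders taken from the example above.
log = {
    "PipelineLogs": {
        "PipelineID": "Pipeline-Example",
        # Timestamp rendered in the library's stated MM-DD-YY-THH:MM:SS shape.
        "Timestamp": datetime.now().strftime("%m-%d-%y-T%H:%M:%S"),
        "Status": "Success",
        "Message": "Data uploaded successfully",
        "ExecutionTime": 20.5075738430023,
    },
    "BigQueryLogs": [
        {"BigQueryID": "project.pipeline-example.table_1", "Size": 1555},
        {"BigQueryID": "project.pipeline-example.table_2", "Size": 3596},
    ],
    "Details": [
        {
            "additional_info": [
                "Data downloaded successfully",
                "Data processed successfully",
                "Data uploaded successfully",
            ]
        }
    ],
}

# Serialize to JSON, ready to be written out as a log file or bucket object.
payload = json.dumps(log, indent=2)
```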
What should you consider when implementing PipeLogger in your pipeline?
- The pipeline must be deployed on GCP, either as a Cloud Function or a Cloud Run service.
- The pipeline must feed BigQuery tables.
- The pipeline needs a Cloud Storage bucket to store the logs.
How to implement it in any pipeline?
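Since the package's public API is not documented on this page, the following is only a minimal sketch of the pattern PipeLogger standardizes: run the pipeline steps, record each step's message, time the whole run, and assemble the `PipelineLogs`/`Details` payload shown above. The `run_with_pipelogger` helper and its `steps` argument are hypothetical names introduced for illustration, not part of the pipelogger package.

```python
import time
from datetime import datetime


def run_with_pipelogger(pipeline_id, steps):
    """Run pipeline steps and collect a PipeLogger-style log payload.

    `steps` is a list of (message, callable) pairs. This helper is a
    hypothetical illustration, not the pipelogger package's own API.
    """
    start = time.monotonic()
    info = []
    status, message = "Success", ""
    for msg, fn in steps:
        try:
            fn()
            info.append(msg)
            message = msg
        except Exception as exc:
            # Record the failure and stop executing further steps.
            status, message = "Failure", str(exc)
            break
    return {
        "PipelineLogs": {
            "PipelineID": pipeline_id,
            "Timestamp": datetime.now().strftime("%m-%d-%y-T%H:%M:%S"),
            "Status": status,
            "Message": message,
            "ExecutionTime": time.monotonic() - start,
        },
        "Details": [{"additional_info": info}],
    }


# Example: three no-op steps standing in for download, process, and upload.
log = run_with_pipelogger("Pipeline-Example", [
    ("Data downloaded successfully", lambda: None),
    ("Data processed successfully", lambda: None),
    ("Data uploaded successfully", lambda: None),
])
```

In a real deployment the returned dictionary would be serialized to JSON and written to the logs bucket mentioned in the considerations above.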
Download files
Source Distribution: pipelogger-1.0.0.tar.gz (4.0 kB)
Built Distribution: pipelogger-1.0.0-py3-none-any.whl
Hashes for pipelogger-1.0.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 8a56d94b9047d79a6a50cd3afd0aa0d2156ba44e3f043fd44259310dfb3651e9
MD5 | cfce1e7c2f72f8f280ae03083f870615
BLAKE2b-256 | c0dd3d9d58971aefc778c9c125cc50329b701ffe248c191613a46c9c8eb71740