Python utilities for AWS Lambda functions including but not limited to tracing, logging and custom metrics

Lambda Powertools

A suite of utilities for AWS Lambda functions that makes it easier to trace with AWS X-Ray, emit structured logs and create custom metrics asynchronously. Currently available for Python only and compatible with Python >=3.6.

Status: Beta

Features

Tracing

Tracing currently uses AWS X-Ray.

  • Decorators that capture cold start as annotation, and response and exceptions as metadata
  • Disable tracing when running functions locally, with no code changes
  • Explicitly disable tracing via env var POWERTOOLS_TRACE_DISABLED="true"

Logging

  • Decorators that capture key fields from the Lambda context and cold start, and structure logging output as JSON
  • Optionally log Lambda request when instructed (disabled by default)
    • Enable via POWERTOOLS_LOGGER_LOG_EVENT="true" or explicitly via decorator param
  • Logs a canonical custom metric line that can be consumed asynchronously
  • Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
    • Enable via POWERTOOLS_LOGGER_SAMPLE_RATE=0.1, ranges from 0 to 1, where 0.1 is 10% and 1 is 100%

Metrics

  • Aggregate up to 100 metrics using a single CloudWatch Embedded Metric Format object (large JSON blob)
  • Context manager to create a one-off metric with a different dimension from the metrics already aggregated
  • Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
  • No stack, custom resource, data collection needed — Metrics are created async by CloudWatch EMF

Environment variables used across suite of utilities

Environment variable          | Description                                                                              | Default             | Utility
POWERTOOLS_SERVICE_NAME       | Sets service name used for tracing namespace, metrics dimensions and structured logging | "service_undefined" | all
POWERTOOLS_TRACE_DISABLED     | Disables tracing                                                                         | "false"             | tracing
POWERTOOLS_LOGGER_LOG_EVENT   | Logs incoming event                                                                      | "false"             | logging
POWERTOOLS_LOGGER_SAMPLE_RATE | Debug log sampling                                                                       | 0                   | logging
POWERTOOLS_METRICS_NAMESPACE  | Metrics namespace                                                                        | None                | metrics
LOG_LEVEL                     | Sets logging level                                                                       | "INFO"              | logging

Usage

Installation

With pip installed, run: pip install aws-lambda-powertools

Tracing

Example SAM template using supported environment variables

Globals:
  Function:
    Tracing: Active # can also be enabled per function
    Environment:
        Variables:
            POWERTOOLS_SERVICE_NAME: "payment" 
            POWERTOOLS_TRACE_DISABLED: "false" 

Pseudo Python Lambda code

import requests

from aws_lambda_powertools.tracing import Tracer

tracer = Tracer()
# tracer = Tracer(service="payment") # can also be explicitly defined

@tracer.capture_method
def collect_payment(charge_id):
    # business logic; PAYMENT_ENDPOINT would come from configuration
    ret = requests.post(PAYMENT_ENDPOINT)
    # custom annotation
    tracer.put_annotation("PAYMENT_STATUS", "SUCCESS")
    return ret

@tracer.capture_lambda_handler
def handler(event, context):
    charge_id = event.get('charge_id')
    payment = collect_payment(charge_id)
    ...
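
When testing the handler locally, tracing can be disabled through the POWERTOOLS_TRACE_DISABLED environment variable without touching the function code. A minimal sketch, assuming the handler above lives in a module named app (the module name and the charge id are illustrative):

# local_test.py - run the handler outside Lambda with tracing disabled
import os

# must be set before the Tracer is created, i.e. before importing the handler module
os.environ["POWERTOOLS_TRACE_DISABLED"] = "true"

from app import handler  # hypothetical module containing the handler above

if __name__ == "__main__":
    handler({"charge_id": "ch_AZFlk2345C0"}, None)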

Logging

Example SAM template using supported environment variables

Globals:
  Function:
    Environment:
        Variables:
            POWERTOOLS_SERVICE_NAME: "payment" 
            POWERTOOLS_LOGGER_SAMPLE_RATE: 0.1 # enable debug logging for 10% of requests, 0% by default
            LOG_LEVEL: "INFO"

Pseudo Python Lambda code

from aws_lambda_powertools.logging import logger_setup, logger_inject_lambda_context

logger = logger_setup()  
# logger_setup(service="payment") # also accept explicit service name
# logger_setup(level="INFO") # also accept explicit log level

@logger_inject_lambda_context
def handler(event, context):
  logger.info("Collecting payment")
  ...
  # You can log entire objects too
  logger.info({
    "operation": "collect_payment",
    "charge_id": event['charge_id']
  })
  ...

Excerpt of output in CloudWatch Logs

{  
   "timestamp":"2019-08-22 18:17:33,774",
   "level":"INFO",
   "location":"collect.handler:1",
   "service":"payment",
   "lambda_function_name":"test",
   "lambda_function_memory_size":"128",
   "lambda_function_arn":"arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id":"52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": "true",
   "message": "Collecting payment"
}

{  
   "timestamp":"2019-08-22 18:17:33,774",
   "level":"INFO",
   "location":"collect.handler:15",
   "service":"payment",
   "lambda_function_name":"test",
   "lambda_function_memory_size":"128",
   "lambda_function_arn":"arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id":"52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": "true",
   "message":{  
      "operation":"collect_payment",
      "charge_id": "ch_AZFlk2345C0"
   }
}
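
To also log the incoming Lambda event (disabled by default), either set POWERTOOLS_LOGGER_LOG_EVENT="true" or enable it on the decorator. A minimal sketch, assuming the decorator parameter is named log_event (the feature list above only states that a decorator param exists):

from aws_lambda_powertools.logging import logger_setup, logger_inject_lambda_context

logger = logger_setup()

# log_event=True logs the full incoming event alongside the context fields
@logger_inject_lambda_context(log_event=True)
def handler(event, context):
    logger.info("Collecting payment")
    ...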

Custom Metrics async

NOTE: log_metric will be removed once this feature reaches GA.

This feature makes use of the CloudWatch Embedded Metric Format (EMF); metrics are created asynchronously by the CloudWatch service, so you don't need any custom resource or additional CloudFormation stack as before.

CloudWatch requires that every EMF object have at least one metric, one namespace and one dimension. The namespace can be added automatically via the POWERTOOLS_METRICS_NAMESPACE environment variable.
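
For example, a minimal sketch that relies on the environment variable instead of calling add_namespace (assuming POWERTOOLS_METRICS_NAMESPACE is set in the function configuration, e.g. the SAM template):

from aws_lambda_powertools.metrics import Metrics, MetricUnit

# namespace is resolved from POWERTOOLS_METRICS_NAMESPACE, so no add_namespace() call is needed
metrics = Metrics()
metrics.add_dimension(name="service", value="booking")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    metrics.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)
    return True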

Creating multiple metrics

from aws_lambda_powertools.metrics import Metrics, MetricUnit
from aws_lambda_powertools.tracing import Tracer

tracer = Tracer()
metrics = Metrics()
metrics.add_namespace(name="ServerlessAirline")
metrics.add_metric(name="ColdStart", unit="Count", value=1)
metrics.add_dimension(name="service", value="booking")

@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(evt, ctx):
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    some_code()
    return True

def some_code():
    metrics.add_metric(name="some_other_metric", unit=MetricUnit.Seconds, value=1)
    ...

By default, log_metrics doesn't call the decorated function. If you want to use the Metrics middleware only (without Tracer), use the call_function parameter to explicitly call your function handler and capture all metrics before printing them to the logs.

...
@metrics.log_metrics(call_function=True)
def lambda_handler(evt, ctx):
    some_code()
    return True

CloudWatch EMF uses the same dimensions across all metrics. If you have metrics that should have different dimensions, use the single_metric context manager to create a single metric with any dimension you want.

from aws_lambda_powertools.metrics import MetricUnit, single_metric

with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1) as metric:
    metric.add_dimension(name="function_context", value="$LATEST")
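
For reference, the line that ends up in CloudWatch Logs is an EMF JSON blob roughly of this shape (simplified, and the timestamp and values are illustrative):

{
   "_aws": {
      "Timestamp": 1566497853774,
      "CloudWatchMetrics": [
         {
            "Namespace": "ServerlessAirline",
            "Dimensions": [["function_context"]],
            "Metrics": [{ "Name": "ColdStart", "Unit": "Count" }]
         }
      ]
   },
   "function_context": "$LATEST",
   "ColdStart": 1
}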

Beta

This library may change its API/methods or environment variables as it receives feedback from customers. We are currently looking for ideas in the following areas before making it stable:

  • Should Tracer patch all possible imported libraries by default or only AWS SDKs?
    • Patching all libraries may have a small performance penalty (~50ms) at cold start
    • Alternatively, we could patch only the AWS SDK when available, and provide a param to patch additional modules, e.g. Tracer(modules=("boto3", "requests"))
  • Create a Tracer provider to support additional tracing
    • Either duck typing or ABC to allow additional tracing providers
