
Wrap the lamb...da


Logging

To start using fleece with a lambda project, you will need to make two small updates to your project.

  • Where you would normally import logging.get_logger or logging.getLogger, use fleece.log.get_logger or fleece.log.getLogger instead.

  • In the file with your primary lambda handler, call fleece.log.setup_root_logger() prior to setting up any additional logging.

This ensures that any pre-existing handlers on the root logger are cleaned up and that a root logger with appropriate stream handlers is put in place.
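
Putting both steps together, a minimal handler module might look like the sketch below (the handler body and return value are purely illustrative):

import fleece.log

# Clean up the root logger and install appropriate stream handlers;
# do this before any other logging configuration.
fleece.log.setup_root_logger()

# Use fleece's logger factory in place of logging.getLogger.
logger = fleece.log.get_logger(__name__)


def lambda_handler(event, context):
    logger.info('Handling event')
    return {'status': 'ok'}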

Retry logging calls

A retry wrapper for logging handlers that occasionally fail is also provided. This wrapper can be useful in preventing crashes when logging calls to external services such as CloudWatch fail.

For example, consider the following handler for CloudWatch using watchtower:

import uuid
import watchtower

logger.addHandler(
    watchtower.CloudWatchLogHandler(log_group='WORKER-POLL',
                                    stream_name=str(uuid.uuid4()),
                                    use_queues=False))

If the CloudWatch service is down, or rate limits the client, that will cause logging calls to raise an exception, which may interrupt the script. To avoid that, the watchtower handler can be wrapped in a RetryHandler as follows:

logger.addHandler(
    fleece.log.RetryHandler(
        watchtower.CloudWatchLogHandler(log_group='WORKER-POLL',
                                        stream_name=str(uuid.uuid4()),
                                        use_queues=False)))

In the above example, logging calls that fail will be retried up to 5 times, using an exponential backoff algorithm to increasingly space out retries. If all retries fail, then the logging call will, by default, give up silently and return, allowing the program to continue. See the documentation for the RetryHandler class for information on how to customize the retry strategy.
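
As a rough illustration of the idea only (a simplified sketch, not fleece's actual implementation, and the base_delay value is an assumption), the wrapper behaves like the following loop around the wrapped handler's emit():

import time


def emit_with_retries(handler, record, max_retries=5, base_delay=0.5):
    # Sketch of retry-with-exponential-backoff: each failed emit() is retried
    # after an exponentially growing delay; if every attempt fails, the record
    # is dropped silently so the program can continue.
    for attempt in range(max_retries + 1):
        try:
            handler.emit(record)
            return
        except Exception:
            if attempt == max_retries:
                return  # give up silently
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff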

boto3 wrappers

This project includes fleece.boto3.client() and fleece.boto3.resource() wrappers that support a friendly format for setting less conservative timeouts than the default 60 seconds used by boto. The following additional arguments are accepted to set these timeouts:

  • connect_timeout: timeout for socket connections in seconds.

  • read_timeout: timeout for socket read operations in seconds.

  • timeout: convenience timeout that sets both of the above to the same value.
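
For example, these timeouts can be set on an individual client at creation time (the values below are illustrative):

from fleece import boto3

# 5 second connect timeout and 30 second read timeout for this client only
lambda_client = boto3.client('lambda',
                             connect_timeout=5,
                             read_timeout=30)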

Also for convenience, timeouts can be set globally by calling fleece.boto3.set_default_timeout() at startup. Globally set timeouts are then applied to all clients, unless explicitly overridden. Default timeouts set via the set_default_timeout() function apply to all threads, and for that reason it is a good idea to only call this function during startup, before any additional threads are spawned.

As an example, the following code written against the original boto3 package uses the default 60 second socket timeouts:

import boto3
# ...
lambda_client = boto3.client('lambda')

If you want to use 15-second timeouts instead, you can simply switch to the fleece wrappers as follows:

from fleece import boto3
boto3.set_default_timeout(15)
# ...
lambda_client = boto3.client('lambda')

requests wrappers

This project also includes a wrapper for the requests package. The fleece.requests module provides convenient ways to set timeouts and retries.

The high-level request functions such as requests.get() and requests.post() accept the following arguments:

  • timeout: a network timeout, or a tuple containing the connection and read timeouts, in seconds. Note that this functionality already exists in the requests package.

  • retries: a retry mechanism to use with this request. This argument can be of several types: if it is None, the default retry mechanism installed by the set_default_retries function is used; if it is an integer, it is the number of retries to use; if it is a dictionary, it must contain the arguments for a urllib3 Retry instance. Alternatively, this argument can also be a Retry instance itself. See the example below.
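
For instance, the same per-request retry policy can be expressed in any of the accepted forms (the URLs are placeholders):

from urllib3.util.retry import Retry

from fleece import requests

# an integer: number of retries
r = requests.get('https://...', retries=3)

# a dictionary: arguments for a urllib3 Retry instance
r = requests.get('https://...', retries={'total': 5, 'backoff_factor': 1})

# an explicit urllib3 Retry instance
r = requests.get('https://...', retries=Retry(total=5, backoff_factor=1))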

The Session class is also wrapped. A session instance from this module also accepts the two arguments above, and passes them on to any requests it issues.
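
As a sketch of the session usage, assuming the wrapped Session constructor accepts these keyword arguments as described above (this is an assumption, not verified against the API):

from fleece import requests

# per-session timeout and retries, applied to every request the session issues
with requests.Session(timeout=10, retries=3) as session:
    session.get('https://...')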

Finally, it is also possible to install global timeout and retry defaults that are used for any requests that don’t specify them explicitly. This enables existing code to take advantage of retries and timeouts after changing the imports to point to this wrapped version of requests. Below is an example that sets global timeouts and retries:

from fleece import requests

# 15 second timeout
requests.set_default_timeout(15)

# 5 retries with exponential backoff, also retry 429 and 503 responses
requests.set_default_retries(total=5, backoff_factor=1,
                             status_forcelist=[429, 503])

# the defaults above apply to any regular requests, no need to make
# changes to existing code.
r = requests.get('https://...')

# a request can override the defaults if desired
r = requests.put('https://...', timeout=25, retries=2)

# sessions are also supported
with requests.Session() as session:
    session.get('https://...')

X-Ray integration

This project also fills the gap left by the lack of Python support in the AWS X-Ray Lambda integration.

Prerequisites

  1. Make sure you add the following permissions to the Lambda execution role of your function: xray:PutTraceSegments and xray:PutTelemetryRecords.

  2. Enable active tracing under Advanced settings on the Configuration tab of your Lambda function in the AWS Console (or use the update_function_configuration API call: http://boto3.readthedocs.io/en/latest/reference/services/lambda.html#Lambda.Client.update_function_configuration).
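
As a sketch of the second step done programmatically via boto3 (the function name below is hypothetical):

from fleece import boto3

lambda_client = boto3.client('lambda')

# Equivalent to enabling active tracing under Advanced settings in the Console.
lambda_client.update_function_configuration(
    FunctionName='my-function',          # hypothetical function name
    TracingConfig={'Mode': 'Active'},
)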

Features

You can mark any function or method for tracing by using the @trace_xray_subsegment decorator. You can apply the decorator to any number of functions and methods; the resulting trace will be properly nested. You have to decorate every function or method you want traced (e.g. if you only decorate your handler function, none of the functions it calls will be traced).

This module also provides wrappers for boto and requests so that any AWS API call or HTTP request will be automatically traced by X-Ray, but you have to explicitly enable this behavior by calling monkey_patch_botocore_for_xray and/or monkey_patch_requests_for_xray. The best place to do this is the main handler module where the Lambda entry point is defined.

A quick example (handler.py)

from fleece import boto3
from fleece.xray import (monkey_patch_botocore_for_xray,
                         trace_xray_subsegment)

monkey_patch_botocore_for_xray()


@trace_xray_subsegment()
def lambda_handler(event, context):
    return get_user()


def get_user():
    # This function doesn't have to be decorated, because the API call to IAM
    # will be traced thanks to the monkey-patching.
    iam = boto3.client('iam')
    return iam.get_user()

Note: the monkey-patched tracing will also work with the wrappers described above.
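
For completeness, a similar sketch for HTTP calls through the requests wrapper (the URL is a placeholder):

from fleece import requests
from fleece.xray import monkey_patch_requests_for_xray

monkey_patch_requests_for_xray()

# Outgoing HTTP requests made after the patching are traced as X-Ray subsegments.
r = requests.get('https://...')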

Connexion integration

A summary of what exactly Connexion is (from their project page):

Connexion is a framework on top of Flask that automagically handles HTTP requests based on OpenAPI 2.0 Specification (formerly known as Swagger Spec) of your API described in YAML format. Connexion allows you to write a Swagger specification, then maps the endpoints to your Python functions; this makes it unique, as many tools generate the specification based on your Python code. You can describe your REST API in as much detail as you want; then Connexion guarantees that it will work as you specified.

It’s the perfect glue between your API Gateway API specification and your Lambda function. Fleece makes it very easy to use Connexion:

from fleece.connexion import call_api
from fleece.log import get_logger

logger = get_logger(__name__)


def lambda_handler(event, context):
    return call_api(event, 'myapi', 'swagger.yml', logger)

You just have to make sure that the swagger.yml file is included in the Lambda bundle. For the API Gateway integration, we assume the request template defined by yoke for now.

Using this integration has the added benefit of being able to run your API locally, by adding something like this to your Lambda handler:

from fleece.connexion import get_connexion_app

[...]

if __name__ == '__main__':
    app = get_connexion_app('myapi', 'swagger.yml')
    app.run(8080)
