Universal entry point for Docker images containing Flask apps for the AWS Lambda serverless platform.
Lambdarado runs the appropriate code depending on where it is started.

On a local computer, it runs a debug server that serves requests to 127.0.0.1 with your app. You can start it directly (python3 main.py) or from a container (docker run ...) to test the app.

In the AWS Cloud, requests are handled by the same app, but in a different way: Lambdarado creates a handler that is compatible with the API Gateway + Lambda Function combination.
So Lambdarado puts together:
- A web application written in Python that complies with the WSGI standard (currently, only Flask is supported)
- A Docker image that contains the app code and dependencies
- AWS Lambda, which runs the code contained in the Docker image
- AWS API Gateway, which passes web requests to and from your Lambda function
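Conceptually, the entry point chooses between the two modes by detecting whether it is running inside the Lambda execution environment. The following is a simplified sketch, not Lambdarado's actual code; it assumes detection via the AWS_LAMBDA_RUNTIME_API environment variable that the Lambda runtime provides:

import os

def start(get_app):
    # Simplified sketch of the dual entry point (illustration only,
    # not Lambdarado's actual implementation).
    if os.environ.get("AWS_LAMBDA_RUNTIME_API"):
        # Inside AWS Lambda: an API Gateway-compatible handler is
        # created here instead (see "Under the hood" below).
        pass
    else:
        # Local run (python3 main.py or docker run ...): start the
        # Werkzeug debug server with the user's app.
        app = get_app()
        app.run(port=5000)  # Flask defaults to 127.0.0.1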
Install
$ pip3 install lambdarado
Configure
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
# ... here should be the code that creates the image ...
ENTRYPOINT ["python", "main.py"]
main.py
from lambdarado import start

def get_app():
    # This function must return a WSGI app, e.g. a Flask app
    from my_app_module import app
    return app

start(get_app)
When the Lambda function instance starts, the get_app function runs once, but the main.py module is imported twice. Make sure that the app is only created when get_app is called, not when main.py is imported.

In other words, simply running python3 main.py without calling start should NOT do anything heavy and probably should not even declare or import the app.
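For example, the app can live in the separate module that get_app imports lazily. A minimal sketch of such a module (my_app_module is the placeholder name used in the example above):

my_app_module.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial route so there is something to serve
    return "Hello from Lambdarado!"

This way, importing main.py stays cheap; Flask and the rest of the app are only imported when start actually calls get_app.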
Run
Local debug server
Run this shell command on the development machine:
$ python3 main.py
This will start a Werkzeug server listening on http://127.0.0.1:5000.
Local debug server in Docker
Command-line:
$ docker run -p 6000:5000 docker-image-name
This will start a Werkzeug server listening on http://0.0.0.0:5000 inside the container. The server is accessible as http://127.0.0.1:6000 from the development (host) machine.
Production server on AWS Lambda
After you deploy the same image as a Lambda function, it will serve requests coming from the AWS API Gateway with your app.

- You will need to connect the API Gateway to your Lambda function. For the function to receive all HTTP requests, you may need to route the /{proxy+} resource to the function and make the lambda:InvokeFunction policy less restrictive, as shown in the sketch below
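For example, API Gateway can be granted permission to invoke the function with a resource-based policy statement. The following is a hedged sketch using boto3; the function name, statement id and source ARN are placeholders to replace with your own values:

import boto3

lambda_client = boto3.client("lambda")

# Allow API Gateway to invoke the function (all values below are placeholders)
lambda_client.add_permission(
    FunctionName="my-lambdarado-function",
    StatementId="allow-apigateway-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:REGION:ACCOUNT_ID:API_ID/*/*",
)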
Under the hood:
- awslambdaric receives requests from and sends responses to the Lambda service
- apig_wsgi translates the requests that awslambdaric receives from the API Gateway, so your application does not have to handle the gateway's calls directly. To the application, requests look like normal HTTP
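Roughly, the handler that ends up being exposed to awslambdaric looks like this sketch (an assumption based on the public apig-wsgi API, not Lambdarado's exact internals):

from apig_wsgi import make_lambda_handler

from my_app_module import app  # the same WSGI app that get_app() returns

# awslambdaric calls this handler with the API Gateway event and the Lambda
# context; apig_wsgi turns the event into a WSGI request and converts the
# WSGI response back into an API Gateway response.
lambda_handler = make_lambda_handler(app)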