
🚢 inboard 🐳

Docker images to power your Python APIs and help you ship faster.


Brendon Smith (br3ndonland)

Description

This repo provides Docker images and a PyPI package with useful utilities for Python web servers. It runs Uvicorn with Gunicorn, and can be used to build applications with Starlette and FastAPI. It is inspired by tiangolo/uvicorn-gunicorn-docker, with the following advantages:

  • One repo. The tiangolo/uvicorn-gunicorn images are spread across at least three separate repos (tiangolo/uvicorn-gunicorn-docker, tiangolo/uvicorn-gunicorn-fastapi-docker, and tiangolo/uvicorn-gunicorn-starlette-docker), with large amounts of code duplication, making maintenance difficult for an already-busy maintainer. This repo combines all three into one.
  • One Dockerfile. This repo leverages multi-stage builds to produce multiple Docker images from one Dockerfile.
  • One Python requirements file. This repo uses Poetry with Poetry Extras for dependency management with a single pyproject.toml.
  • One logging configuration. Logging a Uvicorn+Gunicorn+Starlette/FastAPI stack is unnecessarily complicated. Uvicorn and Gunicorn use different logging configurations, and it can be difficult to unify the log streams. In this repo, Uvicorn, Gunicorn, and FastAPI log streams are propagated to the root logger, and handled by the custom root logging config. Developers can also supply their own custom logging configurations.
  • One programming language. Pure Python with no shell scripts.
  • One platform. You're already on GitHub. Why not pull Docker images from GitHub Container Registry?

Instructions

Pull images

Docker images are stored in GitHub Container Registry (GHCR), which is a Docker registry like Docker Hub. Public Docker images can be pulled anonymously from ghcr.io.

# Pull most recent version of each image
docker pull ghcr.io/br3ndonland/inboard:base
docker pull ghcr.io/br3ndonland/inboard:fastapi
docker pull ghcr.io/br3ndonland/inboard:starlette

# Pull image from specific release
docker pull ghcr.io/br3ndonland/inboard:base-0.2.0
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.2.0
docker pull ghcr.io/br3ndonland/inboard:starlette-0.2.0

The FastAPI image is also tagged with latest. Docker uses the latest tag by default, so simply running docker pull ghcr.io/br3ndonland/inboard will pull the FastAPI image.

If authentication to GHCR is needed, follow the instructions below.

Use images in a Dockerfile

For a Poetry project with the following directory structure:

  • repo
    • package
      • main.py
      • prestart.py
    • Dockerfile
    • poetry.lock
    • pyproject.toml

The Dockerfile could look like this:

FROM ghcr.io/br3ndonland/inboard:fastapi

# Install Python requirements
COPY poetry.lock pyproject.toml /app/
WORKDIR /app/
RUN . $POETRY_HOME/env && poetry install --no-dev --no-interaction --no-root

# Install Python app
COPY package /app/package
ENV APP_MODULE=package.main:app
# RUN command already included in base image

Organizing the Dockerfile this way helps leverage the Docker build cache. Files and commands that change most frequently are added last to the Dockerfile. Next time the image is built, Docker will skip any layers that didn't change, speeding up builds.

For a standard pip install:

  • repo
    • package
      • main.py
      • prestart.py
    • Dockerfile
    • requirements.txt

The Dockerfile is similar:

FROM ghcr.io/br3ndonland/inboard:fastapi

# Install Python requirements
COPY requirements.txt /app/
WORKDIR /app/
RUN python -m pip install -r requirements.txt

# Install Python app
COPY package /app/package
ENV APP_MODULE=package.main:app
# RUN command already included in base image

The image could then be built with:

cd /path/to/repo
docker build . -t imagename:latest

The final argument is the Docker image name (imagename in this example). Replace with your image name.

Run containers

Run container:

docker run -d -p 80:80 imagename

Run container with mounted volume and Uvicorn reloading for development:

cd /path/to/repo
docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/package:/app/package imagename

Details on the docker run command:

  • -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" will instruct start.py to run Uvicorn with reloading and without Gunicorn. The Gunicorn configuration won't apply, but these environment variables will still work as described:
    • APP_MODULE
    • HOST
    • PORT
    • LOG_COLORS
    • LOG_FORMAT
    • LOG_LEVEL
  • -v $(pwd)/package:/app/package: the specified directory (/path/to/repo/package in this example) will be mounted as a volume inside the container at /app/package. When files in the mounted directory change, Docker will sync them into the running container, and Uvicorn will reload the server.

Hit an API endpoint:

docker pull ghcr.io/br3ndonland/inboard:fastapi
docker run -d -p 80:80 ghcr.io/br3ndonland/inboard:fastapi
http :80  # HTTPie: https://httpie.org/
HTTP/1.1 200 OK
content-length: 17
content-type: application/json
date: Wed, 02 Sep 2020 00:31:01 GMT
server: uvicorn

{
    "Hello": "World"
}

Configuration

To set environment variables when starting the Docker image:

docker run -d -p 80:80 -e APP_MODULE="package.custom.module:api" -e WORKERS_PER_CORE="2" myimage

To set environment variables within a Dockerfile:

FROM ghcr.io/br3ndonland/inboard:fastapi
ENV APP_MODULE="package.custom.module:api" WORKERS_PER_CORE="2"

General

  • APP_MODULE: Python module with app instance. Note that the base image sets the environment variable PYTHONPATH=/app, so the module name will be relative to /app unless you supply a custom PYTHONPATH.

    • Default: The appropriate app module from inboard.

    • Custom: For a module at /app/package/custom/module.py and app instance object api, APP_MODULE="package.custom.module:api"

      # /app/package/custom/module.py
      from fastapi import FastAPI
      
      api = FastAPI()
      
      @api.get("/")
      def read_root():
          return {"message": "Hello World!"}
      
  • PRE_START_PATH: Path to a pre-start script. Add a file prestart.py or prestart.sh to the application directory, and copy the directory into the Docker image as described (for a project with the Python application in repo/package, COPY package /app/package). The container will automatically detect and run the prestart script before starting the web server. A minimal example follows this list.

    • Default: "/app/inboard/prestart.py" (the default file provided with the Docker image)
    • Custom:
      • PRE_START_PATH="/app/package/custom_script.sh"
      • PRE_START_PATH= (set to an empty value) to disable
  • PYTHONPATH: Python's search path for module files.

    • Default: PYTHONPATH="/app"
    • Custom: PYTHONPATH="/app/custom"
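
As a minimal sketch of a pre-start script: the file path, logger name, and tasks below are hypothetical examples, not inboard's default prestart.py.

# /app/package/prestart.py (hypothetical path for this example)
# Runs once before the server starts: check services, run migrations, warm caches.
import logging

logger = logging.getLogger("prestart")


def main() -> None:
    logger.info("Running pre-start tasks before starting the web server")
    # Illustrative placeholders: verify database connectivity, apply migrations, etc.


if __name__ == "__main__":
    main()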

Gunicorn and Uvicorn

  • GUNICORN_CONF: Path to a Gunicorn configuration file. The Gunicorn command-line accepts file paths instead of module paths.
    • Default:
      • "/app/inboard/gunicorn_conf.py" (the default file provided with the Docker image)
    • Custom:
      • GUNICORN_CONF="/app/package/custom_gunicorn_conf.py"
  • Gunicorn worker processes: The number of Gunicorn worker processes to run is determined by the MAX_WORKERS, WEB_CONCURRENCY, and WORKERS_PER_CORE environment variables, with a default of 1 worker per CPU core and a default minimum of 2 workers. This is the "performance auto-tuning" feature described in tiangolo/uvicorn-gunicorn-docker. A sketch of the calculation follows this list.
    • MAX_WORKERS: Maximum number of workers to use, independent of number of CPU cores.
      • Default: not set (unlimited)
      • Custom: MAX_WORKERS="24"
    • WEB_CONCURRENCY: Set number of workers independently of number of CPU cores.
      • Default: not set
      • Custom: WEB_CONCURRENCY="4"
    • WORKERS_PER_CORE: Number of Gunicorn workers per CPU core. Overridden if WEB_CONCURRENCY is set.
      • Default: 1
      • Custom:
        • WORKERS_PER_CORE="2": Run 2 worker processes per core (8 worker processes on a server with 4 cores).
        • WORKERS_PER_CORE="0.5" (floating point values permitted): Run 1 worker process for every 2 cores (2 worker processes on a server with 4 cores).
    • Notes:
      • The default number of workers is the number of CPU cores multiplied by the environment variable WORKERS_PER_CORE (default "1"). On a machine with only 1 CPU core, the default minimum number of workers is 2, to avoid poor performance and blocking, as explained in the release notes for tiangolo/uvicorn-gunicorn-docker 0.3.0.
      • If both MAX_WORKERS and WEB_CONCURRENCY are set, the lesser of the two will be used as the total number of workers.
      • If either MAX_WORKERS or WEB_CONCURRENCY is set to 1, the total number of workers will be 1, overriding the default minimum of 2.
  • PROCESS_MANAGER: Manager for Uvicorn worker processes. As described in the Uvicorn docs, "Uvicorn includes a Gunicorn worker class allowing you to run ASGI applications, with all of Uvicorn's performance benefits, while also giving you Gunicorn's fully-featured process management."
    • Default: "gunicorn" (run Uvicorn with Gunicorn as the process manager)
    • Custom: "uvicorn" (run Uvicorn alone for local development)
  • WORKER_CLASS: Uvicorn worker class for Gunicorn to use.
    • Default: uvicorn.workers.UvicornWorker
    • Custom: For the alternate Uvicorn worker, WORKER_CLASS="uvicorn.workers.UvicornH11Worker" (TODO: the H11 worker is provided for PyPy and hasn't yet been tested)
  • TIMEOUT: Workers silent for more than this many seconds are killed and restarted.
    • Default: "120"
    • Custom: TIMEOUT="20"
  • GRACEFUL_TIMEOUT: Number of seconds to wait for workers to finish serving requests before restart.
    • Default: "120"
    • Custom: GRACEFUL_TIMEOUT="20"
  • KEEP_ALIVE: Number of seconds to wait for requests on a Keep-Alive connection.
    • Default: "5"
    • Custom: KEEP_ALIVE="20"
  • HOST: Host IP address (inside of the container) where Gunicorn will listen for requests.
    • Default: "0.0.0.0"
    • Custom: TODO
  • PORT: Port the container should listen on.
    • Default: "80"
    • Custom: PORT="8080"
  • BIND: The actual host and port passed to Gunicorn.
    • Default: HOST:PORT ("0.0.0.0:80")
    • Custom: BIND="0.0.0.0:8080" (if custom BIND is set, overrides HOST and PORT)
  • GUNICORN_CMD_ARGS: Additional command-line arguments for Gunicorn. Gunicorn looks for the GUNICORN_CMD_ARGS environment variable automatically, and gives these settings precedence over other environment variables and Gunicorn config files.
    • Custom: To use a custom TLS certificate, copy or mount the certificate and private key into the Docker image, and set --keyfile and --certfile to the location of the files.
      docker run -d -p 443:443 \
        -e GUNICORN_CMD_ARGS="--keyfile=/secrets/key.pem --certfile=/secrets/cert.pem" \
        -e PORT=443 myimage
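
To make the worker-count rules above concrete, here is a simplified Python sketch of the calculation. It follows the behavior described in this list; inboard's gunicorn_conf.py is the authoritative implementation and may differ in detail.

# Simplified sketch of the Gunicorn worker-count logic described above
import multiprocessing
import os


def calculate_workers() -> int:
    """Derive the Gunicorn worker count from environment variables."""
    cores = multiprocessing.cpu_count()
    workers_per_core = float(os.getenv("WORKERS_PER_CORE", "1"))
    # Default: workers per core times CPU cores, with a minimum of 2
    workers = max(int(workers_per_core * cores), 2)
    web_concurrency = os.getenv("WEB_CONCURRENCY")
    if web_concurrency:
        # WEB_CONCURRENCY sets the worker count directly (even to 1)
        workers = int(web_concurrency)
    max_workers = os.getenv("MAX_WORKERS")
    if max_workers:
        # MAX_WORKERS caps the total; a value of 1 overrides the minimum of 2
        workers = min(workers, int(max_workers))
    return workers


print(calculate_workers())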
      

Logging

  • LOGGING_CONF: Python module containing a logging configuration dictionary object named LOGGING_CONFIG. It can be either a module path (inboard.logging_conf) or a file path (/app/inboard/logging_conf.py). The LOGGING_CONFIG dictionary will be loaded and passed to logging.config.dictConfig(). See br3ndonland/inboard#3 for more details on logging setup, and the loading sketch after this list.

    • Default: "inboard.logging_conf" (the default module provided with inboard)

    • Custom: For a logging config module at /app/package/custom_logging.py, LOGGING_CONF="package.custom_logging" or LOGGING_CONF="/app/package/custom_logging.py".

    • If inboard is installed from PyPI with pip install inboard, the logging configuration can be easily extended. For example:

      # /app/package/custom_logging.py
      import os
      from typing import Any, Dict
      
      from inboard import logging_conf
      
      
      LOGGING_CONFIG: Dict[str, Any] = logging_conf.LOGGING_CONFIG
      # only show access logs when running Uvicorn with LOG_LEVEL=debug
      LOGGING_CONFIG["loggers"]["gunicorn.access"] = {"propagate": False}
      LOGGING_CONFIG["loggers"]["uvicorn.access"] = {
          "propagate": str(os.getenv("LOG_LEVEL")) == "debug"
      }
      # don't propagate boto3 logs
      LOGGING_CONFIG["loggers"]["boto3"] = {"propagate": False}
      LOGGING_CONFIG["loggers"]["botocore"] = {"propagate": False}
      LOGGING_CONFIG["loggers"]["s3transfer"] = {"propagate": False}
      
  • LOG_COLORS: Whether or not to color log messages. Currently only supported for LOG_FORMAT="uvicorn".

    • Default:
    • Custom:
      • LOG_COLORS="true"
      • LOG_COLORS="false"
  • LOG_FORMAT: Python logging format.

    • Default:
      • "simple": Simply the log level and message.
    • Custom:
      • "verbose": The most informative format, with the first 80 characters providing metadata, and the remainder supplying the log message.
      • "gunicorn": Gunicorn's default format.
      • "uvicorn": Uvicorn's default format, similar to simple, with support for LOG_COLORS. Note that Uvicorn's access formatter is not supported here, because it frequently throws errors related to ASGI scope.
    # simple
    INFO       Started server process [19012]
    # verbose
    2020-08-19 21:07:31 -0400      19012      uvicorn.error   main            INFO       Started server process [19012]
    # gunicorn
    [2020-08-19 21:07:31 -0400] [19012] [INFO] Started server process [19012]
    # uvicorn (can also be colored)
    INFO:     Started server process [19012]
    
  • LOG_LEVEL: Log level for Gunicorn or Uvicorn.

    • Default: "info"
    • Custom (organized from greatest to least amount of logging):
      • LOG_LEVEL="debug"
      • LOG_LEVEL="info"
      • LOG_LEVEL="warning"
      • LOG_LEVEL="error"
      • LOG_LEVEL="critical"
  • ACCESS_LOG: Access log file to which to write.

    • Default: "-" (stdout, print in Docker logs)
    • Custom:
      • ACCESS_LOG="./path/to/accesslogfile.txt"
      • ACCESS_LOG= (set to an empty value) to disable
  • ERROR_LOG: Error log file to which to write.

    • Default: "-" (stdout, print in Docker logs)
    • Custom:
      • ERROR_LOG="./path/to/errorlogfile.txt"
      • ERROR_LOG= (set to an empty value) to disable
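
As a rough illustration of how a LOGGING_CONF value could be resolved (module path or file path) and applied with dictConfig, here is a minimal sketch. This is not inboard's actual implementation; the function name and structure are assumptions for illustration only.

# Illustrative loader for a LOGGING_CONF module path or file path (not inboard's actual code)
import importlib
import importlib.util
import logging.config
import os
from typing import Any, Dict


def load_logging_config(logging_conf: str) -> Dict[str, Any]:
    """Return the LOGGING_CONFIG dictionary from a module path or a .py file path."""
    if logging_conf.endswith(".py"):
        # File path: load the module directly from disk
        spec = importlib.util.spec_from_file_location("logging_conf", logging_conf)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
    else:
        # Module path: import it from PYTHONPATH (set to /app in the Docker image)
        module = importlib.import_module(logging_conf)
    return module.LOGGING_CONFIG


logging.config.dictConfig(
    load_logging_config(os.getenv("LOGGING_CONF", "inboard.logging_conf"))
)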

For more information on Python logging configuration, see the Python logging how-to, Python logging cookbook, Python logging module docs, and Python logging.config module docs. Also consider Loguru, an alternative logging module with many improvements over the standard library logging module.

Development

Installation

  • Install Poetry (see the Poetry docs and CONTRIBUTING.md for instructions)
  • Install project with all dependencies: poetry install -E fastapi

Code style

  • Python code is formatted with Black. Configuration for Black is stored in pyproject.toml.

  • Python imports are organized automatically with isort.

    • The isort package organizes imports in three sections:
      1. Standard library
      2. Dependencies
      3. Project
    • Within each of those groups, import statements occur first, then from statements, in alphabetical order (see the example after this list).
    • You can run isort from the command line with poetry run isort ..
    • Configuration for isort is stored in pyproject.toml.
  • Other web code (JSON, Markdown, YAML) is formatted with Prettier.

  • Code style is enforced with pre-commit, which runs Git hooks.

    • Configuration is stored in .pre-commit-config.yaml.

    • Pre-commit can run locally before each commit (hence "pre-commit"), or on different Git events like pre-push.

    • Pre-commit is installed in the Poetry environment. To use:

      # after running `poetry install`
      path/to/inboard
      ❯ poetry shell
      
      # install hooks that run before each commit
      path/to/inboard
.venv ❯ pre-commit install
      
      # and/or install hooks that run before each push
      path/to/inboard
.venv ❯ pre-commit install --hook-type pre-push
      
    • Pre-commit is also useful as a CI tool. The hooks workflow runs the pre-commit hooks on GitHub Actions.
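
Returning to the isort item above, here is a small illustration of the import ordering it enforces. The module names are just examples of each section.

# Standard library: plain `import` statements first, then `from` imports
import os
from pathlib import Path

# Third-party dependencies
import pytest
from fastapi import FastAPI

# Project imports
import inboard
from inboard import logging_conf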

Testing with pytest

GitHub Actions workflows

GitHub Actions is a continuous integration/continuous deployment (CI/CD) service that runs on GitHub repos. It replaces other services like Travis CI. Actions are grouped into workflows and stored in .github/workflows. See Getting the Gist of GitHub Actions for more info.

Building development images

To build the Docker images for each stage:

git clone git@github.com:br3ndonland/inboard.git

cd inboard

docker build . --rm --target base -t localhost/br3ndonland/inboard:base && \
docker build . --rm --target fastapi -t localhost/br3ndonland/inboard:fastapi && \
docker build . --rm --target starlette -t localhost/br3ndonland/inboard:starlette

Running development containers

# Run Docker container with Uvicorn and reloading
cd inboard

docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/inboard:/app/inboard localhost/br3ndonland/inboard:base

docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/inboard:/app/inboard localhost/br3ndonland/inboard:fastapi

docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/inboard:/app/inboard localhost/br3ndonland/inboard:starlette

# Run Docker container with Gunicorn and Uvicorn
docker run -d -p 80:80 localhost/br3ndonland/inboard:base
docker run -d -p 80:80 localhost/br3ndonland/inboard:fastapi
docker run -d -p 80:80 localhost/br3ndonland/inboard:starlette

# Test HTTP Basic Auth when running the FastAPI or Starlette images:
http :80/status --auth-type=basic --auth=test_username:plunge-germane-tribal-pillar

Change the port numbers to run multiple containers simultaneously (-p 81:80).

Configuring Docker for GitHub Container Registry

If authentication is needed, follow the instructions in the GitHub docs on configuring Docker for use with GHCR. You'll need to create a personal access token (PAT). On GitHub, navigate to Settings -> Developer settings -> Personal access tokens (github.com/settings/tokens), then click "Generate new token." The token should have read:packages scope. You can then copy the token and use it with docker login:

# create PAT in GitHub and copy to clipboard

# transfer PAT from clipboard to file
pbpaste > pat-ghcr.txt

# log in with file
cat pat-ghcr.txt | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

If you don't want to store your PAT in plain text, encrypt it with PGP instead. GPG or Keybase can be used for this. Here's how to do it with Keybase:

# create PAT in GitHub and copy to clipboard

# transfer PAT from clipboard to encrypted file
pbpaste | keybase encrypt -o pat-ghcr.asc $YOUR_USERNAME

# decrypt and log in
keybase decrypt -i pat-ghcr.asc | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

# can also use keybase pgp encrypt and keybase pgp decrypt, but must export PGP key


