
Metis Flask SQLAlchemy log collector

About

This library logs the HTTP requests handled by Flask together with the SQL commands SQLAlchemy generates for them. The library can also collect the execution plan for deeper analysis.

The log records are stored in a local file. Future versions will allow saving the log records directly to log collectors such as DataDog, Logz.io, and Splunk.

The log can be analyzed using our Visual Studio Code extension.

Technical

This library uses OpenTelemetry to instrument both Flask and SQLAlchemy.

Tested on Python 3.8.9, Flask 2.1.1, SQLAlchemy 1.4.33, and PostgreSQL 12 or higher.
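
For illustration only, the sketch below shows roughly how OpenTelemetry instruments the two libraries when wired by hand; collect_logs (described below) sets this up for you, and the connection string is a placeholder, not something the package requires.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor

app = Flask(__name__)
# Placeholder connection string for the sketch only.
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:password@localhost/booking"
db = SQLAlchemy(app)

# Instrument the Flask app: every HTTP request becomes a span.
FlaskInstrumentor().instrument_app(app)

# Instrument the SQLAlchemy engine: every SQL command becomes a child span.
SQLAlchemyInstrumentor().instrument(engine=db.get_engine())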

Instrumentation

Installation:

pip install sqlalchemycollector

Instrumentation:

  • Configure the destination file name

  • Configure whether to log the execution plan

By default, the package logs only the SQL commands and the estimated execution plan (PlanCollectType.ESTIMATED).

The library:

  1. Logs the estimated execution plan by running the SQL command with
    EXPLAIN (VERBOSE, COSTS, SUMMARY, FORMAT JSON) (see the sketch after this list).
    
  2. Runs the SQL command itself.
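
As a rough sketch of that first step (not the library's internal code), assuming a PostgreSQL engine and a placeholder connection string, the estimated plan can be fetched like this:

import json

from sqlalchemy import create_engine, text

# Placeholder connection string, not part of the package.
engine = create_engine("postgresql://user:password@localhost/booking")

def fetch_estimated_plan(sql: str) -> dict:
    # EXPLAIN without ANALYZE only asks the planner for an estimate;
    # the statement itself is not executed.
    with engine.connect() as conn:
        raw = conn.execute(text(f"EXPLAIN (VERBOSE, COSTS, SUMMARY, FORMAT JSON) {sql}")).scalar()
    # Depending on the driver, the JSON column may arrive already parsed or as a string.
    plans = json.loads(raw) if isinstance(raw, str) else raw
    return plans[0]

print(fetch_estimated_plan("SELECT title FROM booking.book")["Plan"]["Node Type"])  # e.g. "Seq Scan"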

Logging the estimated plan has an impact on performance, but in a dev environment that doesn't run a large number of SQL commands the impact is usually very low, around 3%.

Warning! Do NOT run the code in production! An environment variable should be used to prevent the package from collecting the estimated execution plan in production.
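
One way to honor that warning, sketched below using a hypothetical METIS_COLLECT_PLAN variable (the package does not define this name), is to pick the collect type from the environment:

import os

from sqlalchemycollector.PlanCollectType import PlanCollectType

# METIS_COLLECT_PLAN is a hypothetical variable name, not part of the package.
# Collect the estimated plan only when it is explicitly enabled.
plan_collect_type = (
    PlanCollectType.ESTIMATED
    if os.getenv("METIS_COLLECT_PLAN", "").lower() in ("1", "true", "yes")
    else PlanCollectType.NONE
)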

The library can be configured with PlanCollectType.NONE to log only the SQL commands; the execution plans, estimated or actual, can then be calculated later using our platform.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemycollector import collect_logs
from sqlalchemycollector.PlanCollectType import PlanCollectType

# existing app initialization
app = Flask(__name__)
db = SQLAlchemy(app)

# optionally, you can pass a log file name, or we will use our default file name 'metis-log-collector.json'
optional_log_file_name = 'my-metis-logs.log'

# optionally, you can pass a plan collect type; otherwise we will use our default collect type, which is NONE
# the execution plan explains how the DB query optimizer runs the query, focusing on which indexes it uses.
# It helps to understand bottlenecks and to predict how the query will perform in the production environment.
optional_plan_collect_type = PlanCollectType.NONE
# class PlanCollectType(Enum):
#     NONE = 0
#     ESTIMATED = 1


# add the following line to start collecting logs for Metis
collect_logs(app, db.get_engine(), optional_log_file_name, plan_collection_option=optional_plan_collect_type)

Example of a log entry (the format might change in the future)

{
  "logs":
  [
    {
      "_uuid": "0b3f9b86-c620-11ec-9d14-b276246b1dc9",
      "query": "SELECT booking.book.title AS booking_book_title \nFROM booking.book",
      "dbEngine": "postgresql",
      "date": "2022-04-27T11:49:07.161743",
      "plan":
      {
        "Plan":
        {
          "Node Type": "Seq Scan",
          "Parallel Aware": false,
          "Async Capable": false,
          "Relation Name": "book",
          "Schema": "booking",
          "Alias": "book",
          "Startup Cost": 0.0,
          "Total Cost": 1.17,
          "Plan Rows": 17,
          "Plan Width": 14,
          "Output":
          [
            "title"
          ],
          "Shared Hit Blocks": 0,
          "Shared Read Blocks": 0,
          "Shared Dirtied Blocks": 0,
          "Shared Written Blocks": 0,
          "Local Hit Blocks": 0,
          "Local Read Blocks": 0,
          "Local Dirtied Blocks": 0,
          "Local Written Blocks": 0,
          "Temp Read Blocks": 0,
          "Temp Written Blocks": 0
        },
        "Planning":
        {
          "Shared Hit Blocks": 0,
          "Shared Read Blocks": 0,
          "Shared Dirtied Blocks": 0,
          "Shared Written Blocks": 0,
          "Local Hit Blocks": 0,
          "Local Read Blocks": 0,
          "Local Dirtied Blocks": 0,
          "Local Written Blocks": 0,
          "Temp Read Blocks": 0,
          "Temp Written Blocks": 0
        },
        "Planning Time": 0.044
      }
    }
  ],
  "framework": "Flask",
  "path": "/",
  "operationType": "GET",
  "requestDuration": 210.35,
  "requestStatus": 200
}
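
As a rough illustration only, the collected entries can be inspected with a few lines of Python. This assumes one JSON document like the one above per line in the log file, which is an assumption about the file layout rather than a documented guarantee:

import json

# Assumption: the log file holds one JSON document per line.
with open("my-metis-logs.log") as log_file:
    for line in log_file:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        # Print each request together with the SQL commands it ran.
        print(entry["operationType"], entry["path"], entry["requestDuration"])
        for log in entry.get("logs", []):
            print("   ", log["query"].replace("\n", " "))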
