Write sparkline graphs of CPU and memory usage to your logs.

Project description

sparkle_log

Write a sparkline graph of CPU, memory, etc. to the Python log.

❯ sparkle_log
Demo of Sparkle Monitoring system metrics during operations...
INFO     CPU   :   % |                              ▄ | min, mean, max (4, 4, 4)
INFO     Memory:   % |                              ▄ | min, mean, max (46, 46, 46)
Maybe CPU intensive work done here...
INFO     CPU   :   % |                           ▆▁█▄ | min, mean, max (1, 3.2, 5)
INFO     Memory:   % |                           ▄▄▄▄ | min, mean, max (46, 46, 46)
Maybe Memory intensive work done here...
INFO     Memory:   % |                         ▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
INFO     CPU   :   % |                        ▆▁█▄▃▃▁ | min, mean, max (1, 2.6, 5)
INFO     Memory:   % |                        ▄▄▄▄▄▄▄ | min, mean, max (46, 46, 46)

Tracking just one metric at a time looks better.

INFO     Memory:   % |                              ▄ | min, mean, max (46, 46, 46)
INFO     Memory:   % |                           ▄▄▄▄ | min, mean, max (46, 46, 46)
INFO     Memory:   % |                         ▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
INFO     Memory:   % |                        ▄▄▄▄▄▄▄ | min, mean, max (46, 46, 46)

Install

pip install sparkle_log

Usage

This writes log entries to your log (for example, an AWS Lambda log) at a frequency you specify, e.g. every 60 seconds. It is lightweight and cheap, and its output immediately correlates with your other print statements and log entries.

If the logging level is set above INFO (so that INFO messages are suppressed), no data is collected.
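That level check can be illustrated with the standard library's own mechanism; the logger name "sparkle_log" here is an assumption, and this is only a sketch of the guard, not Sparkle Log's actual code:

```python
import logging

# Root logger at WARNING: INFO records are suppressed.
logging.basicConfig(level=logging.WARNING)

# A collection guard presumably resembles this check:
collecting = logging.getLogger("sparkle_log").isEnabledFor(logging.INFO)
print(collecting)  # False - no metrics would be collected at this level
```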

As a decorator

import sparkle_log
import logging

logging.basicConfig(level=logging.INFO)


@sparkle_log.monitor_metrics_on_call(("cpu", "memory", "drive"), 60)
def handler_name(event, context) -> str:
    return "Hello world!"
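The general shape of such a decorator can be sketched as a background thread that samples a metric at the given interval while the wrapped function runs. This is a simplified stand-in, not Sparkle Log's actual implementation; `metric_fn` and the log message are hypothetical:

```python
import functools
import logging
import threading
import time


def monitor_on_call(metric_fn, interval=1.0):
    """Sample metric_fn every `interval` seconds while the decorated call runs."""

    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            samples = []
            stop = threading.Event()

            def sampler():
                # Take one sample, then wait (interruptibly) for the next tick.
                while not stop.is_set():
                    samples.append(metric_fn())
                    stop.wait(interval)

            thread = threading.Thread(target=sampler, daemon=True)
            thread.start()
            try:
                return func(*args, **kwargs)
            finally:
                stop.set()
                thread.join()
                logging.getLogger(__name__).info("collected samples: %s", samples)

        return wrapper

    return decorate
```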

As a context manager

import time
import sparkle_log
import logging

logging.basicConfig(level=logging.INFO)


def handler_name(event, context) -> str:
    with sparkle_log.MetricsLoggingContext(
        metrics=("cpu", "memory", "drive"), interval=5
    ):
        time.sleep(20)
        return "Hello world!"

With custom metrics:

import time
import logging
import random
from sparkle_log import MetricsLoggingContext

logging.basicConfig(level=logging.INFO)


def dodgy_metric() -> int:
    return random.randint(0, 100)


with MetricsLoggingContext(
    metrics=("dodgy",), interval=1, custom_metrics={"dodgy": dodgy_metric}
):
    print("Monitoring system metrics during operations...")
    time.sleep(20)

Supported Styles

All graph styles are currently autoscaled. The linear, faces, and vertical styles have only three levels; bar has eight.
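A minimal sketch of how autoscaled bar rendering can work, assuming a simple min/max rescale onto the eight block characters (this is not Sparkle Log's actual implementation):

```python
BARS = "▁▂▃▄▅▆▇█"


def bar_sparkline(values):
    """Map each value onto one of 8 block characters, autoscaled to the data range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for flat data
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)


print(bar_sparkline([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # ▁▂▃▃▄▅▆▆▇█
```

Autoscaling means a flat series and a wildly varying one both fill the character range, which is why the min/mean/max numbers are logged alongside the graph.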

from typing import cast
from sparkle_log import sparkline, GraphStyle

for style in ["bar", "jagged", "vertical", "linear", "ascii_art", "pie_chart", "faces"]:
    print(
        f"{style}: {sparkline([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], cast(GraphStyle, style))}"
    )

Results:

bar: ▁▂▃▃▄▅▆▆▇█
jagged: ___--^^¯¯¯
vertical: ___|||‖‖‖‖
linear: ___---¯¯¯¯
ascii_art:  .:-=+*#%@
pie_chart: ○○◔◔◑◑◕◕●●
faces: 😞😞😞😐😐😊😊😁😁😁

Prior art

You could also use container insights or htop. This tool should provide the most value when the server is headless and you only have logging or no easy way to correlate log entries to graphs.

Diagnostics as sparklines

  • memsparkline - CLI tool that shows memory usage as a sparkline.
  • densli (defunct?) - server stats tool with a terminal sparkline display.
  • sparcli - context manager for displaying arbitrary metrics as sparklines.

Sparkline functions

  • py-sparkblocks - function to create a sparkline graph.
  • sparklines - function to create a sparkline graph.
  • rich-sparklines - function that works with the rich UI library.
  • yasl - Yet Another Sparkline Library.
  • Piltdown - a variety of ASCII/Unicode graphs, including sparklines.
  • termgraph - various terminal graphs, including bar graphs but not sparklines.
  • lehar - another sparkline function.

CLI tools that display sparklines from arbitrary numbers

Download files

Download the file for your platform.

Source Distribution

sparkle_log-0.4.0.tar.gz (12.2 kB)

Uploaded Source

Built Distribution

sparkle_log-0.4.0-py3-none-any.whl (13.2 kB)

Uploaded Python 3

File details

Details for the file sparkle_log-0.4.0.tar.gz.

File metadata

  • Download URL: sparkle_log-0.4.0.tar.gz
  • Upload date:
  • Size: 12.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for sparkle_log-0.4.0.tar.gz:

  • SHA256: c987af1c8af45f9480e5651e47d604b990148fee0c6abe4d354f58d3306b26b3
  • MD5: c2b2cd986dae60ab9b5dfcb3b40b470b
  • BLAKE2b-256: 3d2c736dbf3a5461993b734e8c592345202d8e5dda5483eecd2b9893445a7834
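To verify a downloaded file against the published SHA256 digest, a standard hashlib check works; the chunked read below is just a conventional way to bound memory use:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Hex SHA256 digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare against the published digest, e.g.:
# sha256_of("sparkle_log-0.4.0.tar.gz") == "c987af1c8af45f9480e5651e47d604b990148fee0c6abe4d354f58d3306b26b3"
```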


File details

Details for the file sparkle_log-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: sparkle_log-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 13.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for sparkle_log-0.4.0-py3-none-any.whl:

  • SHA256: 13c18f6c34b8302cadfb2cb015f03c7ad8cbb7ab36d1a623cfafd489eda39cf2
  • MD5: ad92fa06cce06d8290e1f8ee8f5b8b5e
  • BLAKE2b-256: d3bd3318cbc77e739133857d72a3b1f839403c8dc03504de38242d77267dd525

