
Context-managed metrics tracking and output, including but not limited to process/subprocess latencies.


The goblinfish.metrics.trackers Package

Provides context-manager classes to name, track, and report elapsed-time and other user-defined metrics for top-level process entry-points (such as AWS Lambda Function handlers, the package's original use case) and for sub-processes within them.

Quick Start

Install in your project:

# Install with pip
pip install goblinfish-metrics-trackers
# Install with pipenv
pipenv install goblinfish-metrics-trackers

Import in your code:

from goblinfish.metrics.trackers import ProcessTracker

Create the timing-tracker instance:

tracker = ProcessTracker()

Decorate your top-level/entry-point function:

@tracker
def some_function():
    ...

Add any sub-process timers:

@tracker
def some_function():
    ...

    with tracker.timer('some_process_name'):
        # Do stuff here
        ...

Decorate any child process functions with the instance's .track method:

@tracker
def some_function():
    ...

    with tracker.timer('some_process_name'):
        some_other_function()
        # Do stuff here
        ...

@tracker.track
def some_other_function():
    ...

Set any explicit metrics needed:

@tracker
def some_function():
    ...

    with tracker.timer('some_process_name'):
        try:
            some_other_function()
            # Do stuff here
            ...
        except Exception as error:
            # Count of errors to be aggregated
            tracker.set_metric('some_function_errors', 1)
            # Name of error; simple string values are OK too!
            tracker.set_metric(
                'some_function_error_name', error.__class__.__name__
            )
            # Do stuff here
            ...

@tracker.track
def some_other_function():
    ...

When this code is executed, after the context created by the @tracker decorator is complete, it will print something that looks like this:

{
    "latencies": {
        "some_function": 0.000,
        "some_other_function": 0.000,
        "some_process_name": 0.000
    },
    "metrics": {}
}

Set any explicit identifiers needed:

@tracker
def some_function():
    ...

    with tracker.timer('some_process_name'):
        try:
            some_other_function()
            # Do stuff here
            ...
        except Exception as error:
            # Count of errors to be aggregated
            tracker.set_metric('some_function_errors', 1)
            # Name of error; simple string values are OK too!
            tracker.set_metric(
                'some_function_error_name', error.__class__.__name__
            )
            # Identifiers appear as top-level keys in the output;
            # e.g. a UUID, matching the sample output below
            tracker.set_identifier('correlation_id', str(uuid.uuid4()))
            # Do stuff here
            ...

@tracker.track
def some_other_function():
    ...

When this code is executed, after the context created by the @tracker decorator is complete, it will print something that looks like this:

{
    "latencies": {
        "some_function": 0.018
    },
    "metrics": {},
    "correlation_id": "00000000-0000-0000-0000-000000000001"
}

More detailed examples can be found in the examples directory in the repository.

A top-level ProcessTracker instance is required

This package was designed around the idea of a top-level entry-point function with zero-to-many child functions. Applying a @tracker.track decorator to a function that isn't called by the entry-point function decorated with @tracker will yield unexpected results, or no results at all.

Behavior in an asyncio context

This version will work with processes running under asyncio, for example:

with tracker.timer('some_async_process'):
    asyncio.run(some_function())

but it may only capture the time needed to create the async tasks/coroutines, rather than how long any of them take to execute, depending on the implementation pattern used.
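This pitfall can be illustrated with nothing but the standard library (no goblinfish code involved): a timer that closes as soon as a task is created measures almost nothing, while a timer that spans the await captures the real duration.

```python
import asyncio
import time

async def work() -> None:
    # Stand-in for a real async workload.
    await asyncio.sleep(0.2)

async def main() -> tuple[float, float]:
    start = time.perf_counter()
    task = asyncio.create_task(work())  # returns as soon as the task is scheduled
    creation_time = time.perf_counter() - start  # what a timer closed here would record
    await task  # the coroutine actually runs while we await it
    total_time = time.perf_counter() - start  # the real elapsed time
    return creation_time, total_time

creation_time, total_time = asyncio.run(main())
print(f"creation: {creation_time:.3f}s, total: {total_time:.3f}s")
```

A `tracker.timer(...)` context that exits before the coroutines are awaited behaves like the `creation_time` measurement above.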

A more useful approach, shown in the li-article-async-example.py module in the examples directory, is to encapsulate the async processes in an async function, then wrap each of that function's processes that needs to be timed in the context manager. Stripped down to a bare-minimum simulation, that function looks like this:

async def get_person_data():
    sleep_for = random.randrange(2_000, 3_000) / 1000
    with tracker.timer('get_person_data'):
        await asyncio.sleep(sleep_for)
    return {'person_data': ('Professor Plum', dict())}

…which will contribute to the logged/printed output in a more meaningful fashion:

{
    "latencies": {
        "get_person_data": 2215.262,
        "main": 8465.233
    }
}

Contribution guidelines

At this point, contributions are not accepted; I still need to finish configuring the repository, decide whether to set up automated builds for pull requests, and address several other items. That said, if you have an idea you want to propose as an addition, a bug you want to call out, etc., please feel free to contact the maintainer(s) (see below).

Who do I talk to?

The current maintainer(s) will always be listed in the [maintainers] section of the pyproject.toml file in the repository.

Future plans (To-Dos) and BYOLF

While this package should work nicely for anything that can use a generic JSON log-message format, there are any number of products designed to read log messages and ship them to some other service, usually with their own particular format requirements, in order to provide their own dashboards and alarms. I may look into those and write extras to accommodate them in the future, but I'm not confident that I'll have the time.

In the meantime, if there is a need for a specific log-message format, it's possible to BYOLF (Bring Your Own Log Format). Just write your own output function, and provide it as an argument to the ProcessTracker instance that is being created to track process items. What that would entail is:

  • Writing a function that accepts a single str parameter.
  • Deserializing the JSON string that parameter will contain.
  • Creating the custom log-message output using whatever data is relevant.
  • Writing that log-message in whatever manner is appropriate.
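The steps above can be sketched with nothing but the standard library; the "key=value" output format here is invented purely for illustration, not a format any real log shipper requires:

```python
import json

def format_line(output: str) -> str:
    # Step 2: deserialize the JSON payload the tracker passes in.
    data = json.loads(output)
    # Step 3: build a custom log line from whatever data is relevant.
    parts = [
        f"latency.{name}={value}"
        for name, value in data.get("latencies", {}).items()
    ]
    parts += [
        f"metric.{name}={value}"
        for name, value in data.get("metrics", {}).items()
    ]
    return " ".join(parts)

def my_log_formatter(output: str) -> None:
    # Steps 1 and 4: accept a single str and write the result out.
    print(format_line(output))

# Wired up the same way as the bare-bones example below:
# tracker = ProcessTracker(my_log_formatter)
```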

A very bare-bones example:

def my_log_formatter(output: str) -> None:
    ...  # Handle the "output" log-line here as needed.

tracker = ProcessTracker(my_log_formatter)

# ...

Though this package was designed to issue log-messages in a reasonably standard output (print or some logging package functionality), there's no functional reason that it couldn't, for example, write data straight to some database, call some third-party API, or whatever else.
