
Mark and trace events in your log like isotopic labeling

Project description

Mark and trace events in your log like isotopic labeling.



isotopic-logging is a little Python library which may help you cope with everyday logging overhead.

Often you need to track some operation over time, and that operation consists of different actions. Usually you log some information about those actions, and usually those actions do not happen close to each other in your code: they may live in different functions, modules or even packages. As a result, you may end up with logs where it is difficult to tell one operation from another. And that's not good.

The usual solution is to put some marker for the operation into log entries, e.g. to add some prefix. But here's a problem: even if your entire operation fits into a single function, you still need to stick the prefix into log messages multiple times. For example:

# Ugly logging example

def some_operation(some_name):
    LOG.debug("{0} | start".format(some_name))
    try:
        do_something()
    except Exception as e:
        LOG.debug("{0} | failure: {1}".format(some_name, e))
    else:
        LOG.debug("{0} | success".format(some_name))
    finally:
        LOG.debug("{0} | fullstop".format(some_name))

Stop, hey, what’s that smell? Yes, smells like copypasta spirit. And this makes your eyes hurt.

Why is this ugly? First of all, you need to follow the same prefix pattern all the time, everywhere in your project (no matter how big it is). That is a big responsibility, and it is all on you. For example, you may put an extra space between prefix and message (or use another delimiter) and then miss entries while using grep or similar tools.

Secondly, you are doomed to drag that .format() with its arguments behind each log message.

And yes, it will be a pain to change the way the prefix is formatted and delimited all over the project.

Moreover, if you want the same prefix to be used in nested function calls, you will need to drag it through them. That is a farewell to clean function signatures and reusable code. You will not be able to cache or memoize such functions either, because the prefix may be constantly changing. In short, it's a recipe for disaster. For example:

# Passing auxiliary information through functions is quite ugly too

def some_operation(arg1, arg2):
    prefix = "somehow calculated prefix"
    LOG.debug("{0} | start".format(prefix))

    some_action1(arg1, arg2, prefix)
    some_action2(arg1, arg2, prefix)

    LOG.debug("{0} | stop".format(prefix))

def some_action1(arg1, arg2, _prefix):
    LOG.debug("{0} | invoking some action".format(_prefix))

def some_action2(arg1, arg2, _prefix):
    LOG.debug("{0} | invoking some another action".format(_prefix))

To me, such an approach is absolutely unacceptable.

As an administrator, I want log entries marked with the same prefix within a single operation, so that I can distinguish and track operations even if the log is written from multiple threads or sources.

As a developer, I want to store the prefix in some context, so that I do not need to format it for each call to the logger, and so that I can access it within nested function calls without passing the prefix to a function directly and screwing up its semantics.
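One way such a context could be implemented is a thread-local stack of prefixes. This is only a sketch of the idea, not the library's actual internals; `prefix_context` and its helpers are hypothetical names:

import threading
from contextlib import contextmanager

# Hypothetical sketch: a thread-local stack lets nested calls reuse the
# innermost prefix without it being passed as an argument.
_local = threading.local()

def _stack():
    if not hasattr(_local, "prefixes"):
        _local.prefixes = []
    return _local.prefixes

@contextmanager
def prefix_context(prefix=None):
    stack = _stack()
    # Fall back to the innermost stored prefix when none is given.
    effective = prefix if prefix is not None else (stack[-1] if stack else "")
    stack.append(effective)
    try:
        # Yield a "marker" that prepends the effective prefix.
        yield lambda message: effective + message
    finally:
        stack.pop()

def inner_action():
    # No prefix is passed in; it is taken from the context.
    with prefix_context() as mark:
        return mark("inner message")

with prefix_context("OP-1 | ") as mark:
    top_entry = mark("start")          # "OP-1 | start"
    nested_entry = inner_action()      # inherits "OP-1 | " from the stack

The inner function keeps its natural signature, yet its log entries carry the caller's prefix.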

So, what can that look like? Personally, I propose the following alternatives:

# Better and cleaner way to mark log entries

def some_operation(some_name):
    with prefix_injector(some_name) as inj:
        LOG.debug(inj.mark("start"))
        try:
            do_something()
        except Exception as e:
            LOG.debug(inj.mark(
                "failure. Reason: {e}".format(e=e)))

Here we can see there is no more manual prefix injection. And it’s easier to read.

The second example can be rewritten as follows:

# Contactless prefix transmission which does no harm to your functions

def another_operation(arg1, arg2):
    with prefix_injector("somehow calculated prefix") as inj:
        LOG.debug(inj.mark("start"))

        some_action1(arg1, arg2)
        some_action2(arg1, arg2)

        LOG.debug(inj.mark("stop"))

def some_action1(arg1, arg2):
    with autoprefix_injector() as inj:
        LOG.debug(inj.mark("invoking some action"))

def some_action2(arg1, arg2):
    with autoprefix_injector() as inj:
        LOG.debug(inj.mark("invoking some another action"))

Notice that we do not pass the top-level prefix into function calls directly. And this is good. Also, each function falls back to its own default prefix, which is good again. This allows us to reuse those functions. We can even memoize them if we need to.

So, with such an approach we write less code for log message formatting, we keep our code modular and cleaner, and we can fall back to default prefixes inside reusable functions. All of this is provided by context managers which store prefixes in a special manner.

Having this, we are able to mark an operation by injecting a special harmless prefix into it and then see in the log how the operation executes through the entire project's ecosystem. This is like isotopic labeling. This is what I call isotopic logging.

Welcome aboard!


Installation

Simply get it at the Cheese Shop:

pip install isotopic-logging


Usage

The main concept of this library is to define a prefix injector once and then ask it to mark strings in the same manner.
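Stripped of all context magic, that concept boils down to an object that remembers a prefix and prepends it on demand. The following is an illustrative sketch only; SimplePrefixInjector is not a real class from the library:

# Illustrative sketch of the core idea, not the library's real API.
class SimplePrefixInjector:
    def __init__(self, prefix, delimiter=" | "):
        # The stored prefix already includes the delimiter,
        # matching the "prefix" attribute shown later in this document.
        self.prefix = prefix + delimiter

    def mark(self, message):
        # Every marked string gets exactly the same prefix.
        return self.prefix + message

inj = SimplePrefixInjector("foo")
marked = [inj.mark("bar"), inj.mark("buz")]  # ["foo | bar", "foo | buz"]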

The library provides several default prefix injectors which are described below.

Direct injector

direct_injector adds to strings exactly the prefix you give it:

from isotopic_logging import direct_injector

with direct_injector("foo") as inj:
    print(inj.mark("bar"))
    print(inj.mark("buz"))

# Output:
# "foobar"
# "foobuz"

Prefix injector

prefix_injector acts the same way as direct_injector, but in addition it puts a delimiter between the prefix and the target message:

from isotopic_logging import prefix_injector

with prefix_injector("foo") as inj:
    print(inj.mark("bar"))
    print(inj.mark("buz"))

# Output:
# "foo | bar"
# "foo | buz"

As you can see, the default delimiter is " | " (space-pipe-space). You can change it to your own:

with prefix_injector("foo", delimiter="::") as inj:
    print(inj.mark("bar"))
    print(inj.mark("buz"))

# Output:
# "foo::bar"
# "foo::buz"

Autoprefix injector

If you just want to mark your messages somehow and you do not care how exactly, then autoprefix_injector is at your service:

from isotopic_logging import autoprefix_injector

with autoprefix_injector() as inj:
    print(inj.mark("foo"))
    print(inj.mark("bar"))

with autoprefix_injector() as inj:
    print(inj.mark("buz"))

# Output:
# "E340F3 | foo"
# "E340F3 | bar"
# "172077 | buz"

Here we see that autoprefix_injector acts like prefix_injector, but instead of a predefined prefix it uses something else. This something else is called an operation ID, or OID.
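An OID like the ones above can be produced very cheaply. The following is a hedged sketch of how a UUID-based OID generator might work; the library's real generate_uuid_based_oid may differ in details:

import uuid

# Hypothetical sketch: derive a short uppercase hex OID from a random UUID.
def uuid_oid_generator(length=6):
    while True:
        yield uuid.uuid4().hex[:length].upper()

gen = uuid_oid_generator()
oid = next(gen)  # e.g. something like "E340F3"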

You can use your own OID generator by passing it via oid_generator argument (see section Customization -> Custom OID generators).

And, of course, you can also set your own delimiter.

Hybrid injector

If you need a static prefix, but want it to differ a bit per operation, then you may use hybrid_injector. It acts like autoprefix_injector, but allows you to have a static part as with plain prefix_injector:

from isotopic_logging import hybrid_injector

with hybrid_injector("foo") as inj:
    print(inj.mark("bar"))
    print(inj.mark("buz"))

with hybrid_injector("foo") as inj:
    print(inj.mark("qux"))

# Output:
# "EC9C6B | foo | bar"
# "EC9C6B | foo | buz"
# "59A8D6 | foo | qux"

hybrid_injector accepts a custom oid_generator and a custom delimiter, just like the previous injectors.

Nested function calls

This is the sugar part of the library. Imagine you need to execute a single operation that passes through loosely-coupled or quite different pieces of code (functions, objects, modules, etc). This can be easy enough.

Say, you have some independent functions which log what they do:

def add(a, b):
    with autoprefix_injector() as inj:
        LOG.debug(inj.mark(
            "adding {a} to {b}"
            .format(a=a, b=b)))

        result = a + b

        LOG.debug(inj.mark(
            "resulting into {0}".format(result)))

        return result

def multiply(a, b):
    with autoprefix_injector() as inj:
        LOG.debug(inj.mark(
            "making production of {a} and {b}"
            .format(a=a, b=b)))

        result = a * b

        LOG.debug(inj.mark(
            "resulting into {0}".format(result)))

        return result

You will get different prefixes if you run them separately:

add(1, 2)
# Output:
# "2C091F | adding 1 to 2"
# "2C091F | resulting into 3"
# 3

multiply(1, 2)
# Output:
# "A15D88 | making production of 1 and 2"
# "A15D88 | resulting into 2"
# 2

And now let's define a function which aggregates both of those operations:

def add_and_multiply(a, b):
    with autoprefix_injector() as inj:
        LOG.debug(inj.mark(
            "adding and multiplying {a} with {b}..."
            .format(a=a, b=b)))

        result = (add(a, b), multiply(a, b))

        LOG.debug(inj.mark(
            "end result is {0}".format(result)))

        return result

If we call it, we'll see that all prefixes are inherited from the top-level function call:

add_and_multiply(1, 2)

# Output:
# "1543A0 | adding and multiplying 1 with 2..."
# "1543A0 | adding 1 to 2"
# "1543A0 | resulting into 3"
# "1543A0 | making production of 1 and 2"
# "1543A0 | resulting into 2"
# "1543A0 | end result is (3, 2)"
# (3, 2)

Yay! There’s no mess with passing prefixes inside other functions!

This works not only with autoprefix_injector: any type of prefix injector will work just fine.

Prefix transmission

Sometimes you may need to track an operation executed across different processes. For example, you handle an HTTP request and then start a background job (say, via Celery). In that case, you need to pass your prefix to the other process.

Every injector has a prefix attribute, e.g.:

with autoprefix_injector() as inj:
    print(inj.prefix)

# Output:
# "08C22E | "

So, you are able to throw the prefix to the right place. But how can you pick it up? Well, this is the main purpose direct_injector was created for. Let's imitate prefix transmission:

from celery import shared_task
from mock import Mock

def add_view(request):
    with autoprefix_injector() as inj:
        username = request.user.username
        x, y = request.data['x'], request.data['y']
        LOG.debug(inj.mark(
            "user '{username}' visits add_view() with x={x} and y={y} as "
            "arguments"
            .format(username=username, x=x, y=y)))

        add.apply_async((x, y, inj.prefix))

@shared_task
def add(x, y, _operation_prefix):
    with direct_injector(_operation_prefix) as inj:
        result = x + y
        LOG.debug(inj.mark(
            "{x} + {y} = {result}"
            .format(x=x, y=y, result=result)))
        return result

request = Mock(
    user=Mock(username='abuser'),
    data=dict(x=1, y=2),
)
add_view(request)

# Log output:
# "C71F3F | user 'abuser' visits add_view() with x=1 and y=2 as arguments"
# "C71F3F | 1 + 2 = 3"


Customization

Custom OID generators

By default, an OID is created by default_oid_generator:

from isotopic_logging.generators import default_oid_generator

for x in range(5):
    print(next(default_oid_generator))

# Output:
# "9592BC"
# "58B974"
# "9E403B"
# "C180B2"
# "C9FE44"

The default generator is an instance of isotopic_logging.generators.generate_uuid_based_oid created with the default OID_LENGTH.

So, you can change the length of the autogenerated prefix:

from isotopic_logging import autoprefix_injector
from isotopic_logging.generators import generate_uuid_based_oid

generator = generate_uuid_based_oid(12)

with autoprefix_injector(oid_generator=generator) as inj:
    print(inj.mark("bar"))
    print(inj.mark("buz"))

# Output:
# "B2092A6EE743 | bar"
# "B2092A6EE743 | buz"

Of course, you can define your own generator:

OID_LENGTH = 5
OID_MAX_VALUE = 10 ** OID_LENGTH - 1
OID_FORMAT = "{{0:>0{length}}}".format(length=OID_LENGTH)

def generate_oid():
    i = 0
    while True:
        yield OID_FORMAT.format(i)
        i = 0 if i == OID_MAX_VALUE else i + 1

generator = generate_oid()

with autoprefix_injector(oid_generator=generator) as inj:
    print(inj.mark("foo"))
    print(inj.mark("bar"))

with autoprefix_injector(oid_generator=generator) as inj:
    print(inj.mark("buz"))

# Output:
# "00000 | foo"
# "00000 | bar"
# "00001 | buz"

If you do not want to bother with passing generators each time, you can use the power of partials:

from functools import partial

custom_injector = partial(autoprefix_injector, oid_generator=generate_oid())

with custom_injector() as inj:
    print(inj.mark("foo"))
    print(inj.mark("bar"))

with custom_injector() as inj:
    print(inj.mark("buz"))

# Output:
# "00000 | foo"
# "00000 | bar"
# "00001 | buz"


Changelog

  • 1.0.0 (May 3, 2015)

    Initial version

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

isotopic-logging-1.0.0.tar.gz (13.6 kB)

Uploaded source
