More Logs! Structured Logging and Exception Handling
This library provides two main features:
- Structured logging - all output is JSON (with options to serialize to text for humans)
- Exception handling woven in - good logs must represent what happened, and that can only be done if the logging library is intimately familiar with the (exceptional) code paths taken.
Motivation
Exception handling and logging are undeniably linked. There are many instances where exceptions are raised and must be logged, and others where the subsuming system can fully handle the exception, and no log should be emitted. Exception handling semantics are great because they decouple the cause from the solution, but this can be at odds with clean logging - which couples raising and catching to make appropriate decisions about what to emit to the log.
This logging module is additionally responsible for raising exceptions, collecting the trace and severity, and then deducing if it must be logged, or if it can be ignored because something can handle it.
Basic Usage
Use `logger.info()` for all logging:

```python
from mo_logs import logger

logger.info("Hello, World!")
```

There is no need to create logger objects. The `Log` module keeps track of the what, where, and when of every call.
Importing
If you have existing logger code, you can change your import statement: replace `import logging` with `import mo_logs as logging`. Since `mo-logs` tracks calling context, you may simply import the same logger everywhere:

```python
from mo_logs import logger
```
Using named parameters
All logging calls accept a string template with named parameters. Keyword arguments can be added to the call to provide values. The template and arguments are not combined at call time, rather they are held in a JSON-izable data structure for structured logging. The template is only expanded if the log is serialized for humans.
```python
logger.info("Hello, {{name}}!", name="World!")
```
Do not use Python's string formatting features (f-strings, the `%` operator, or `str.format()`): any of these will expand the string template at call time, which is a parsing nightmare for log analysis tools.
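To see why, compare what each approach hands to the logger. A plain-Python sketch of the idea (the `record` shape here is illustrative, not the exact mo-logs structure):

```python
name = "World"

# Call-time expansion: the value is fused into the string, so log
# analysis tools can no longer group records by template.
flattened = "Hello, %s!" % name

# Deferred expansion: template and parameters travel separately, so
# the structured record keeps both for later querying.
record = {"template": "Hello, {{name}}!", "params": {"name": name}}

assert flattened == "Hello, World!"
assert record["params"] == {"name": "World"}
```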
Parametric parameters
All the `Log` functions accept a `default_params` dict as a second parameter, like so:

```python
logger.info("Hello, {{name}}!", {"name": "World!"})
```
This is meant for the situation where your code already has a bundled structure you wish to use as a source of parameters. If keyword parameters are used, they will override the default values. Be careful when sending whole data structures: they will be logged!
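The override behaviour described above is ordinary dictionary merging, which can be sketched in plain Python (illustrative only, not mo-logs internals):

```python
# Default parameters come from the bundled structure; keyword
# arguments override them, key by key.
defaults = {"name": "World!", "attempt": 1}
keyword_args = {"name": "Everyone"}

params = {**defaults, **keyword_args}   # later keys win

assert params == {"name": "Everyone", "attempt": 1}
```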
Please, never use `locals()`:

```python
def worker(value):
    name = "tout le monde!"
    password = "123"
    logger.info("Hello, {{name}}", locals())  # DO NOT DO THIS!
```

Despite being a wonderful shortcut for logging, `locals()` is dangerous because it also picks up sensitive local variables. Even if `{{name}}` is the only value in the template, the whole `locals()` dict will be sent to the structured loggers for recording.
Formatting parameters
There are a variety of formatters, which can be applied using the pipe (`|`) symbol. In this example we cast `name` to uppercase:

```python
logger.info("Hello, {{name|upper}}!", name="World!")
```
Some formatters accept arguments:
```python
logger.info("pi is {{pi|round(places=3)}}!", pi=3.14159265)
```
You can look at the `strings` module to see the formatters available.
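As an illustration of how pipe formatters compose, here is a simplified stdlib sketch of template expansion (not the actual `strings` module, which supports many more formatters and arguments):

```python
import re

# Minimal formatter registry; the real `strings` module has many more.
FORMATTERS = {
    "upper": str.upper,
    "lower": str.lower,
}

def expand(template, **params):
    """Expand {{name|formatter}} placeholders, applying an optional pipe formatter."""
    def replace(match):
        name, _, formatter = match.group(1).partition("|")
        value = params[name]
        if formatter:
            value = FORMATTERS[formatter](str(value))
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", replace, template)

assert expand("Hello, {{name|upper}}!", name="World") == "Hello, WORLD!"
assert expand("pi is {{pi}}", pi=3.14) == "pi is 3.14"
```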
Destination: Database!
All logs are structured logs; the parameters will be included, unchanged, in the log structure. This library also expects all parameter values to be JSON-serializable so they can be stored/processed by downstream JSON tools.
Example structured log:

```json
{
    "template": "Hello, {{name}}!",
    "params": {"name": "World!"},
    "severity": "NOTE",
    "format": "{{machine.name}} (pid {{machine.pid}}) - {{timestamp|datetime}} - {{thread.name}} - \"{{location.file}}:{{location.line}}\" - ({{location.method}}) - Hello, {{params.name}}!",
    "location": {
        "file": "/home/kyle/code/example.py",
        "line": 10,
        "method": "worker"
    },
    "machine": {
        "name": "klahnakoski-39477",
        "os": "Windows10",
        "pid": 18060,
        "python": "CPython"
    },
    "thread": {
        "id": 14352,
        "name": "Main Thread"
    },
    "timestamp": 1578673471
}
```
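Because such a record is plain data, emitting it for machines is just JSON serialization. A minimal stdlib sketch, using a trimmed version of the record for brevity:

```python
import json

# A trimmed version of the structured record shown above.
record = {
    "template": "Hello, {{name}}!",
    "params": {"name": "World!"},
    "severity": "NOTE",
    "timestamp": 1578673471,
}

# Machine-readable emission is one json.dumps call away...
line = json.dumps(record, sort_keys=True)

# ...and downstream tools can parse it back and query by field.
parsed = json.loads(line)
assert parsed["params"]["name"] == "World!"
assert parsed == record
```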
Exception Handling
Instead of `raise`, use `logger.error()`:

```python
logger.error("This will throw an error")
```

The actual call will always raise an exception, and it manipulates the stack trace to ensure the caller is appropriately blamed. Feel free to use the `raise` keyword (as in `raise logger.error("")`) if that looks nicer to you.
Always chain your exceptions
The `cause` parameter accepts an `Exception`, or a list of exceptions. Chaining is generally good practice that helps you find the root cause of a failure.

```python
try:
    pass  # do something that might raise an exception
except Exception as cause:
    logger.error("Describe what you were trying to do", cause=cause)
```
Use `raise from`?

Python 3 attaches the full stack trace to exceptions and allows chaining with `raise ... from`. We can replace

```python
from mo_logs import logger

logger.error("description", cause=cause)
```

with

```python
from mo_logs.exceptions import ERROR, Except

raise Except(ERROR, "description") from cause
```

which is a bit more clunky, especially when passing dynamic parameters. Plus it breaks the `Log.<type>()` calling pattern; switching between an `error` and a `warning` becomes more than a name change.
Use named parameters in your error descriptions too
Error logging accepts keyword parameters just like `logger.info()` does:

```python
def worker(value):
    try:
        logger.info("Start working with {{key1}}", key1=value)
        # do something that might raise an exception
    except Exception as cause:
        logger.error("Failure to work with {{key2}}", key2=value, cause=cause)
```
No need to formally type your exceptions
An exception can be uniquely identified by the message template it is given; exceptions raised with the same template are the same type. You should have no need to create new exception types.
Testing for exception "types"
This library advocates chaining exceptions early and often, and this hides important exception types in a long causal chain. `mo-logs` allows you to easily test if a type (or string, or template) can be found in the causal chain by using the `in` keyword:

```python
def worker(value):
    try:
        pass  # do something that might raise an exception
    except Exception as cause:
        if "Failure to work with {{key2}}" in cause:
            # deal with the exception thrown in the above code, no matter
            # how many other exception handlers were in the chain
            ...
```
For those who abhor the use of magic strings, feel free to use constants instead:

```python
KEY_ERROR = "Failure to work with {{key}}"

try:
    logger.error(KEY_ERROR, key=42)
except Exception as cause:
    if KEY_ERROR in cause:
        logger.info("dealt with key error")
```
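To make the `in` test concrete, here is a toy sketch of an exception whose identity is its message template and whose `__contains__` walks the causal chain (illustrative only; the real class is `Except` in `mo_logs.exceptions`):

```python
class ChainedError(Exception):
    """Toy stand-in for mo_logs' Except: identity is the message template."""

    def __init__(self, template, cause=None):
        super().__init__(template)
        self.template = template
        self.cause = cause

    def __contains__(self, template):
        # Walk the causal chain looking for a matching template.
        error = self
        while error is not None:
            if error.template == template:
                return True
            error = getattr(error, "cause", None)
        return False

KEY_ERROR = "Failure to work with {{key}}"
inner = ChainedError(KEY_ERROR)
outer = ChainedError("Describe what you were trying to do", cause=inner)

assert KEY_ERROR in outer              # found deep in the causal chain
assert "unrelated template" not in outer
```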
If you can deal with an exception, then it will never be logged
When a caller catches an exception from a callee, it is the caller's responsibility to handle that exception, or re-raise it. There are many situations where a caller can be expected to handle exceptions, and in those cases logging an error would be deceptive.

```python
def worker(value):
    try:
        logger.error("Failure to work with {{key3}}", key3=value)
    except Exception as cause:
        pass  # try something else
```
Use `logger.warning()` if your code can deal with an exception, but you still want to log it as an issue:

```python
def worker(value):
    try:
        logger.info("Start working with {{key4}}", key4=value)
        # do something that might raise an exception
    except Exception as cause:
        logger.warning("Failure to work with {{key4}}", key4=value, cause=cause)
```
Don't lose your stack trace!

Be aware your `except` clause can also throw exceptions: in the event you catch a vanilla Python `Exception`, you run the risk of losing its stack trace. To prevent this, wrap your exception in an `Except` object, which will capture your trace for later use. Exceptions thrown from this `Log` library need not be wrapped because they have already captured their trace. If you wrap an `Except` object, you simply get back the object you passed.

```python
try:
    pass  # do some work
except Exception as cause:
    cause = Except.wrap(cause)
    # do some fancy error recovery
```
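A toy sketch of what such wrapping might look like under the hood (assumed behaviour for illustration, not mo-logs' actual implementation):

```python
import traceback

class WrappedError(Exception):
    """Toy stand-in for Except: captures the active stack trace at wrap time."""

    @classmethod
    def wrap(cls, cause):
        if isinstance(cause, cls):      # already wrapped: return it unchanged
            return cause
        wrapped = cls(str(cause))
        # Capture the trace while the exception is still being handled,
        # before it is lost.
        wrapped.trace = traceback.format_exc()
        return wrapped

try:
    1 / 0
except Exception as cause:
    wrapped = WrappedError.wrap(cause)

assert "ZeroDivisionError" in wrapped.trace
assert WrappedError.wrap(wrapped) is wrapped    # wrapping is idempotent
```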
Always catch all Exceptions
Catching all exceptions is preferred over the only-catch-what-you-can-handle strategy. First, exceptions are not lost because we are chaining. Second, we catch unexpected exceptions early and annotate them with a description of what the local code was intending to do. This annotation effectively groups the possible errors (known or not) into a class, which can be used by callers to decide on appropriate mitigation.
To repeat: When using dependency injection, callers can not reasonably be expected to know about the types of failures that can happen deep down the call chain. This makes it vitally important that methods summarize all exceptions, both known and unknown, so their callers have the information to make better decisions on appropriate action.
For example: An abstract document container, implemented on top of a SQL database, should not emit SQLExceptions of any kind: A caller that uses a document container should not need to know how to handle SQLExceptions (or any other implementation-specific exceptions). Rather, in this example, the caller should be told it "can not add a document", or "can not remove a document". This allows the caller to make reasonable decisions when they do occur. The original cause (the SQLException) is in the causal chain.
Another example, involves nested exceptions: If you catch a particular type of exception, you may inadvertently catch the same type of exception from deeper in the call chain. Narrow exception handling is an illusion. Broad exception handling will force you to consider a variety of failures early; force you to consider what it means when a block of code fails; and force you to describe it for others.
Don't make methods you do not need
There is an argument that suggests you should break your code into logical methods, rather than catching exceptions: the method name will describe the action that failed, and the stack trace can be inspected to make mitigation decisions. Additional methods are a poor solution:
- More methods means more complexity; the programmer must find the method, remember the method, and wonder if the method is used elsewhere.
- Methods can be removed while refactoring; exceptions make it clear the error is important
- Compiler optimizations can interfere with the call stack
- The method name is not an appropriate description of the problem: Many words may be required for clarity.
- Code that inspects its own stack trace is messy code.
- A stack trace does not include runtime values that are vital for describing the problem.
Log 'Levels'
The `mo-logs` module has no concept of logging "levels". Instead, it is expected you use debug variables: variables prefixed with `DEBUG_` are used to control the logging output.

```python
# simple.py
DEBUG_SHOW_DETAIL = True

def worker():
    if DEBUG_SHOW_DETAIL:
        logger.info("Starting")
    # do work here
    if DEBUG_SHOW_DETAIL:
        logger.info("Done")

def main():
    try:
        settings = startup.read_settings()
        logger.start(settings.debug)
        # do work here
    except Exception as cause:
        logger.error("Complain, or not", cause=cause)
    finally:
        logger.stop()
```
This pattern of using explicit debug variables allows the programmer to switch logging on and off in individual subsystems that share that variable: either multiple debug variables in a single module, or multiple modules sharing a single debug variable.
These debug variables can be switched on/off by configuration file:
```javascript
// settings.json
{
    "debug": {
        "constants": {"simple.DEBUG_SHOW_DETAIL": false}
    }
}
```
To keep logging to a single line, you may consider this pattern:

```python
DEBUG and logger.info("error: {{value}}", value=expensive_function())
```

Notice that `expensive_function()` is not run when `DEBUG` is false.
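The skipping is plain Python short-circuit evaluation of `and`: when the left side is falsy, the right side is never evaluated. A quick stdlib demonstration:

```python
calls = []

def expensive_function():
    """Stand-in for a costly computation; records that it was invoked."""
    calls.append("called")
    return 42

DEBUG = False
DEBUG and print("error:", expensive_function())   # right side never evaluated
assert calls == []

DEBUG = True
DEBUG and print("error:", expensive_function())   # now it runs
assert calls == ["called"]
```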
Log Configuration and Setup
The `mo-logs` library will log to the console by default. `logger.start(settings)` will redirect the logging to other streams, as defined by the settings:

- logs - list of all log streams and their parameters
- trace - show more details in every log line (default False)
- cprofile - enable the builtin Python cProfiler, ensuring the profiler is turned on for all spawned threads (default False)
- constants - map from the absolute path of module constants to the values they will be assigned; used mostly to set debugging constants in modules
Logging should be the first thing to be set up (aside from reading the settings). For this reason, applications should have the following structure:
```python
from mo_logs import logger

def main():
    try:
        settings = startup.read_settings()
        logger.start(settings.debug)
        # do work here
    except Exception as cause:
        logger.error("Complain, or not", cause=cause)
    finally:
        logger.stop()
```
or, more simply:

```python
from mo_logs import LoggingContext

def main():
    with LoggingContext():
        pass  # do work here
```
Example configuration file
```json
{"debug": {
    "trace": true,
    "log": [
        {
            "class": "logging.handlers.RotatingFileHandler",
            "filename": "examples/logs/examples_etl.log",
            "maxBytes": 10000000,
            "backupCount": 100,
            "encoding": "utf8"
        },
        {
            "class": "graypy.GELFUDPHandler",
            "host": "localhost",
            "port": 12201
        },
        {
            "log_type": "email",
            "from_address": "klahnakoski@mozilla.com",
            "to_address": "klahnakoski@mozilla.com",
            "subject": "[ALERT][DEV] Problem in ETL Spot",
            "$ref": "file://~/private.json#email"
        },
        {
            "log_type": "console"
        }
    ]
}}
```
Capturing logs
You can receive a copy of all logs and send them to your own logging with `logger.set_logger(myLogger)`, where `myLogger` is an instance that accepts calls to `write(template, parameters)`. If your logging library can only handle strings, then use `message = expand_template(template, params)`.
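A custom logger only needs that `write(template, parameters)` method. A minimal stdlib sketch that collects records in memory (the class name is illustrative):

```python
class ListLogger:
    """Collects structured log records; an instance of a shape suitable
    for logger.set_logger()."""

    def __init__(self):
        self.records = []

    def write(self, template, parameters):
        # Keep template and parameters separate, preserving structure.
        self.records.append((template, parameters))

my_logger = ListLogger()
my_logger.write("Hello, {{name}}!", {"name": "World!"})

assert my_logger.records == [("Hello, {{name}}!", {"name": "World!"})]
```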
Problems with Python 2.x Logging
Python's default `logging` module comes close to doing the right thing, but fails:

- It has keyword parameters, but they are expanded at call time, so the values are lost in a string.
- It has `extra` parameters, but they are lost if not used by the matching `Formatter`.
- It even has a stack trace with the `exc_info` parameter, but only if an exception is being handled.
- Python 2.x has no builtin exception chaining, but Python 3 does.
More Reading
- Structured Logging is Good - https://sites.google.com/site/steveyegge2/the-emacs-problem