pyctxlog
Generic contextual logging for Python services. A small decorator (and underlying context manager) that wraps any sync or async function and auto-tags every log line inside with per-call context fields — request id, job name, tenant, trace id, anything you want.
pyctxlog is unopinionated about your framework. It works equally well as a
Django/FastAPI middleware, a Celery task wrapper, an RPC handler context, or
just on plain functions.
Why
Tracing what a single request, task or job did across many log lines is
painful when each line is just an unstructured string. pyctxlog solves
this with two primitives:
- `@log_context` — wraps a function with a context block. Anything that logs through a `ContextualLogger` while the wrapped call is on the stack sees the context's fields rendered into every log line.
- `LogContext` — the underlying `with`-block primitive. Use it directly when you can't put a decorator on a function (e.g. inside framework middleware).
Both push their fields onto a contextvars.ContextVar, so the context is
correctly isolated across threads, asyncio tasks, and concurrent requests.
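The isolation guarantee follows from how `contextvars` works, not from anything pyctxlog-specific. A minimal stdlib sketch (the variable name `_ctx` is illustrative, not pyctxlog's actual internal name):

```python
import asyncio
import contextvars

# A ContextVar holding the active context fields, analogous to the one
# pyctxlog pushes fields onto (the name _ctx is illustrative).
_ctx: contextvars.ContextVar[dict] = contextvars.ContextVar("ctx", default={})

async def job(name: str) -> str:
    # Each asyncio task runs in its own copy of the context, so a set()
    # here is invisible to the concurrently running sibling task.
    _ctx.set({"job": name})
    await asyncio.sleep(0)  # yield control to the other task
    return _ctx.get()["job"]

async def main() -> list:
    return await asyncio.gather(job("a"), job("b"))

print(asyncio.run(main()))  # each task sees only its own field
```

Because `asyncio.gather` wraps each coroutine in a task with a copied context, neither task can clobber the other's fields, which is exactly the property pyctxlog relies on for concurrent requests.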
Install
```bash
pip install pyctxlog
```
Requires Python 3.9+. The only runtime dependency is ulid-py (used for
auto-generated request ids).
30-second quick start
```python
import logging

from pyctxlog import ContextualLogger, log_context

log = ContextualLogger(
    name="orders-api",
    static_fields={  # constants rendered in every line
        "service": "orders-api",
        "env": "production",
    },
    enable_logging=True,
    log_level=logging.INFO,
    extra_fields=["job", "id"],  # dynamic fields pulled from context
)

@log_context(fields={"job": "ingest_orders"}, logger=log)
def run_ingest(batch_id: str) -> int:
    log.info(f"processing batch {batch_id}")
    return 42

run_ingest("BATCH-001")
```
Output:
```
2026-04-10 12:34:56 | INFO | orders-api | production | job=ingest_orders | id=01HQ... | 🚀 run_ingest() called 🚀
2026-04-10 12:34:56 | INFO | orders-api | production | job=ingest_orders | id=01HQ... | ✅ processing batch BATCH-001 ✅
2026-04-10 12:34:56 | INFO | orders-api | production | job=ingest_orders | id=01HQ... | 🕛 run_ingest() completed in 0.0001s 🕛
```
`static_fields` are baked into the logger at construction time and rendered verbatim on every line. `extra_fields` are dynamic: their values are pulled fresh from the active context (set by `LogContext`, `BaseLogContext`, or `@log_context`) each time the logger emits a record.
The library makes no assumption about what your fields mean — there's no
hardcoded service_name, environment, tenant or anything else. Pass
whatever flat key→value pairs make sense for your project.
Async works the same way
```python
@log_context(fields={"job": "ingest_orders"}, logger=log)
async def run_ingest_async(batch_id: str) -> int:
    log.info(f"processing batch {batch_id}")
    return 42
```
The decorator detects coroutine functions via inspect.iscoroutinefunction
and returns the right wrapper automatically.
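A minimal sketch of that sync/async dispatch, using only the stdlib (the decorator name `log_context_sketch` and the `print` calls stand in for pyctxlog's real entry/exit logging):

```python
import asyncio
import functools
import inspect

def log_context_sketch(name=None):
    """Hypothetical sketch of @log_context's sync/async dispatch."""
    def decorate(func):
        label = name or func.__qualname__
        if inspect.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                print(f"{label}() called")     # entry line
                result = await func(*args, **kwargs)
                print(f"{label}() completed")  # exit line
                return result
            return async_wrapper

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            print(f"{label}() called")
            result = func(*args, **kwargs)
            print(f"{label}() completed")
            return result
        return sync_wrapper
    return decorate

@log_context_sketch()
def sync_job():
    return 1

@log_context_sketch()
async def async_job():
    return 2

print(sync_job(), asyncio.run(async_job()))  # both paths wrap transparently
```

Checking `inspect.iscoroutinefunction` before choosing a wrapper is what keeps an `async def` callee awaitable after decoration; wrapping it in a plain sync wrapper would return an un-awaited coroutine instead.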
Decorator options
| Option | Default | What it does |
|---|---|---|
| `fields` | `None` | Static dict of context fields |
| `context_cls` | `None` | A `BaseLogContext` subclass — built from the matching kwargs of the wrapped call |
| `logger` | `None` | Which `ContextualLogger` to log entry/exit/error through |
| `name` | `func.__qualname__` | Display name for entry/exit lines |
| `capture_exceptions` | `True` | Log unhandled exceptions with traceback, then re-raise |
| `log_args` | `False` | Log the function arguments on entry |
| `log_result` | `False` | Log the return value on exit |
| `slow_threshold_seconds` | `None` | Emit a warning if the call takes at least this long |
| `id_generator` | ULID | Callable returning the auto-generated context id |
| `extra` | `None` | Alias for `fields`, kept for symmetry with stdlib logging |
Global configuration
If you find yourself passing the same logger=... (or capture_exceptions=,
log_args=, slow_threshold_seconds=, ...) to every @log_context call,
you can set those as module-level defaults once at startup with
configure():
```python
from pyctxlog import ContextualLogger, configure, log_context

log = ContextualLogger(
    name="orders-api",
    static_fields={"service": "orders-api", "env": "prod"},
    extra_fields=["job", "id"],
)

configure(
    logger=log,                                # default logger for every @log_context
    capture_exceptions=True,
    log_args=False,
    slow_threshold_seconds=2.0,                # warn on any call >= 2s
    default_fields={"service_color": "blue"},  # auto-tag every call
)

@log_context(fields={"job": "ingest"})  # no logger= needed now
def run_ingest(batch_id: str) -> int:
    ...
```
Per-call arguments always win over config. To explicitly override a
configured default back to "nothing" on a single decorator, pass None:
```python
@log_context(fields={"job": "quiet"}, logger=None)  # suppress the configured logger
def run_quiet():
    ...
```
The full list of overridable keys: `logger`, `capture_exceptions`, `log_args`, `log_result`, `slow_threshold_seconds`, `id_generator`, `id_field`, `default_fields`. Call `reset_config()` to restore the factory defaults (useful in tests).
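Making an explicit `None` beat a configured default requires distinguishing "not passed" from "passed as None", which is typically done with a sentinel. A sketch of that resolution order under assumed names (`_UNSET`, `_config`, and `resolve` are all illustrative, not pyctxlog's API):

```python
# Hypothetical sketch of per-call-wins config resolution. A sentinel
# marks "argument not passed", so None stays a meaningful per-call value.
_UNSET = object()
_config = {"logger": "configured-logger", "log_args": False}

def resolve(key, per_call=_UNSET):
    # Per-call arguments always win; only _UNSET falls back to the config.
    if per_call is not _UNSET:
        return per_call
    return _config.get(key)

print(resolve("logger"))            # configured default
print(resolve("logger", "custom"))  # per-call override wins
print(resolve("logger", None))      # explicit None suppresses the default
```

Without the sentinel, a default of `logger=None` on the decorator would be indistinguishable from "use the configured logger", and the suppress-on-`None` behavior described above could not work.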
Typed contexts via BaseLogContext
For long-lived shapes (e.g. an HTTP request) you can declare a typed context once and reuse it everywhere:
```python
from dataclasses import dataclass

from pyctxlog import BaseLogContext, log_context

@dataclass
class HttpRequestContext(BaseLogContext):
    method: str = ""
    path: str = ""
    user_id: str = "anonymous"

    def __post_init__(self):
        BaseLogContext.__init__(self)

@log_context(context_cls=HttpRequestContext, logger=log)
def handle_request(method: str, path: str, user_id: str):
    log.info("handling request")
```
When context_cls is set, the decorator inspects the wrapped function's
arguments and builds the context from any matching parameter names.
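That name matching can be done with `inspect.signature` plus the dataclass field list. A self-contained sketch of the idea (`build_context` is a hypothetical helper, not pyctxlog's internal function):

```python
import inspect
from dataclasses import dataclass, fields

@dataclass
class HttpRequestContext:
    method: str = ""
    path: str = ""
    user_id: str = "anonymous"

def build_context(context_cls, func, args, kwargs):
    """Hypothetical sketch: bind the call's arguments and keep only the
    names that match the context class's declared fields."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()
    declared = {f.name for f in fields(context_cls)}
    matching = {k: v for k, v in bound.arguments.items() if k in declared}
    return context_cls(**matching)

def handle_request(method: str, path: str, body: bytes = b""):
    ...

ctx = build_context(HttpRequestContext, handle_request, ("GET", "/orders"), {})
print(ctx)  # user_id falls back to its dataclass default; body is ignored
```

Parameters with no matching field (`body` here) are simply dropped, and fields with no matching parameter (`user_id`) fall back to their dataclass defaults, so the wrapped function's signature never has to mirror the context exactly.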
Use as a context manager (for middleware)
@log_context is a thin wrapper around LogContext. When you can't put a
decorator on a function (e.g. inside framework middleware), use the context
manager directly:
```python
from pyctxlog import LogContext

with LogContext({"job": "ingest", "tenant": "acme"}):
    log.info("starting work")  # auto-tagged with job + tenant + id
```
Recipes
Django middleware
```python
from pyctxlog import LogContext, ContextualLogger

log = ContextualLogger(
    name="my-django-app",
    static_fields={"service": "my-django-app"},
    extra_fields=["request_id", "method", "path", "user_id"],
)

class PyCtxLogMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        with LogContext({
            "method": request.method,
            "path": request.path,
            "user_id": getattr(getattr(request, "user", None), "id", "anonymous"),
        }):
            return self.get_response(request)
```
Then add it to MIDDLEWARE in settings.py.
FastAPI middleware
```python
from fastapi import FastAPI, Request

from pyctxlog import LogContext, ContextualLogger

log = ContextualLogger(
    name="my-fastapi-app",
    static_fields={"service": "my-fastapi-app"},
    extra_fields=["request_id", "method", "path"],
)

app = FastAPI()

@app.middleware("http")
async def pyctxlog_middleware(request: Request, call_next):
    async with LogContext({
        "method": request.method,
        "path": request.url.path,
    }):
        return await call_next(request)
```
Celery task
```python
from pyctxlog import log_context

@app.task
@log_context(fields={"job": "send_invoice_email"}, logger=log, log_args=True)
def send_invoice_email(invoice_id: str, recipient: str):
    ...
```
RabbitMQ RPC handler
```python
def on_rpc_message(channel, method, properties, body):
    with LogContext({
        "rpc_routing_key": method.routing_key,
        "correlation_id": properties.correlation_id,
        "reply_to_queue": properties.reply_to,
    }):
        handle_rpc(body)
```
API reference
```python
from pyctxlog import (
    ContextualLogger,     # the configured logger
    log_context,          # the decorator
    LogContext,           # context manager primitive
    BaseLogContext,       # base for typed contexts
    app_context,          # the underlying ContextVar
    get_current_context,  # snapshot of currently-active fields
    Config,               # the config dataclass
    configure,            # set module-level decorator defaults
    get_config,           # read the current config
    reset_config,         # restore factory defaults
    __version__,
)
```
Development
```bash
git clone https://github.com/youssefmramzy/pyctxlog.git
cd pyctxlog
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"   # quoted so zsh doesn't glob the extras marker

pytest -v
ruff check src tests
mypy src/pyctxlog
```
Building & publishing
```bash
python -m build                            # produces dist/pyctxlog-X.Y.Z-py3-none-any.whl + .tar.gz
twine upload --repository testpypi dist/*  # smoke test on TestPyPI first
twine upload dist/*                        # then publish for real
```
License
MIT — see LICENSE.