structlog-opinionated
Opinionated structlog configuration for asyncio servers.
Overview
structlog-opinionated provides a pre-configured, production-ready setup for structlog tailored specifically for long-running asyncio servers. It prescribes conventions and sensible defaults to get you up and running quickly with structured logging and provides tooling needed as complexity scales.
Features
- Enhanced Context Management API: Clean logger interface with .bind(), .context(), and .attach() methods (builds on structlog's bound_contextvars)
- Asyncio-optimized: Configured for use with async/await patterns
- Production-ready: Sensible defaults for production environments
- Smart output detection: Auto-selects JSON or colored console output based on TTY detection
- Structured exception logging: Exceptions captured as queryable JSON with stack frames
- Call-site tracking: Automatic file path, line number, and function name in every log
- Scoped context binding: Easy context management for request tracking across async boundaries with automatic cleanup
- File logging: Optional timestamped JSONL file output
- Module-specific debugging: Enable debug logging for specific modules
- Environment variables: Configure via environment variables or .env files
- Type-safe: Full type hints for better IDE support
- Zero-config: Works out of the box with sensible defaults
Installation
```bash
uv add structlog-opinionated
```

Or with pip:

```bash
pip install structlog-opinionated
```
Quick Start
```python
import structlog_opinionated

# Get a logger instance (setup is automatic!)
logger = structlog_opinionated.get_logger(__name__)

# Add permanent bindings - .bind() returns a NEW logger, so reassign!
logger = logger.bind(service="api", version="1.0")

# Temporary scoped bindings with auto-cleanup
with logger.context(request_id="req_123", user_id=42):
    logger.info("Processing request", action="create")
    # Logs include: service, version, request_id, user_id, action

    # Attach more context mid-block as needed
    logger.attach(validation_step="schema")
    logger.info("Validating input")
    # Logs include: service, version, request_id, user_id, validation_step

# request_id, user_id, and validation_step all automatically removed
logger.info("Request completed")
# Logs include: service, version only
```
Why This Library? Enhanced Context Management
While structlog provides excellent building blocks like bound_contextvars(), bind_contextvars(), and unbind_contextvars(), this library provides a more ergonomic, logger-centric API with additional capabilities:
What structlog Provides
```python
import structlog

logger = structlog.get_logger()

# Option 1: Use bound_contextvars context manager
with structlog.contextvars.bound_contextvars(request_id="123"):
    logger.info("processing")  # Has request_id

    # Problem: bound_contextvars doesn't clean up mid-block additions
    structlog.contextvars.bind_contextvars(step="validation")
    logger.info("validating")  # Has request_id, step

# request_id cleaned up, but step persists (leaked!)
logger.info("done")  # Still has step

# Option 2: Manual token management
tokens = structlog.contextvars.bind_contextvars(request_id="123")
try:
    logger.info("processing")
finally:
    structlog.contextvars.reset_contextvars(**tokens)  # Verbose cleanup
```
What This Library Adds
1. Logger-Centric API - Context management is a method on the logger object:
```python
from structlog_opinionated import get_logger

logger = get_logger(__name__)

# More intuitive: context is part of the logger
with logger.context(request_id="123"):
    logger.info("processing")
```
2. The .attach() Method - Add context mid-block with automatic cleanup:
```python
with logger.context(request_id="123"):
    logger.info("starting")

    # Attach more context - automatically cleaned up when context exits!
    logger.attach(step="validation", input_size=1024)
    logger.info("validating")  # Has request_id, step, input_size

# ALL variables cleaned up automatically (request_id, step, input_size)
logger.info("done")  # Clean slate
```
3. Proper Nested Context Restoration:
```python
with logger.context(operation="outer", depth=1):
    logger.info("outer")  # operation="outer", depth=1

    with logger.context(operation="inner", depth=2):
        logger.info("inner")  # operation="inner", depth=2

    logger.info("back")  # Restored to operation="outer", depth=1
```
Summary: Three Context Methods
| Method | Scope | Returns | Use Case |
|---|---|---|---|
| `.bind(**kw)` | Permanent | New logger | Application/function-level context |
| `.context(**kw)` | Block scope | None | Operation-scoped temporary context |
| `.attach(**kw)` | Current context block | None | Add context mid-operation |
Key Insight: .context() + .attach() together provide a complete solution for scoped context management with automatic cleanup, building on structlog's bound_contextvars() but handling the .attach() use case that bound_contextvars() doesn't address.
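To make those cleanup semantics concrete, here is a minimal sketch, assuming only structlog's public contextvars API, of how a scoped context with mid-block attachment could be built. The scoped_context helper is hypothetical and is not the library's actual implementation:

```python
import structlog
from contextlib import contextmanager

@contextmanager
def scoped_context(**kwargs):
    """Bind kwargs for the block; allow extra bindings that share its lifetime."""
    tokens = structlog.contextvars.bind_contextvars(**kwargs)
    attached = []  # Tokens from mid-block attach() calls

    def attach(**more):
        attached.append(structlog.contextvars.bind_contextvars(**more))

    try:
        yield attach
    finally:
        # Reset mid-block additions first, then the block-level bindings,
        # restoring any values the block had shadowed.
        for t in reversed(attached):
            structlog.contextvars.reset_contextvars(**t)
        structlog.contextvars.reset_contextvars(**tokens)
```

Resetting via the saved tokens, rather than unbinding keys, is what makes nested restoration work: when an inner binding is reset, any value it shadowed reappears.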
Configuration
Note: Configuration happens automatically on the first call to get_logger(). You only need to call setup() explicitly if you want to customize the configuration before getting a logger.
Output Modes
The library automatically detects the environment and chooses the appropriate output format:
- Interactive terminal (TTY): Colored console output for human readability
- Piped/redirected/production: JSON output for log aggregation
- File logging: Always uses JSON format (JSONL)
You can override the TTY detection with force_json=True.
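For illustration, a single JSON log line might look like the following (exact field names depend on the configured processors; the timestamp and call-site fields shown here are assumptions):

```json
{"timestamp": "2025-10-09T11:30:45.123456Z", "level": "info", "event": "Processing request", "request_id": "req_123", "filename": "app.py", "lineno": 42, "func_name": "handle_request"}
```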
Configure via code or environment variables:
Programmatic Configuration
```python
from structlog_opinionated import LogConfig, setup, get_logger

# Option 1: Explicit setup with custom config (call before get_logger)
config = LogConfig(
    level="DEBUG",
    debug={"main": True, "harness.processor": True},
    force_json=False,
    file_prefix="/var/log/myapp",  # Enable file logging
)
setup(config)
logger = get_logger(__name__)

# Option 2: Just use environment variables and let auto-setup handle it
#   export LOG_LEVEL=DEBUG
#   export LOG_FILE_PREFIX=/var/log/myapp
logger = get_logger(__name__)  # Auto-setup with env config
```
Environment Variables
All configuration options can be set via environment variables with the LOG_ prefix:
```bash
export LOG_LEVEL=DEBUG
export LOG_DEBUG__MAIN=1
export LOG_DEBUG__HARNESS_PROCESSOR=1
export LOG_FORCE_JSON=true
export LOG_FILE_PREFIX=/var/log/myapp
```
You can also use a .env file in your project root:
```bash
# .env
LOG_LEVEL=INFO
LOG_DEBUG__MAIN=1
LOG_DEBUG__API_HANDLER=1
LOG_FILE_PREFIX=/var/log/myapp
```
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| `level` | str | `"INFO"` | Minimum log level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| `debug` | dict | `{}` | Module-specific debug logging (e.g., `LOG_DEBUG__MAIN=1`) |
| `force_json` | bool | `False` | Force JSON output even in TTY/console environments |
| `file_prefix` | str \| None | `None` | File prefix for JSONL log files (e.g., `/var/log/myapp` creates `/var/log/myapp_2025-10-09_11-30-45.jsonl`) |
Module-Specific Debug Logging
The debug configuration allows you to enable debug-level logging for specific modules without changing the global log level. This is useful for debugging specific parts of your application in production.
Using Environment Variables:
```bash
# Enable debug logging for the "main" module
export LOG_DEBUG__MAIN=1

# Enable debug logging for the "harness.processor" module
# Note: Use double underscores to represent dots in module names
export LOG_DEBUG__HARNESS_PROCESSOR=1
```
Using Code:
```python
config = LogConfig(
    level="INFO",  # Global level is INFO
    debug={
        "main": True,
        "harness.processor": True,
    },
)
```
Note: The implementation of module-specific debug filtering will be added in a future update.
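For readers curious how such filtering could work, here is one possible approach: a hypothetical ModuleDebugFilter built on the stdlib. It is purely illustrative and not the library's pending implementation:

```python
import logging

# Hypothetical sketch: a logging.Filter that lets DEBUG records through
# only for explicitly enabled logger names (and their submodules).
class ModuleDebugFilter(logging.Filter):
    def __init__(self, debug_modules: dict[str, bool]) -> None:
        super().__init__()
        self.enabled = {name for name, on in debug_modules.items() if on}

    def filter(self, record: logging.LogRecord) -> bool:
        if record.levelno > logging.DEBUG:
            return True  # INFO and above pass through unchanged
        # DEBUG passes only for enabled modules or their children
        return any(
            record.name == name or record.name.startswith(name + ".")
            for name in self.enabled
        )

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)  # The filter, not the level, gates DEBUG
handler.addFilter(ModuleDebugFilter({"main": True, "harness.processor": True}))
```

Note that the handler must run at DEBUG level so that the filter, rather than the level threshold, decides which debug records survive.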
Context Management
Context variables persist across async boundaries and are automatically included in all log messages.
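The sketch below (illustrative, reusing the Quick Start setup) shows two concurrent handlers keeping separate request_id values: each asyncio task gets its own copy of the context, so bindings never leak between tasks.

```python
import asyncio
import structlog_opinionated

logger = structlog_opinionated.get_logger(__name__)

async def handle(request_id: str) -> None:
    with logger.context(request_id=request_id):
        logger.info("started")
        await asyncio.sleep(0.01)  # Bindings survive the await
        logger.info("finished")  # Still tagged with this task's request_id

async def main() -> None:
    # Each task sees only its own request_id, even while interleaved
    await asyncio.gather(handle("req_1"), handle("req_2"))

asyncio.run(main())
```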
Three Ways to Add Context
The logger provides three mechanisms for adding context, each optimized for different use cases:
.bind() - Returns New Logger with Permanent Bindings
IMPORTANT: .bind() does NOT modify the logger. It returns a new logger instance with additional context. This is an immutable/functional approach.
```python
import structlog_opinionated

structlog_opinionated.setup()

# Start with base logger
logger = structlog_opinionated.get_logger(__name__)

# .bind() returns a NEW logger - reassign the variable
logger = logger.bind(service="api", version="1.0")
logger.info("Event 1")  # Includes service, version

# Chain multiple bindings by reassigning
logger = logger.bind(environment="production")
logger.info("Event 2")  # Includes service, version, environment

# Pass logger around - each function can add its own bindings
def handle_request(logger, request_id):
    # Create request-specific logger (new instance)
    logger = logger.bind(request_id=request_id)
    logger.info("Processing")  # Includes all parent bindings + request_id
    return logger

logger = handle_request(logger, "req_123")
logger.info("Done")  # Still has request_id
```
.context() - Temporary Scoped Bindings with Auto-Cleanup
Use for operation-scoped context that should be automatically cleaned up:
```python
# Temporary bindings - automatically removed on exit
with logger.context(operation="validate", input_size=1024):
    logger.info("Starting validation")  # Includes operation, input_size
    await some_async_operation()  # Context persists across await
    logger.info("Validation complete")  # Still includes operation, input_size

# operation and input_size automatically removed
logger.info("Next operation")  # Only permanent bindings from .bind()

# Nested contexts work correctly with value restoration
with logger.context(operation="outer", depth=1):
    logger.info("Outer")  # operation="outer", depth=1

    with logger.context(operation="inner", depth=2):
        logger.info("Inner")  # operation="inner", depth=2

    logger.info("Back to outer")  # Restored: operation="outer", depth=1
```
.attach() - Add Context Mid-Block with Auto-Cleanup
Use when you need to add context partway through a .context() block. Attached values are automatically cleaned up when the enclosing .context() exits:
```python
with logger.context(request_id="req_123", user_id=42):
    logger.info("Request started")
    # Logs: request_id, user_id

    # Conditionally attach more context mid-block
    if needs_validation:
        logger.attach(validation_step="email", validator="regex")
        logger.info("Validating email")
        # Logs: request_id, user_id, validation_step, validator

    # Attach can be called multiple times
    logger.attach(stage="processing", progress=50)
    logger.info("Processing halfway")
    # Logs: request_id, user_id, validation_step, validator, stage, progress

# ALL attached variables cleaned up automatically
logger.info("Request complete")
# Logs: only permanent .bind() context (if any)
```
When to use .attach() vs .context():
- Use .context() when you know the context upfront at the start of a block
- Use .attach() when context becomes available conditionally or mid-operation
- Both clean up automatically when the .context() block exits
Note: .attach() can be called outside a .context() block, but then it behaves like structlog.contextvars.bind_contextvars() - the context persists until manually unbound or cleaned up by a future .context() exit.
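A short sketch of that fallback behavior (illustrative; the explicit cleanup uses structlog's standard unbind_contextvars helper):

```python
import structlog

# .attach() with no enclosing .context() block
logger.attach(deploy_id="d_42")
logger.info("deployed")  # Includes deploy_id

# Nothing scopes deploy_id, so remove it explicitly when done:
structlog.contextvars.unbind_contextvars("deploy_id")
logger.info("later")  # deploy_id no longer included
```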
Combining .bind(), .context(), and .attach()
```python
from structlog_opinionated import get_logger

# Application-level permanent bindings
logger = get_logger(__name__)
logger = logger.bind(service="api", version="1.0")

def process_task(logger, task_id):
    # Function-level permanent binding (new logger instance)
    logger = logger.bind(task_id=task_id)

    # Operation-level temporary bindings
    with logger.context(status="processing"):
        logger.info("Task started")
        # Logs: service, version, task_id, status

        # Attach context as it becomes available
        logger.attach(step="validation")
        result = validate_task()
        # Logs: service, version, task_id, status, step

        logger.attach(step="execution", input_hash=hash(result))
        execute_task(result)
        # Logs: service, version, task_id, status, step="execution", input_hash

    logger.info("Task completed")
    # Logs: service, version, task_id
    # (status, step, input_hash all cleaned up)

    return logger  # Return the task-bound logger if needed

logger = process_task(logger, "task_789")
```
Advanced Features
Stack Traces for Debugging
You can include the current call stack in your logs using the stack_info parameter. This is useful for debugging to see how execution reached a particular log statement, without needing an exception to occur.
```python
logger = structlog_opinionated.get_logger(__name__)

def complex_operation():
    # Include call stack to understand execution flow
    logger.info("Checkpoint reached", stack_info=True, step=3)
```
The log will include the full call stack showing the path of execution that led to this log statement. This is particularly useful when debugging complex async workflows or understanding unexpected code paths in production.
Note: Stack traces add significant overhead and verbosity - use sparingly and primarily for debugging specific issues.
Exception Logging
Exceptions are automatically captured and formatted as structured data when using exc_info=True or the .exception() method:
```python
try:
    result = risky_operation()
except ValueError:
    logger.error("Operation failed", exc_info=True, operation="risky")
    # Or use the convenience method:
    logger.exception("Operation failed", operation="risky")
```
Exception data includes:
- exc_type: Exception class name (e.g., "ValueError")
- exc_value: Exception message
- frames: List of stack frames with filename, line number, function name, and local variables
This structured format makes it easy to query and analyze exceptions in log aggregation tools.
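For illustration only, a captured exception might serialize along these lines; the exc_type, exc_value, and frames fields follow the list above, while the overall envelope and the locals formatting are assumptions:

```json
{
  "event": "Operation failed",
  "level": "error",
  "operation": "risky",
  "exc_type": "ValueError",
  "exc_value": "invalid input",
  "frames": [
    {
      "filename": "app.py",
      "lineno": 42,
      "func_name": "risky_operation",
      "locals": {"value": "'abc'"}
    }
  ]
}
```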
Vertical Console Output
For development environments where you want easier-to-read logs, you can use the VerticalConsoleRenderer which displays key-value pairs on separate lines:
```python
from structlog_opinionated import VerticalConsoleRenderer
import structlog
import logging.config
import sys

# Configure with vertical renderer
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "vertical": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processors": [
                structlog.stdlib.ProcessorFormatter.remove_processors_meta,
                VerticalConsoleRenderer(
                    colors=True,  # Enable colored output
                    pad_event=50,  # Pad event message
                    indent=" ",  # Indentation for fields
                ),
            ],
            "foreign_pre_chain": [...],  # Your shared processors
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "vertical",
            "stream": sys.stdout,
        },
    },
    "loggers": {
        "": {"handlers": ["console"], "level": "INFO"},
    },
})
```
Output example:
```
2025-10-09 12:34:56 [info     ] Processing request
    duration_ms: 45.2
    method: POST
    path: /api/users
    request_id: req_789
    status_code: 201
```
This is particularly useful for development and debugging where readability is more important than compact output.
Development
This project uses uv for dependency management.
```bash
# Install dependencies
uv sync

# Run tests
uv run pytest

# Format code
uv run ruff format

# Lint code
uv run ruff check
```
License
MIT