
Project description

sentry-struct-logger

Standardized structured (JSON) logging package with built-in Sentry integration for Python applications.

Features

  • Structured JSON logging - Outputs to stdout for log aggregation systems (ELK, Datadog, etc.)
  • Automatic Sentry integration - Error tracking and performance monitoring with zero configuration
  • Automatic trace correlation - Logs across your entire call stack share the same trace_id
  • FastAPI automatic request tracing - Each HTTP request gets a unique trace ID
  • Clean schema - Standard fields at top level, custom fields nested in details

Requirements

  • Python >= 3.8
  • FastAPI (required for automatic request tracing)

Installation

# Using uv
uv add sentry-struct-logger

# Or using pip
pip install sentry-struct-logger

# With Lambda support (for AWS Lambda functions)
uv add "python-sentry-logger-wrapper[lambda] @ git+https://github.com/HEAL-Engineering/python-sentry-logger-wrapper.git"

Quick Start

Basic Usage with FastAPI

import os

from fastapi import FastAPI
from python_sentry_logger_wrapper import get_logger

# Initialize logger with Sentry BEFORE creating FastAPI app
logger = get_logger(
    service_name="api-service",
    sentry_dsn=os.getenv("SENTRY_DSN"),
    sentry_environment="production"
)

app = FastAPI()

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    logger.info("Fetching user from API")
    user = await fetch_user(user_id)
    return user

async def fetch_user(user_id: int):
    # trace_id is propagated automatically; no need to pass it through
    logger.info("Querying database", user_id=user_id)
    return await db.query(...)  # db: your database client, defined elsewhere

What happens automatically:

  • Each HTTP request gets a unique trace_id
  • ALL logs during that request share the same trace_id
  • Logs appear in stdout as JSON AND in Sentry with full request context
  • Errors are automatically linked to the request that caused them
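Because every line is plain JSON on stdout, any downstream aggregation step can correlate a request's logs by `trace_id`. A minimal, package-independent sketch of that grouping (the sample records are abbreviated illustrations, not real output):

```python
import json
from collections import defaultdict

# Example stdout lines in the logger's schema (values abbreviated).
raw_lines = [
    '{"log_level": "INFO", "message": "Creating item", "trace_id": "abc123"}',
    '{"log_level": "INFO", "message": "Validating item", "trace_id": "abc123"}',
    '{"log_level": "ERROR", "message": "External service failed", "trace_id": "xyz789"}',
]

# Group parsed records by trace_id, as a log pipeline might.
by_trace = defaultdict(list)
for line in raw_lines:
    record = json.loads(line)
    by_trace[record["trace_id"]].append(record["message"])

print(by_trace["abc123"])  # → ['Creating item', 'Validating item']
```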

Multi-Layer Application Example

# API Layer
import os

from python_sentry_logger_wrapper import get_logger

logger = get_logger("example-api", sentry_dsn=os.getenv("SENTRY_DSN"))

@app.post("/items")
async def create_item(item: Item):
    logger.info("Creating item", item_count=len(item.data))
    result = await item_service.create(item)
    logger.info("Item created", item_id=result.id)
    return result

# Service Layer
logger = get_logger("example-api")

async def create(item: Item):
    logger.info("Validating item")
    await validate_data(item.data)

    logger.info("Processing external request")
    response = await external_service.process(item)

    if not response.success:
        logger.error("External service failed", reason=response.error_code)
        raise ServiceError(response.error_code)

    logger.info("Saving to database")
    return await db.items.create(item)

Console output (all logs share the same trace_id):

{"timestamp": "...", "log_level": "INFO", "logger": "example-api", "message": "Creating item", "trace_id": "abc123...", "span_id": "def456...", "details": {"item_count": 3}}
{"timestamp": "...", "log_level": "INFO", "logger": "example-api", "message": "Validating item", "trace_id": "abc123...", "span_id": "def456..."}
{"timestamp": "...", "log_level": "INFO", "logger": "example-api", "message": "Processing external request", "trace_id": "abc123...", "span_id": "def456..."}
{"timestamp": "...", "log_level": "INFO", "logger": "example-api", "message": "Saving to database", "trace_id": "abc123...", "span_id": "def456..."}

In Sentry:

  • Click the trace to see all logs in timeline
  • View request duration, endpoint, and status code
  • If an error occurs, see which request caused it with full context

Log Schema

Standard Fields (top-level)

  • timestamp - ISO 8601 UTC timestamp
  • log_level - INFO, WARNING, ERROR, etc.
  • logger - Your service identifier (the name passed to get_logger())
  • message - Log message
  • trace_id - Distributed tracing ID (only present when Sentry is enabled)
  • span_id - Span ID for the current operation (only present when Sentry is enabled)
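For readers curious how a schema like this can be produced, here is a minimal standard-library sketch (illustrative only, not this package's implementation) of a `logging.Formatter` that emits the top-level fields:

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Illustrative formatter emitting the standard top-level fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "log_level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("api-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Fetching user from API")
```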

Custom Fields (nested under details)

Any additional fields you pass are automatically nested:

logger.info("User login", user_id=123, ip="10.0.1.100", method="oauth")
# Output: {..., "details": {"user_id": 123, "ip": "10.0.1.100", "method": "oauth"}}
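One way such nesting can be implemented (a sketch for illustration, not the package's actual internals) is to collect any keyword argument that is not a standard field into a `details` object:

```python
import json

# Standard top-level fields from the schema above; anything else is nested.
STANDARD_FIELDS = {"timestamp", "log_level", "logger", "message", "trace_id", "span_id"}

def build_record(message: str, **extra) -> dict:
    record = {"log_level": "INFO", "message": message}
    details = {k: v for k, v in extra.items() if k not in STANDARD_FIELDS}
    if details:
        record["details"] = details
    return record

print(json.dumps(build_record("User login", user_id=123, method="oauth")))
# → {"log_level": "INFO", "message": "User login", "details": {"user_id": 123, "method": "oauth"}}
```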

Sentry Configuration

Get Your DSN

  1. Create a project at sentry.io
  2. Copy your DSN from Settings → Client Keys
  3. Set it as an environment variable: export SENTRY_DSN="https://..."

Free Tier

Sentry offers 5,000 errors/events per month free - perfect for small projects.

Configuration Options

import logging

logger = get_logger(
    service_name="my-service",
    log_level=logging.INFO,  # Minimum log level for stdout
    sentry_dsn="https://...",  # Optional: enables Sentry
    sentry_environment="production",  # Optional: environment tag
    sentry_sample_rate=0.1  # Optional: sample 10% of traces (reduces costs)
)

What Gets Sent to Sentry

  • ERROR/CRITICAL logs - Sent as searchable events
  • INFO/WARNING logs - Sent as breadcrumbs (attached to errors for context)
  • All custom fields - Searchable in Sentry UI (e.g., details.user_id:123)
  • Request context - Automatic with FastAPI (URL, method, headers, duration)

AWS Lambda Usage

For AWS Lambda functions, enable the Lambda integration for automatic timeout warnings and Lambda-specific context:

from typing import Optional
from pydantic_settings import BaseSettings
from python_sentry_logger_wrapper import get_logger

class Settings(BaseSettings):
    """Lambda configuration using Pydantic BaseSettings."""
    service_name: str = "my-lambda"
    sentry_dsn: Optional[str] = None
    environment: str = "unknown"

    class Config:
        env_prefix = ""  # or use a prefix like "LAMBDA_"

settings = Settings()

logger = get_logger(
    service_name=settings.service_name,
    sentry_dsn=settings.sentry_dsn,
    sentry_environment=settings.environment,
    lambda_integration=True,       # Enables AwsLambdaIntegration
    lambda_timeout_warning=True,   # Warn before Lambda timeout (default: True)
)

def lambda_handler(event, context):
    logger.info("Processing event", event_type=event.get("triggerSource"))
    # Your Lambda logic here
    return {"statusCode": 200}

What happens automatically:

  • Lambda context (function name, memory, request ID) added to logs
  • Timeout warnings before Lambda times out
  • Errors captured with full Lambda context
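The timeout warning relies on information the Lambda runtime already provides. As a rough sketch (the 5-second threshold and helper names are illustrative assumptions, not the package's actual internals), a check like this can be built on the standard `context.get_remaining_time_in_millis()` method:

```python
# Illustrative threshold: warn when less than 5 seconds remain.
WARNING_THRESHOLD_MS = 5_000

def maybe_warn_about_timeout(context, log) -> bool:
    """Emit a warning if the Lambda invocation is close to its deadline."""
    remaining = context.get_remaining_time_in_millis()
    if remaining < WARNING_THRESHOLD_MS:
        log(f"Lambda approaching timeout: {remaining} ms remaining")
        return True
    return False

# Stub context for local demonstration; real handlers receive this from AWS.
class FakeContext:
    def get_remaining_time_in_millis(self) -> int:
        return 3_000

warnings = []
maybe_warn_about_timeout(FakeContext(), warnings.append)
print(warnings)  # → ['Lambda approaching timeout: 3000 ms remaining']
```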

Advanced Usage

Exception Handling

try:
    result = await process_data(item)
except ProcessingError as e:
    logger.error(
        "Data processing failed",
        error_type=type(e).__name__,
        item_id=item.id,
        exc_info=True  # Includes full stack trace
    )
    raise

Different Log Levels

logger.debug("Detailed debugging info", query="SELECT * FROM users")
logger.info("Normal operation", status="healthy")
logger.warning("Degraded performance", latency_ms=2500)
logger.error("Operation failed", retry_count=3)
logger.critical("System down", reason="database_unavailable")

FAQ

Q: Do I need to pass trace_id manually through my functions? A: No! It's automatically propagated through your entire call stack via Sentry's tracing.

Q: Can I use this without Sentry? A: Yes! Just omit sentry_dsn and you'll get JSON logs to stdout only.

Q: Does Sentry integration affect my JSON stdout logs? A: No, logs are sent to both Sentry AND stdout independently. Your log aggregation system still works.

Q: How do I search logs in Sentry? A: Custom fields are under details, so search like: details.user_id:123 or details.transaction_id:txn_*

Q: Can I use this without FastAPI? A: Yes, but automatic request tracing requires FastAPI. Without it, you'll need to manage trace context manually.
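For non-FastAPI code, one way to manage trace context manually is a `contextvars`-based scheme. The helpers below are hypothetical, stdlib-only illustrations of the idea, not part of this package's API:

```python
import uuid
from contextvars import ContextVar

# Hypothetical helper: carry a trace_id through (async) call stacks
# via a context variable instead of passing it as a parameter.
current_trace_id: ContextVar[str] = ContextVar("current_trace_id", default="")

def start_trace() -> str:
    """Begin a new logical trace, e.g. at the top of a worker job."""
    trace_id = uuid.uuid4().hex
    current_trace_id.set(trace_id)
    return trace_id

def annotate(record: dict) -> dict:
    """Attach the active trace_id to a log record, if one is set."""
    trace_id = current_trace_id.get()
    if trace_id:
        record["trace_id"] = trace_id
    return record

start_trace()
print(annotate({"message": "Querying database"}))
```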

License

MIT

Project details


Download files


Source Distribution

sentry_struct_logger-0.1.0.tar.gz (12.3 kB)

Uploaded Source

Built Distribution


sentry_struct_logger-0.1.0-py3-none-any.whl (10.8 kB)

Uploaded Python 3

File details

Details for the file sentry_struct_logger-0.1.0.tar.gz.

File metadata

  • Download URL: sentry_struct_logger-0.1.0.tar.gz
  • Upload date:
  • Size: 12.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for sentry_struct_logger-0.1.0.tar.gz

  • SHA256: bad11fdefe39179491233a25a926760a0b40b91a74302ded18a533333e2b5055
  • MD5: 7a928f657a692caf94324f1a7daec722
  • BLAKE2b-256: e6e2706550b9af74019d9377d3134cdcbc7c7c565673e334498794d1dbe2952f


Provenance

The following attestation bundles were made for sentry_struct_logger-0.1.0.tar.gz:

Publisher: publish.yml on HEAL-Engineering/python-sentry-logger-wrapper

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file sentry_struct_logger-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for sentry_struct_logger-0.1.0-py3-none-any.whl

  • SHA256: 2fd532f24a484ce4afcad8583a6795f4d85cabe34932339f1c5ff7cc43c066ce
  • MD5: 59b098372f46b6f13f03d31a79857d76
  • BLAKE2b-256: c5295e3ca15613eb70f19cb0e8312058b1daec778447375668baffbfd667f854


Provenance

The following attestation bundles were made for sentry_struct_logger-0.1.0-py3-none-any.whl:

Publisher: publish.yml on HEAL-Engineering/python-sentry-logger-wrapper

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
