
LogiScout Logger

Structured logging for Python services, with intelligent batching and zero-config request correlation.


Installation · Quick Start · Integrations · API Reference · Batching


Overview

logiscout-logger is a Python logging client for the LogiScout ingest platform. It is built on top of structlog and ships with first-class support for FastAPI, Flask, and Django — including automatic per-request correlation IDs and an intelligent batching layer that minimizes network overhead.

If you're already using structlog, the API will feel familiar. If you're not, the learning curve is small: init() once at startup, get_logger(__name__) everywhere else.

Highlights

  • Structured by default — every log carries a timestamp, level, logger name, and arbitrary metadata as JSON.
  • Intelligent batching — payloads are flushed when 200 logs accumulate or 30 seconds elapse, whichever comes first.
  • Automatic correlation — middleware tags every log emitted during a request with the same correlationId.
  • Framework-ready — drop-in middleware for ASGI (FastAPI, Starlette, Django ASGI) and WSGI (Flask, Django WSGI).
  • DEV / PROD modes — console-only in development, console + batched remote ingest in production.
  • Confidential logs — flag sensitive entries with send=False so they never leave the host.
  • Thread-safe — designed for concurrent web workers and high-throughput services.
  • Graceful shutdown — remaining logs are flushed automatically on process exit.

Installation

pip install logiscout-logger

Requirements

Dependency Version
Python >= 3.9
structlog >= 24.0.0
requests >= 2.28.0

Quick Start

from logiscout_logger import init, get_logger, PROD

# 1. Initialize once at app startup
init(
    api_token="your_api_key",
    service_name="my-service",
    env=PROD,
)

# 2. Get a logger anywhere in your codebase
logger = get_logger(__name__)

# 3. Log structured events
logger.info("User logged in", user_id=123)
logger.warning("Rate limit approaching", current=95, limit=100)
logger.error("Payment failed", order_id="abc-123", reason="insufficient_funds")

In DEV mode the same code prints to the console only — no network calls, no token required.

Framework Integrations

FastAPI

from fastapi import FastAPI
from logiscout_logger import init, get_logger, asgiConfiguration, PROD

app = FastAPI()

init(api_token="your_api_key", service_name="my-fastapi-app", env=PROD)
app.add_middleware(asgiConfiguration)

logger = get_logger("api")

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    logger.info("Fetching user", user_id=user_id)
    return {"user_id": user_id}

Flask

from flask import Flask
from logiscout_logger import init, get_logger, wsgiConfiguration, PROD

app = Flask(__name__)

init(api_token="your_api_key", service_name="my-flask-app", env=PROD)
app.wsgi_app = wsgiConfiguration(app.wsgi_app)

logger = get_logger("api")

@app.route("/users/<int:user_id>")
def get_user(user_id):
    logger.info("Fetching user", user_id=user_id)
    return {"user_id": user_id}

Django

1. Initialize in settings.py:

from logiscout_logger import init, PROD

init(api_token="your_api_key", service_name="my-django-app", env=PROD)

2. Apply middleware in wsgi.py:

import os
from django.core.wsgi import get_wsgi_application
from logiscout_logger import wsgiConfiguration

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

application = get_wsgi_application()
application = wsgiConfiguration(application)

For ASGI deployments (e.g. Uvicorn, Daphne), apply asgiConfiguration in asgi.py instead:

import os
from django.core.asgi import get_asgi_application
from logiscout_logger import asgiConfiguration

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

application = get_asgi_application()
application = asgiConfiguration(application)

3. Use in views:

from django.http import JsonResponse
from logiscout_logger import get_logger

logger = get_logger(__name__)

def my_view(request):
    logger.info("Processing request", user_id=request.user.id)
    return JsonResponse({"status": "ok"})

Environment Modes

Mode Console output Remote ingest Batching Notes
DEV  yes            no            no       Ideal for local development. No api_token required.
PROD yes            yes           yes      Logs are batched and shipped to the LogiScout endpoint.
from logiscout_logger import init, DEV, PROD

# Development — console only
init(api_token="...", service_name="my-service", env=DEV)

# Production — console + remote with batching
init(api_token="...", service_name="my-service", env=PROD)

Logging API

Levels

logger.debug("Detailed debug information")
logger.info("General information")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical error message")

Adding Metadata

Pass arbitrary keyword arguments — they are serialized into the structured log entry:

logger.info("Order created", order_id="123", total=99.99, currency="USD")

Bound Loggers

Bind context once and reuse it across calls:

user_logger = logger.bind(user_id=123, session_id="abc")
user_logger.info("User action", action="click")  # includes user_id and session_id

Confidential Logging

Use send=False to keep a log local to the host (still printed to the console, never transmitted):

logger.info("Password reset token generated", token="secret-token", send=False)
logger.error("Internal error details", stack_trace=trace, send=False)

This works on every level (debug, info, warning, error, critical).

Standalone Usage

The library can be used as a plain console logger without calling init():

from logiscout_logger import get_logger

logger = get_logger("my_script")
logger.info("Script started")
logger.warning("Disk space low", available_gb=1.5)

Nothing is sent to the network in this mode.

Batching

In PROD, outgoing log payloads are queued and flushed by the BatchManager:

  • Log-count trigger — flushes when total queued logs reach 200.
  • Time trigger — flushes every 30 seconds as long as the queue is non-empty.
  • Partial payloads — large requests are split across batches and re-stitched on the backend by correlationId.
  • Graceful shutdown — atexit flushes any remaining logs on a clean process exit.
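
The two flush triggers can be modeled as below. This is an illustrative sketch of the behavior described above, not the library's actual BatchManager; the class and method names here are assumptions, and the clock is injectable only to make the sketch testable:

```python
import threading
import time

class SketchBatchBuffer:
    """Illustrative buffer: flush at max_logs queued entries or after max_age seconds."""

    def __init__(self, flush_fn, max_logs=200, max_age=30.0, clock=time.monotonic):
        self._flush_fn = flush_fn      # callable that receives the batch (a list of events)
        self._max_logs = max_logs
        self._max_age = max_age
        self._clock = clock            # injectable clock, so the sketch is deterministic in tests
        self._lock = threading.Lock()  # guards the queue for concurrent web workers
        self._queue = []
        self._oldest = None            # timestamp of the first queued event

    def push(self, event):
        """Queue one log event; flush immediately if the count trigger fires."""
        with self._lock:
            if not self._queue:
                self._oldest = self._clock()
            self._queue.append(event)
            if self._should_flush():
                self._flush_locked()

    def tick(self):
        """Called periodically (e.g. by a timer thread) to enforce the time trigger."""
        with self._lock:
            if self._queue and self._should_flush():
                self._flush_locked()

    def _should_flush(self):
        return (len(self._queue) >= self._max_logs
                or self._clock() - self._oldest >= self._max_age)

    def _flush_locked(self):
        batch, self._queue = self._queue, []
        self._oldest = None
        self._flush_fn(batch)
```

With the defaults above, a burst of 200 logs flushes at once, while a trickle of a few logs still ships within 30 seconds of the first one being queued.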

For the full design, batch wire format, and tuning knobs, see BATCHING_SYSTEM.md.

API Reference

init(api_token, service_name, env)

Initialize the LogiScout logger. Call once at app startup.

Parameter Type Description
api_token str API token for authenticating with the LogiScout ingest endpoint.
service_name str Service identifier — applied to every log produced in this process.
env Environment DEV (console only) or PROD (console + batched remote ingest).

get_logger(name)

Return a logger instance.

logger = get_logger(__name__)

LogiScoutLogger

logger.debug(msg: str, send: bool = True, **metadata)
logger.info(msg: str, send: bool = True, **metadata)
logger.warning(msg: str, send: bool = True, **metadata)
logger.error(msg: str, send: bool = True, **metadata)
logger.critical(msg: str, send: bool = True, **metadata)
logger.bind(**context) -> LogiScoutLogger

Middleware

from logiscout_logger import asgiConfiguration, wsgiConfiguration

# ASGI — FastAPI, Starlette, Django (ASGI)
app.add_middleware(asgiConfiguration)

# WSGI — Flask, Django (WSGI)
app.wsgi_app = wsgiConfiguration(app.wsgi_app)

How It Works

┌────────────────────┐    ┌─────────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│  Application code  │ →  │   structlog chain   │ →  │   BatchManager   │ →  │   HTTPTransport  │
│  logger.info(...)  │    │ build_log_event,    │    │  200 logs / 30s  │    │   POST /ingest   │
│                    │    │ push_to_buffer, …   │    │  thread-safe     │    │   Bearer auth    │
└────────────────────┘    └─────────────────────┘    └──────────────────┘    └──────────────────┘
            │                                                                           ▲
            │                                                                           │
            └──────── ASGI / WSGI middleware adds correlationId ────────────────────────┘

Contributing

Issues and pull requests are welcome. Please open an issue first for non-trivial changes so we can align on direction.

  1. Fork the repository.
  2. Create a feature branch.
  3. Run the test suite (pytest).
  4. Submit a pull request describing the change and its motivation.

License

MIT © Abdur Rehman Kazim
