pulselog

Real-time browser dashboard for Python logging — zero config, non-blocking

A Python logging library that streams every logger.info() to a live dashboard in your browser — with zero configuration.

pip install pulselog

Quick start

from pulselog import Logger

logger = Logger("my-app")

logger.info("training started", epoch=1)
logger.warning("learning rate too high", lr=0.1)
logger.save("epoch-1", {"acc": 0.91, "loss": 0.23}, status="DONE", progress=33)
logger.shutdown()

A browser tab opens automatically at http://localhost:5678 showing all logs in real time.

Why pulselog?

Standard logging solutions block the main thread on every log call — waiting for a file write, HTTP request, or DB insert. In tight loops (ML training, inference, data pipelines) this kills performance.

pulselog uses a non-blocking in-memory queue drained by a daemon background worker. The main thread never waits: a log call costs ~2µs, and one million calls complete in under 2 seconds.

Dashboard

The dashboard is a single self-contained HTML file with live updates pushed over a WebSocket: no build step, no CDN, no framework.

Logs tab:

  • Colour-coded by level (DEBUG=gray, INFO=blue, WARNING=amber, ERROR/CRITICAL=red)
  • Level filter + full-text search
  • Virtual list rendering — handles 100k+ logs with zero browser lag
  • Auto-scroll with manual scroll override
  • Export all logs as JSON

Checkpoints tab:

  • Progress bars for each checkpoint
  • Overall progress = average of all checkpoint progress values
  • Expandable JSON data viewer
  • Status icons: DONE ✅, IN_PROGRESS 🟡, FAILED 🔴, SKIPPED ⚫
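The overall figure is just the arithmetic mean of the per-checkpoint progress values:

```python
# Overall progress is the mean of all checkpoint progress values (0-100).
progress = {"epoch-1": 100, "epoch-2": 33, "epoch-3": 0}

overall = sum(progress.values()) / len(progress)
print(round(overall, 1))  # 44.3
```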

API

Initialisation

logger = Logger(
    name,                          # shown in dashboard
    host="localhost",              # dashboard bind host
    port=5678,                     # auto-increments if port is taken
    auto_open=True,                # open browser automatically
    dashboard=True,                # set False for production/CI
    checkpoint_path=".pulselog/checkpoints.db",
    level="DEBUG",                 # minimum capture level
    worker_interval=0.05,          # drain interval in seconds
)

Logging

logger.debug(msg, **extra)
logger.info(msg, **extra)
logger.warning(msg, **extra)
logger.error(msg, **extra)
logger.critical(msg, **extra)

# Extra kwargs appear as metadata in the dashboard
logger.info("request handled", user_id=42, latency_ms=12)

Checkpoints

logger.save(
    name,                # checkpoint identifier
    data,                # any JSON-serialisable dict
    status="DONE",       # "DONE"|"IN_PROGRESS"|"FAILED"|"SKIPPED"
    note="",             # human-readable description
    progress=None        # 0–100, shown as progress bar
)

result = logger.load("epoch-5")     # → dict | None (never raises)
names  = logger.checkpoints()       # → list[str]
logger.delete_checkpoint("epoch-3")
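Because logger.load() returns None rather than raising, resume logic needs no try/except. A minimal in-memory sketch of those semantics (an illustration only, not pulselog's database-backed store):

```python
class CheckpointStore:
    """Dict-backed illustration of pulselog's checkpoint semantics."""

    def __init__(self):
        self._data = {}

    def save(self, name, data, status="DONE", note="", progress=None):
        self._data[name] = {"data": data, "status": status,
                            "note": note, "progress": progress}

    def load(self, name):
        # dict | None: a missing checkpoint never raises
        entry = self._data.get(name)
        return entry["data"] if entry else None

    def checkpoints(self):
        return list(self._data)

store = CheckpointStore()
store.save("epoch-1", {"acc": 0.91}, progress=33)

print(store.load("epoch-5"))   # None (missing, no exception)
print(store.load("epoch-1"))   # {'acc': 0.91}
```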

Utilities

logger.tag("phase-2")            # group following logs under a label
logger.divider("epoch boundary") # insert visual divider in dashboard
logger.flush()                   # force-drain queue (call before exit)
logger.shutdown()                # graceful teardown
stats = logger.stats()           # operational metrics dict

Stats

{
    "records_logged":   int,
    "records_dropped":  int,     # due to queue overflow
    "queue_size":       int,
    "checkpoints_saved": int,
    "dashboard_clients": int,
    "uptime_seconds":   float,
}
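A typical metric derived from these fields is the drop rate; drop_rate below is a hypothetical helper, not part of the library:

```python
def drop_rate(stats):
    """Fraction of records lost to queue overflow (0.0 if nothing logged)."""
    total = stats["records_logged"] + stats["records_dropped"]
    return stats["records_dropped"] / total if total else 0.0

stats = {"records_logged": 9_900, "records_dropped": 100,
         "queue_size": 0, "checkpoints_saved": 3,
         "dashboard_clients": 1, "uptime_seconds": 42.0}
print(drop_rate(stats))  # 0.01
```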

Configuration

Priority order (highest → lowest): Logger() kwargs > env vars > pulselog.toml > defaults

Environment variables

PULSELOG_DASHBOARD=false
PULSELOG_HOST=0.0.0.0
PULSELOG_PORT=8080
PULSELOG_AUTO_OPEN=false
PULSELOG_CHECKPOINT_PATH=/data/checkpoints.db
PULSELOG_LEVEL=INFO
PULSELOG_WORKER_INTERVAL=0.01

pulselog.toml (place in CWD)

[pulselog]
host = "0.0.0.0"
port = 8080
auto_open = false
level = "INFO"

stdlib logging integration

import logging
from pulselog.handler import PulseHandler

logging.getLogger().addHandler(PulseHandler("my-app"))
logging.info("this appears in the pulselog dashboard")
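A handler like this only needs to move records off the hot path; a minimal sketch of the pattern (not PulseHandler's actual implementation):

```python
import logging
from collections import deque

class QueueForwardHandler(logging.Handler):
    """Forwards stdlib records into a bounded queue instead of doing I/O."""

    def __init__(self, maxlen=10_000):
        super().__init__()
        self.queue = deque(maxlen=maxlen)  # overflow drops oldest, never blocks

    def emit(self, record):
        self.queue.append(self.format(record))

handler = QueueForwardHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("queued, not written")

print(handler.queue[0])  # WARNING queued, not written
```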

Production usage

# In production: disable dashboard, keep checkpoints
logger = Logger("prod", dashboard=False, checkpoint_path="/data/checkpoints.db")

With dashboard=False:

  • No threads are started
  • No port is bound
  • No browser is opened
  • Checkpoint reads/writes still work
  • Log calls return in ~100ns (level check only)
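That fast path amounts to a single integer comparison before any other work happens. A sketch of the idea using stdlib-style numeric levels (an illustration, not pulselog's source):

```python
LEVELS = {"DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}

class GatedLogger:
    """With the dashboard disabled, a log call is only a level check."""

    def __init__(self, level="DEBUG"):
        self.threshold = LEVELS[level]
        self.sink = []

    def info(self, msg, **extra):
        if LEVELS["INFO"] < self.threshold:
            return                    # below threshold: return immediately
        self.sink.append((msg, extra))

log = GatedLogger(level="WARNING")
log.info("skipped")                   # cheap no-op, nothing recorded
print(len(log.sink))                  # 0
```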

Performance

Operation                  Throughput
logger.info() call         ~2µs
1 million log calls        <2s
Queue put()                O(1), <1µs
Dashboard at 100k logs     no lag (virtual list)

Design

logger.info()           ← O(1), non-blocking
     │
     ▼
  LogQueue              ← deque(maxlen=10_000), thread-safe
     │
     ▼ every 50ms
BackgroundWorker        ← daemon thread
     │
     ├─▶ DashboardServer.broadcast()
     │       └─▶ WebSocket clients (all connected browsers)
     │
     └─▶ (additional handlers)

The queue uses collections.deque(maxlen=N) — when full, the oldest record is silently dropped (never blocks). Dropped records are counted in logger.stats().
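The whole pipeline fits in a few lines of stdlib Python; the following is a sketch of the pattern (bounded deque plus daemon worker), not pulselog's actual code:

```python
import threading, time
from collections import deque

queue = deque(maxlen=10_000)      # full queue drops oldest, never blocks
delivered, stop = [], threading.Event()

def log(msg):                     # what logger.info() does, in spirit
    queue.append(msg)             # O(1) append, returns immediately

def worker():                     # daemon thread draining the queue
    while not stop.is_set() or queue:
        while queue:
            delivered.append(queue.popleft())  # → e.g. broadcast to clients
        time.sleep(0.05)          # drain interval (worker_interval)

t = threading.Thread(target=worker, daemon=True)
t.start()

for i in range(1_000):
    log(f"record {i}")            # main thread never waits

stop.set(); t.join()              # graceful teardown, like logger.shutdown()
print(len(delivered))             # 1000
```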

License

MIT
