
Real-time browser dashboard for Python logging — zero config, non-blocking


pulselog

A Python logging library that streams every logger.info() to a live dashboard in your browser — with zero configuration.

pip install pulselog

Quick start

from pulselog import Logger

logger = Logger("my-app")

logger.info("training started", epoch=1)
logger.warning("learning rate too high", lr=0.1)
logger.save("epoch-1", {"acc": 0.91, "loss": 0.23}, status="DONE", progress=33)
logger.shutdown()

A browser tab opens automatically at http://localhost:5678 showing all logs in real time.

Why pulselog?

Standard logging solutions block the main thread on every log call — waiting for a file write, HTTP request, or DB insert. In tight loops (ML training, inference, data pipelines) this kills performance.

pulselog uses a non-blocking in-memory queue + daemon background worker. The main thread never waits. Log calls cost ~2µs. 1 million calls complete in under 2 seconds.
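The pattern can be sketched in a few lines — this is an illustration of the bounded-queue-plus-daemon-worker idea, not pulselog's actual internals:

```python
import threading
import time
from collections import deque

queue = deque(maxlen=10_000)    # bounded: when full, the oldest record is dropped

def log(msg):
    queue.append(msg)           # O(1) append; the caller never waits

def worker():
    while True:
        while queue:
            record = queue.popleft()
            # ... ship record to the dashboard / handlers ...
        time.sleep(0.05)        # drain interval

threading.Thread(target=worker, daemon=True).start()
log("training started")         # returns immediately
```

Because the thread is a daemon, it never keeps the process alive; the cost to the caller is one deque append.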

Dashboard

The dashboard is a single self-contained HTML file, with log updates streamed in over a WebSocket — no build step, no CDN, no framework.

Logs tab:

  • Colour-coded by level (DEBUG=gray, INFO=blue, WARNING=amber, ERROR/CRITICAL=red)
  • Level filter + full-text search
  • Virtual list rendering — handles 100k+ logs with zero browser lag
  • Auto-scroll with manual scroll override
  • Export all logs as JSON

Checkpoints tab:

  • Progress bars for each checkpoint
  • Overall progress = average of all checkpoint progress values
  • Expandable JSON data viewer
  • Status icons: DONE ✅, IN_PROGRESS 🟡, FAILED 🔴, SKIPPED ⚫
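For example, with three checkpoints at 100, 100, and 33 percent (hypothetical values), the overall bar shows their mean:

```python
progresses = [100, 100, 33]                 # progress of each checkpoint
overall = sum(progresses) / len(progresses) # overall bar value
print(round(overall, 1))                    # 77.7
```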

API

Initialisation

logger = Logger(
    name,                          # shown in dashboard
    host="localhost",              # dashboard bind host
    port=5678,                     # auto-increments if port is taken
    auto_open=True,                # open browser automatically
    dashboard=True,                # set False for production/CI
    checkpoint_path=".pulselog/checkpoints.db",
    level="DEBUG",                 # minimum capture level
    worker_interval=0.05,          # drain interval in seconds
)

Logging

logger.debug(msg, **extra)
logger.info(msg, **extra)
logger.warning(msg, **extra)
logger.error(msg, **extra)
logger.critical(msg, **extra)

# Extra kwargs appear as metadata in the dashboard
logger.info("request handled", user_id=42, latency_ms=12)

Checkpoints

logger.save(
    name,                # checkpoint identifier
    data,                # any JSON-serialisable dict
    status="DONE",       # "DONE"|"IN_PROGRESS"|"FAILED"|"SKIPPED"
    note="",             # human-readable description
    progress=None        # 0–100, shown as progress bar
)

result = logger.load("epoch-5")     # → dict | None (never raises)
names  = logger.checkpoints()       # → list[str]
logger.delete_checkpoint("epoch-3")
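A common use of the load/save pair is resuming work across restarts. The sketch below illustrates that pattern with an in-memory stand-in for the checkpoint store (checkpoint name and fields are illustrative; real code would call logger.save and logger.load as above):

```python
# Stand-in for logger.save / logger.load, to show the resume pattern.
_store = {}

def save(name, data, status="DONE", progress=None):
    _store[name] = {"data": data, "status": status, "progress": progress}

def load(name):
    entry = _store.get(name)    # mirrors load(): returns dict | None, never raises
    return entry["data"] if entry else None

start_epoch = 0
last = load("latest")
if last is not None:            # resume after the last completed epoch
    start_epoch = last["epoch"] + 1

total = 10
for epoch in range(start_epoch, total):
    # ... one unit of work ...
    save("latest", {"epoch": epoch},
         status="IN_PROGRESS",
         progress=int((epoch + 1) / total * 100))

print(load("latest"))           # {'epoch': 9}
```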

Utilities

logger.tag("phase-2")            # group following logs under a label
logger.divider("epoch boundary") # insert a visual divider in the dashboard
logger.flush()                   # force-drain queue (call before exit)
logger.shutdown()                # graceful teardown
stats = logger.stats()           # operational metrics dict

Stats

{
    "records_logged":   int,
    "records_dropped":  int,     # due to queue overflow
    "queue_size":       int,
    "checkpoints_saved": int,
    "dashboard_clients": int,
    "uptime_seconds":   float,
}
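One practical use of the stats dict is checking for queue overflow after a hot loop. A small helper sketch (the function name and message are illustrative, not part of the API), fed a sample stats dict shaped like the one above:

```python
def check_drops(stats):
    """Return a summary string if records were dropped, else None."""
    dropped = stats["records_dropped"]
    if dropped == 0:
        return None
    return f"dropped {dropped}/{stats['records_logged']} records"

print(check_drops({"records_logged": 1_000_000, "records_dropped": 120}))
# dropped 120/1000000 records
```

A non-zero drop count suggests lowering worker_interval or raising the queue bound.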

Configuration

Priority order (highest → lowest): Logger() kwargs > env vars > pulselog.toml > defaults

Environment variables

PULSELOG_DASHBOARD=false
PULSELOG_HOST=0.0.0.0
PULSELOG_PORT=8080
PULSELOG_AUTO_OPEN=false
PULSELOG_CHECKPOINT_PATH=/data/checkpoints.db
PULSELOG_LEVEL=INFO
PULSELOG_WORKER_INTERVAL=0.01

pulselog.toml (place in CWD)

[pulselog]
host = "0.0.0.0"
port = 8080
auto_open = false
level = "INFO"

stdlib logging integration

import logging
from pulselog.handler import PulseHandler

root = logging.getLogger()
root.setLevel(logging.INFO)   # root defaults to WARNING; without this, INFO is dropped
root.addHandler(PulseHandler("my-app"))
logging.info("this appears in the pulselog dashboard")

Production usage

# In production: disable dashboard, keep checkpoints
logger = Logger("prod", dashboard=False, checkpoint_path="/data/checkpoints.db")

With dashboard=False:

  • No threads are started
  • No port is bound
  • No browser is opened
  • Checkpoint reads/writes still work
  • Log calls return in ~100ns (level check only)
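The ~100ns figure is what a level-gate early return costs. A sketch of that pattern (not pulselog's source — class and attribute names here are illustrative):

```python
LEVELS = {"DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}

class GatedLogger:
    def __init__(self, level="DEBUG"):
        self._threshold = LEVELS[level]
        self.enqueued = 0                 # stands in for the real queue

    def _log(self, level_no, msg, **extra):
        if level_no < self._threshold:
            return                        # early return: just an int comparison
        self.enqueued += 1                # real code would append a record here

    def debug(self, msg, **extra):   self._log(10, msg, **extra)
    def info(self, msg, **extra):    self._log(20, msg, **extra)
    def warning(self, msg, **extra): self._log(30, msg, **extra)

log = GatedLogger(level="WARNING")
log.debug("skipped")
log.info("skipped")
log.warning("kept")
print(log.enqueued)                       # 1
```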

Performance

Operation                 Cost
logger.info() call        ~2µs
1 million log calls       <2s
Queue put()               O(1), <1µs
Dashboard at 100k logs    no lag (virtual list)

Design

logger.info()           ← O(1), non-blocking
     │
     ▼
  LogQueue              ← deque(maxlen=10_000), thread-safe
     │
     ▼ every 50ms
BackgroundWorker        ← daemon thread
     │
     ├─▶ DashboardServer.broadcast()
     │       └─▶ WebSocket clients (all connected browsers)
     │
     └─▶ (additional handlers)

The queue uses collections.deque(maxlen=N) — when full, the oldest record is silently dropped (never blocks). Dropped records are counted in logger.stats().
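The drop-oldest behaviour comes straight from deque's maxlen semantics, shown here with a tiny capacity:

```python
from collections import deque

q = deque(maxlen=3)     # tiny bound to make the overflow visible
for i in range(5):
    q.append(i)         # appending 3 and 4 silently evicts 0 and 1

print(list(q))          # [2, 3, 4]
```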

License

MIT
