Real-time browser dashboard for Python logging — zero config, non-blocking
pulselog
A Python logging library that streams every log call to a live dashboard in your browser — with zero configuration.
pip install pulselog
Quick start
from pulselog import Logger
logger = Logger("my-app")
logger.info("training started", epoch=1)
logger.warning("learning rate too high", lr=0.1)
logger.save("epoch-1", {"acc": 0.91, "loss": 0.23}, status="DONE", progress=33)
logger.shutdown()
A browser tab opens automatically at http://localhost:5678 showing all logs in real time.
Why pulselog?
Standard logging setups do their I/O on the calling thread — every log call waits for a file write, HTTP request, or DB insert to complete. In tight loops (ML training, inference, data pipelines) that overhead kills throughput.
pulselog uses a non-blocking in-memory queue + daemon background worker. The main thread never waits. Log calls cost ~2µs. 1 million calls complete in under 2 seconds.
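The pattern can be sketched with the stdlib alone — the class and method names below are illustrative, not pulselog's actual internals:

```python
import threading
import time
from collections import deque

class NonBlockingLog:
    """Illustrative queue-plus-daemon-worker pattern (not pulselog's source)."""

    def __init__(self, drain_interval=0.05):
        self.queue = deque(maxlen=10_000)  # O(1) appends, never blocks
        self.delivered = []
        self.drain_interval = drain_interval
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._drain_loop, daemon=True)
        self._worker.start()

    def info(self, msg, **extra):
        # The main thread only appends; no I/O happens on this path.
        self.queue.append((msg, extra))

    def _drain_loop(self):
        while not self._stop.is_set():
            self._drain()
            time.sleep(self.drain_interval)

    def _drain(self):
        while self.queue:
            # A real worker would broadcast to the dashboard or write here.
            self.delivered.append(self.queue.popleft())

    def shutdown(self):
        self._stop.set()
        self._worker.join()
        self._drain()  # final drain so queued records are not lost

log = NonBlockingLog()
for i in range(1000):
    log.info("step", i=i)
log.shutdown()
```

`deque.append()` and `deque.popleft()` are thread-safe, so the producer and the worker need no explicit lock.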
Dashboard
The dashboard is a single self-contained HTML file — no build step, no CDN, no framework. It is served by the embedded server, and log updates are pushed to it over a WebSocket.
Logs tab:
- Colour-coded by level (DEBUG=gray, INFO=blue, WARNING=amber, ERROR/CRITICAL=red)
- Level filter + full-text search
- Virtual list rendering — handles 100k+ logs with zero browser lag
- Auto-scroll with manual scroll override
- Export all logs as JSON
Checkpoints tab:
- Progress bars for each checkpoint
- Overall progress = average of all checkpoint progress values
- Expandable JSON data viewer
- Status icons: DONE ✅, IN_PROGRESS 🟡, FAILED 🔴, SKIPPED ⚫
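The averaging rule for overall progress is simple enough to state in a few lines — this helper is illustrative, not part of the pulselog API:

```python
def overall_progress(checkpoints):
    """Overall progress = mean of per-checkpoint progress values (0-100).
    Checkpoints saved without a progress value are ignored."""
    values = [c["progress"] for c in checkpoints if c.get("progress") is not None]
    return sum(values) / len(values) if values else 0

print(overall_progress([
    {"name": "epoch-1", "progress": 100},
    {"name": "epoch-2", "progress": 50},
    {"name": "epoch-3", "progress": 0},
]))  # 50.0
```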
API
Initialisation
logger = Logger(
name, # shown in dashboard
host="localhost", # dashboard bind host
port=5678, # auto-increments if port is taken
auto_open=True, # open browser automatically
dashboard=True, # set False for production/CI
checkpoint_path=".pulselog/checkpoints.db",
level="DEBUG", # minimum capture level
worker_interval=0.05, # drain interval in seconds
)
Logging
logger.debug(msg, **extra)
logger.info(msg, **extra)
logger.warning(msg, **extra)
logger.error(msg, **extra)
logger.critical(msg, **extra)
# Extra kwargs appear as metadata in the dashboard
logger.info("request handled", user_id=42, latency_ms=12)
Checkpoints
logger.save(
name, # checkpoint identifier
data, # any JSON-serialisable dict
status="DONE", # "DONE"|"IN_PROGRESS"|"FAILED"|"SKIPPED"
note="", # human-readable description
progress=None # 0–100, shown as progress bar
)
result = logger.load("epoch-5") # → dict | None (never raises)
names = logger.checkpoints() # → list[str]
logger.delete_checkpoint("epoch-3")
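The load-returns-None contract (rather than raising on a missing name) can be mimicked with a dict-backed store — a sketch of the semantics only, not pulselog's SQLite-backed implementation:

```python
import json

class CheckpointStore:
    """Dict-backed sketch of the checkpoint semantics (the real store
    persists to the SQLite file at checkpoint_path)."""

    def __init__(self):
        self._data = {}

    def save(self, name, data, status="DONE", note="", progress=None):
        # Data must be JSON-serialisable; round-trip it to enforce that.
        self._data[name] = {
            "data": json.loads(json.dumps(data)),
            "status": status, "note": note, "progress": progress,
        }

    def load(self, name):
        entry = self._data.get(name)  # missing name -> None, never raises
        return entry["data"] if entry else None

    def checkpoints(self):
        return list(self._data)

store = CheckpointStore()
store.save("epoch-1", {"acc": 0.91}, progress=33)
print(store.load("epoch-1"))   # {'acc': 0.91}
print(store.load("missing"))   # None
```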
Utilities
logger.tag("phase-2") # group following logs under a label
logger.divider("epoch boundary") # insert visual divider in dashboard
logger.flush() # force-drain queue (call before exit)
logger.shutdown() # graceful teardown
stats = logger.stats() # operational metrics dict
Stats
{
"records_logged": int,
"records_dropped": int, # due to queue overflow
"queue_size": int,
"checkpoints_saved": int,
"dashboard_clients": int,
"uptime_seconds": float,
}
Configuration
Priority order (highest → lowest): Logger() kwargs > env vars > pulselog.toml > defaults
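Resolution amounts to merging the sources lowest-priority first so later updates win — a sketch under the assumption that each source has already been parsed into a dict:

```python
DEFAULTS = {"host": "localhost", "port": 5678, "level": "DEBUG"}

def resolve_config(kwargs, env, toml_cfg):
    """Merge lowest-priority first so later updates win:
    defaults < pulselog.toml < env vars < Logger() kwargs."""
    cfg = dict(DEFAULTS)
    cfg.update(toml_cfg)
    cfg.update(env)
    cfg.update(kwargs)
    return cfg

cfg = resolve_config(
    kwargs={"port": 9000},                             # Logger(port=9000)
    env={"level": "INFO"},                             # PULSELOG_LEVEL=INFO
    toml_cfg={"host": "0.0.0.0", "level": "WARNING"},  # pulselog.toml
)
print(cfg)  # {'host': '0.0.0.0', 'port': 9000, 'level': 'INFO'}
```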
Environment variables
PULSELOG_DASHBOARD=false
PULSELOG_HOST=0.0.0.0
PULSELOG_PORT=8080
PULSELOG_AUTO_OPEN=false
PULSELOG_CHECKPOINT_PATH=/data/checkpoints.db
PULSELOG_LEVEL=INFO
PULSELOG_WORKER_INTERVAL=0.01
pulselog.toml (place in CWD)
[pulselog]
host = "0.0.0.0"
port = 8080
auto_open = false
level = "INFO"
stdlib logging integration
import logging
from pulselog.handler import PulseHandler
logging.getLogger().addHandler(PulseHandler("my-app"))
logging.info("this appears in the pulselog dashboard")
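Under the hood a handler like this only has to forward each LogRecord into the queue — a hedged sketch of the idea (PulseHandler's real internals and signature may differ):

```python
import logging
from collections import deque

class QueueForwardHandler(logging.Handler):
    """Minimal stand-in for PulseHandler: emit() does an O(1) append
    to an in-memory queue instead of any file or network I/O."""

    def __init__(self, name):
        super().__init__()
        self.app_name = name
        self.queue = deque(maxlen=10_000)

    def emit(self, record):
        # Formatting is deferred; only the level and message are captured here.
        self.queue.append((record.levelname, record.getMessage()))

handler = QueueForwardHandler("my-app")
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("this would appear in the dashboard")
print(handler.queue[0])  # ('INFO', 'this would appear in the dashboard')
```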
Production usage
# In production: disable dashboard, keep checkpoints
logger = Logger("prod", dashboard=False, checkpoint_path="/data/checkpoints.db")
With dashboard=False:
- No threads are started
- No port is bound
- No browser is opened
- Checkpoint reads/writes still work
- Log calls return in ~100ns (level check only)
Performance
| Operation | Throughput |
|---|---|
| logger.info() call | ~2µs |
| 1 million log calls | <2s |
| Queue put() | O(1), <1µs |
| Dashboard at 100k logs | No lag (virtual list) |
Design
logger.info() ← O(1), non-blocking
│
▼
LogQueue ← deque(maxlen=10_000), thread-safe
│
▼ every 50ms
BackgroundWorker ← daemon thread
│
├─▶ DashboardServer.broadcast()
│ └─▶ WebSocket clients (all connected browsers)
│
└─▶ (additional handlers)
The queue uses collections.deque(maxlen=N) — when full, the oldest record is silently dropped (never blocks). Dropped records are counted in logger.stats().
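Note that deque(maxlen=N) evicts silently, so counting drops takes explicit bookkeeping — a sketch of that behaviour (a tiny maxlen for demonstration; the variable names are illustrative):

```python
from collections import deque

class BoundedQueue:
    """Drop-oldest sketch: deque(maxlen=N) evicts the oldest item on
    overflow, and evictions are counted for stats()."""

    def __init__(self, maxlen=3):
        self._q = deque(maxlen=maxlen)
        self.dropped = 0

    def put(self, item):
        if len(self._q) == self._q.maxlen:
            self.dropped += 1      # oldest record is about to be evicted
        self._q.append(item)       # O(1), never blocks

    def items(self):
        return list(self._q)

q = BoundedQueue(maxlen=3)
for i in range(5):
    q.put(i)
print(q.items(), q.dropped)  # [2, 3, 4] 2
```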
License
MIT
File details
Details for the file pulselog-0.1.0.tar.gz.
File metadata
- Download URL: pulselog-0.1.0.tar.gz
- Upload date:
- Size: 31.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 371364600ec7f6bb4192e3e7d22efefc975af9247227b625dd5102e2a36a0dc0 |
| MD5 | c85cc525ac27d0fcc18be199a1bd10dc |
| BLAKE2b-256 | bb7f01a514d059fd1ac864f611d7f79cc099c594dde137c91af7dafbe66d9c60 |
File details
Details for the file pulselog-0.1.0-py3-none-any.whl.
File metadata
- Download URL: pulselog-0.1.0-py3-none-any.whl
- Upload date:
- Size: 25.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6595545867355ec97b40ffb86d6ba6a88906beb17c160eec0224a69e6fc77eb4 |
| MD5 | e4429d5bceaaf8f4e64f9e8347310626 |
| BLAKE2b-256 | 4e1fb83f720846bd2fac742e7c7e6b860b4ab470438e29382f41ec00e1456235 |