# pulselog

Real-time browser dashboard for Python logging — zero config, non-blocking.

Non-blocking Python logger with a live browser dashboard.

```bash
pip install pulselog
```

Every `log.info()` call costs ~2.7µs. Zero config. The browser opens automatically.
## Benchmark

Tested on Python 3.12.9, Windows, dashboard disabled (`dashboard=False`).
| Scenario | Throughput | Notes |
|---|---|---|
| Single-thread burst | 262.8k / sec | p50=2.7µs · p99=7.9µs · p99.9=87.9µs |
| Sustained (5s) | 290.4k / sec | 1.45M records logged |
| Queue saturation | 318.1k / sec | 0 drops — worker drained fast enough |
| Realistic (info + save + warn + error) | 160.1k / sec | Mixed call types with kwargs |
| Multi-thread (8 threads) | 41.3k / sec | 0 drops — GIL is the ceiling, not PulseLog |
A typical ML training loop logs 10–100 records/sec. Even PulseLog's slowest benchmarked scenario (41.3k/sec, multi-threaded) handles 413× the upper end of that load.
## v0.1.1 → v0.1.2 improvements

| Metric | v0.1.1 | v0.1.2 | Change |
|---|---|---|---|
| Single-thread throughput | 207k / sec | 263k / sec | +27% |
| p99 latency | 11.9 µs | 7.9 µs | −34% |
| Dropped records | 16,011 | 0 | fixed |
## Quick start

```python
from pulselog import Logger

log = Logger("my-app")

log.info("training started", epoch=1)
log.warning("learning rate too high", lr=0.1)
log.save("epoch-1", {"acc": 0.91, "loss": 0.23}, status="DONE", progress=33)

log.shutdown()
```
A browser tab opens at `http://localhost:5678` and streams every log in real time.
## Why pulselog?
Standard logging blocks the calling thread on every write — waiting for a file, a socket, or a database. In tight loops (ML training, data pipelines, inference servers) this adds up fast.
pulselog never blocks. Every log call enqueues a record in O(1) and returns immediately. A daemon worker drains the queue every 10ms and pushes batches to the dashboard over WebSocket.
```
log.info()  ← O(1), ~2.7µs, never blocks
    │
    ▼
LogQueue    ← deque(maxlen=100_000), thread-safe, lock-free wake
    │
    ▼ every 10ms (adaptive — halves under load)
BackgroundWorker ← daemon thread
    │
    ├──▶ DashboardServer.broadcast() ← WebSocket → live browser
    │
    └──▶ (custom handlers)
```
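The enqueue/drain pattern in the diagram can be sketched with the stdlib alone. This is an illustrative miniature, not pulselog's actual source: `TinyQueue` and `worker` are hypothetical names, but the mechanics (O(1) append under a lock, wake only on the empty→non-empty transition, a daemon thread draining in batches) match the description above.

```python
# Minimal sketch of the non-blocking enqueue + background drain pattern.
import threading
import time
from collections import deque

class TinyQueue:
    def __init__(self, maxlen=100_000):
        self._dq = deque(maxlen=maxlen)
        self._lock = threading.Lock()
        self.wake = threading.Event()

    def put(self, record):
        # O(1): append under the lock and return immediately.
        with self._lock:
            was_empty = not self._dq
            self._dq.append(record)
        if was_empty:
            # Signal only on empty -> non-empty, not on every put().
            self.wake.set()

    def drain(self):
        # Take the whole batch at once; the worker processes it outside the lock.
        with self._lock:
            batch = list(self._dq)
            self._dq.clear()
            self.wake.clear()
        return batch

def worker(q, sink, stop, interval=0.01):
    # Daemon loop: wake on new records, or poll every `interval` seconds.
    while not stop.is_set():
        q.wake.wait(timeout=interval)
        for rec in q.drain():
            sink.append(rec)

q = TinyQueue()
sink = []
stop = threading.Event()
t = threading.Thread(target=worker, args=(q, sink, stop), daemon=True)
t.start()

for i in range(1000):
    q.put({"msg": "hello", "i": i})   # returns immediately, never blocks

time.sleep(0.1)                        # give the worker time to drain
stop.set()
t.join(timeout=1)
print(len(sink))                       # 1000: every record delivered
```

The caller's cost is a lock acquisition and a `deque.append`; all I/O happens on the worker thread.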
## Dashboard

A single self-contained HTML page — no build step, no CDN, no npm. Log records stream in over WebSocket.
### Logs tab
- Colour-coded by level — DEBUG gray · INFO blue · WARNING amber · ERROR/CRITICAL red
- Level filter + full-text search
- Virtual list rendering — 100k+ logs with zero browser lag
- Auto-scroll with manual scroll override
- Export session as JSON
### Checkpoints tab
- Progress bar per checkpoint
- Overall progress = average across all checkpoints
- Expandable JSON data viewer
- Status badges — DONE ✅ · IN_PROGRESS 🟡 · FAILED 🔴 · SKIPPED ⚫
## API

### Logger
```python
log = Logger(
    name            = "my-app",
    host            = "localhost",
    port            = 5678,        # auto-increments if taken
    auto_open       = True,        # open browser on start
    dashboard       = True,        # False for CI / production
    checkpoint_path = ".pulselog/checkpoints.db",
    level           = "DEBUG",
    worker_interval = 0.01,        # drain interval in seconds (default 10ms)
    queue_size      = 100_000,     # max records before oldest evicted
    overflow        = "drop",      # "drop" | "block" | "raise"
)
```
### Logging
```python
log.debug("msg", **extra)
log.info("msg", **extra)
log.warning("msg", **extra)
log.error("msg", **extra)
log.critical("msg", **extra)

# kwargs appear as structured metadata in the dashboard
log.info("request handled", user_id=42, latency_ms=12, status=200)

# exception() captures the current traceback automatically
try:
    result = model.predict(x)
except Exception:
    log.exception("prediction failed", input_shape=str(x.shape))
```
### Checkpoints
```python
log.save(
    name     = "epoch-5",
    data     = {"loss": 0.31, "acc": 0.94},
    status   = "DONE",        # "DONE" | "IN_PROGRESS" | "FAILED" | "SKIPPED"
    note     = "best so far",
    progress = 50,            # 0–100, shown as progress bar in dashboard
)

result = log.load("epoch-5")     # → dict | None (never raises)
names  = log.checkpoints()       # → list[str], most recent first
log.delete_checkpoint("epoch-3")
```
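Since the default `checkpoint_path` is a `.db` file, a store like this can be sketched over `sqlite3` and JSON. This is a hedged illustration of the save/load semantics (miss returns `None`, never raises); `CheckpointStore` and its schema are hypothetical, not pulselog's actual internals.

```python
# Illustrative checkpoint store: JSON payloads keyed by name in SQLite.
import json
import sqlite3
import time

class CheckpointStore:
    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints ("
            " name TEXT PRIMARY KEY, data TEXT, status TEXT,"
            " note TEXT, progress INTEGER, created REAL)"
        )

    def save(self, name, data, status="DONE", note="", progress=0):
        # INSERT OR REPLACE makes save() idempotent per checkpoint name.
        self._db.execute(
            "INSERT OR REPLACE INTO checkpoints VALUES (?, ?, ?, ?, ?, ?)",
            (name, json.dumps(data), status, note, progress, time.time()),
        )
        self._db.commit()

    def load(self, name):
        row = self._db.execute(
            "SELECT data FROM checkpoints WHERE name = ?", (name,)
        ).fetchone()
        return json.loads(row[0]) if row else None  # miss -> None, no exception

store = CheckpointStore()
store.save("epoch-5", {"loss": 0.31, "acc": 0.94}, status="DONE", progress=50)
print(store.load("epoch-5"))   # {'loss': 0.31, 'acc': 0.94}
print(store.load("missing"))   # None
```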
### Context and grouping
```python
# tag() groups subsequent logs under a label (per-thread — safe for concurrent use)
log.tag("training")

# Context manager — restores the previous tag on exit, even on exception
with log.context(tag="validation"):
    log.info("val loss", loss=0.41)
# tag is restored here

# Visual divider in the dashboard stream
log.divider("epoch boundary")
```
### Utilities
```python
log.flush(timeout=2.0)   # drain queue synchronously — returns False if timeout hit
log.shutdown()           # graceful teardown (also called automatically on exit)

stats = log.stats()
# {
#     "records_logged":    int,
#     "records_dropped":   int,
#     "drop_rate":         float,   # e.g. 0.04 = 4%
#     "queue_size":        int,
#     "queue_capacity":    int,
#     "queue_fill_pct":    float,
#     "checkpoints_saved": int,
#     "dashboard_clients": int,
#     "uptime_seconds":    float,
# }
```
## stdlib logging integration

Drop-in bridge — all structured fields (`lineno`, `filename`, `funcName`, `exc_info`) are forwarded to the dashboard.
```python
import logging
from pulselog.handler import PulseHandler

root = logging.getLogger()
root.setLevel(logging.INFO)          # root defaults to WARNING
root.addHandler(PulseHandler("my-app"))

logging.info("this appears in the dashboard")
logging.error("with traceback", exc_info=True)  # traceback preserved
```
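The bridge pattern itself, a handler whose `emit()` only hands the record to another thread, can be shown with the stdlib's own `QueueHandler`/`QueueListener`; the dashboard side is simulated here by a list-collecting handler. This is a sketch of the technique, not `PulseHandler`'s actual code.

```python
# Non-blocking stdlib bridge: emit() enqueues; a listener thread consumes.
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

q = queue.Queue()
received = []

class ListHandler(logging.Handler):
    def emit(self, record):
        # Structured fields (lineno, filename, funcName, exc_info) survive on
        # the LogRecord and could be forwarded to a dashboard from here.
        received.append((record.levelname, record.getMessage()))

listener = QueueListener(q, ListHandler())
listener.start()

logger = logging.getLogger("bridge-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(QueueHandler(q))

logger.info("this would appear in the dashboard")
listener.stop()        # drains remaining queued records before returning
print(received[0])     # ('INFO', 'this would appear in the dashboard')
```

The calling thread only pays for `queue.put()`; formatting and delivery happen on the listener's thread, which is the same division of labor pulselog describes.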
## Configuration

Priority (highest → lowest): `Logger()` kwargs → env vars → `pulselog.toml` → defaults.
### Environment variables

```bash
PULSELOG_DASHBOARD=false
PULSELOG_HOST=0.0.0.0
PULSELOG_PORT=8080
PULSELOG_AUTO_OPEN=false
PULSELOG_CHECKPOINT_PATH=/data/checkpoints.db
PULSELOG_LEVEL=INFO
PULSELOG_WORKER_INTERVAL=0.01
```
### pulselog.toml (place in project root)

```toml
[pulselog]
host = "0.0.0.0"
port = 8080
auto_open = false
level = "INFO"
worker_interval = 0.01
```
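The precedence chain can be sketched as a single lookup function. The `resolve` name and the coercion rule (env strings cast to the default's type) are illustrative assumptions, not pulselog's documented behavior; only the ordering comes from the section above.

```python
# Sketch of the precedence chain: kwargs > env vars > pulselog.toml > defaults.
import os

DEFAULTS = {"host": "localhost", "port": 5678, "level": "DEBUG"}

def resolve(key, kwargs, toml_cfg):
    if key in kwargs:                        # 1. Logger() kwargs win
        return kwargs[key]
    env = os.environ.get(f"PULSELOG_{key.upper()}")
    if env is not None:                      # 2. then environment variables,
        return type(DEFAULTS[key])(env)      #    coerced to the default's type
    if key in toml_cfg:                      # 3. then pulselog.toml
        return toml_cfg[key]
    return DEFAULTS[key]                     # 4. finally built-in defaults

os.environ["PULSELOG_PORT"] = "8080"
toml_cfg = {"port": 9999, "level": "INFO"}

print(resolve("port", {}, toml_cfg))              # 8080 (env beats toml)
print(resolve("port", {"port": 7000}, toml_cfg))  # 7000 (kwarg beats env)
print(resolve("level", {}, toml_cfg))             # INFO (toml beats default)
print(resolve("host", {}, {}))                    # localhost (default)
```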
## Production usage

```python
# Disable the dashboard, keep checkpoints, warn on stderr when records drop
log = Logger(
    "prod",
    dashboard       = False,
    checkpoint_path = "/data/checkpoints.db",
    overflow        = "drop",   # never block — warn on stderr instead
)
```
With `dashboard=False`:

- No threads started, no port bound, no browser opened
- Checkpoint reads/writes still work
- Log calls return in ~100ns (level check only, no queue overhead)
- CI environments (`CI=true`) disable the dashboard automatically
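The "level check only" fast path is just a numeric comparison and an early return, which is why a disabled call costs nanoseconds rather than microseconds. A hypothetical miniature:

```python
# Sketch of the disabled-call fast path: compare a level number, return early.
LEVELS = {"DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}

class MiniLogger:
    def __init__(self, level="DEBUG", enabled=True):
        self._min = LEVELS[level]
        self._enabled = enabled
        self.emitted = []

    def info(self, msg, **extra):
        if not self._enabled or LEVELS["INFO"] < self._min:
            return                      # fast path: no queue, no formatting
        self.emitted.append((msg, extra))

log = MiniLogger(level="WARNING")
log.info("skipped", step=1)   # below threshold: early return, ~free
print(len(log.emitted))       # 0
```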
## ML training example

```python
from pulselog import Logger

log = Logger("resnet-training")

prev_loss = float("inf")
for epoch in range(1, 11):
    loss, acc = train_epoch(epoch)

    log.info(f"epoch {epoch}", loss=loss, acc=acc)
    log.save(
        f"epoch-{epoch}",
        {"loss": loss, "acc": acc},
        status   = "DONE",
        progress = epoch * 10,
    )

    if loss > prev_loss * 1.5:
        log.warning("loss spike detected", epoch=epoch, loss=loss)
    prev_loss = loss

log.shutdown()
```
## Design notes

**Queue** — `collections.deque(maxlen=100_000)`, guarded by a single `threading.Lock`. The `_event.set()` wake signal fires only on empty→non-empty transitions, not on every `put()`, reducing lock acquisitions under burst load.

**Worker** — wakes immediately on new records via `threading.Event`, falls back to polling every 10ms. Adaptive: halves the interval when the queue exceeds 50% capacity, restores it when calm.
**Drop policy** — when the queue is full, the oldest record is evicted and a stderr warning is emitted every 1,000 drops. Configure `overflow="block"` to pause the caller instead, or `overflow="raise"` to surface the error explicitly.
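The drop policy falls out of `deque(maxlen=N)` for free: appending to a full deque silently evicts the oldest element. A sketch under that assumption (the class and its tiny `maxlen` are illustrative; the 1,000-drop stderr cadence mirrors the note above):

```python
# Illustrative overflow="drop": count evictions, warn every `warn_every` drops.
import sys
from collections import deque

class DroppingQueue:
    def __init__(self, maxlen=4, warn_every=1000):
        self._dq = deque(maxlen=maxlen)
        self._maxlen = maxlen
        self._warn_every = warn_every
        self.dropped = 0

    def put(self, record):
        if len(self._dq) == self._maxlen:   # this append will evict the oldest
            self.dropped += 1
            if self.dropped % self._warn_every == 0:
                print(f"pulselog: {self.dropped} records dropped", file=sys.stderr)
        self._dq.append(record)             # never blocks, never raises

q = DroppingQueue(maxlen=4)
for i in range(10):
    q.put(i)
print(list(q._dq), q.dropped)   # [6, 7, 8, 9] 6
```

The caller keeps its O(1) guarantee even under overload; only the oldest, least-current records are lost.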
**Shutdown** — `atexit` and SIGTERM both call `shutdown()` once (guarded against double invocation). `flush()` accepts a configurable timeout and returns `False` if the queue wasn't fully drained in time.
**Thread safety** — `tag()` and `context()` use `threading.local()` so each thread maintains its own tag state independently.
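The per-thread tag mechanism can be sketched as a `threading.local()` attribute plus a context manager that restores the previous value in a `finally` block. `TagState` is a hypothetical stand-in, not pulselog's class:

```python
# Illustrative per-thread tag state with context-manager restore.
import threading
from contextlib import contextmanager

class TagState:
    def __init__(self):
        self._local = threading.local()   # one independent slot per thread

    @property
    def tag(self):
        return getattr(self._local, "tag", None)

    def set(self, tag):
        self._local.tag = tag

    @contextmanager
    def context(self, tag):
        prev = self.tag
        self.set(tag)
        try:
            yield
        finally:
            self.set(prev)                # restored even if the body raises

tags = TagState()
tags.set("training")
with tags.context("validation"):
    inside = tags.tag                     # "validation"
after = tags.tag                          # "training", restored on exit

seen = {}
def worker():
    seen["tag"] = tags.tag                # other threads see their own state
t = threading.Thread(target=worker)
t.start(); t.join()
print(inside, after, seen["tag"])         # validation training None
```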
## Requirements

- Python ≥ 3.8
- `websockets` ≥ 11.0 (only needed with `dashboard=True`)

```bash
pip install pulselog   # includes websockets
```
## License
MIT
## File details

### Source distribution: pulselog-0.1.3.tar.gz

- Size: 37.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d6784280e63ed73a00f13f2c88c2566a03e2d55b6fd467a89d9cc8327ec649d8` |
| MD5 | `28e1acf2fabeba14f86ed907100b24ac` |
| BLAKE2b-256 | `0932cc0b20fa33e04a130af5aa383a7c632e598497ea9953b83fbb4e9c49e304` |
### Built distribution: pulselog-0.1.3-py3-none-any.whl

- Size: 30.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2d79078eb685381864fe5e1321093ecc676abc185d2f79d8f5e0c130505f22d6` |
| MD5 | `c5fff7d3991ecb2ad827c6a51be399fd` |
| BLAKE2b-256 | `073a13ed16b0d762556dff59d7426f16d1f4487f24511a8f3ef886dd0111f9ac` |