A viser extension with out-of-the-box support for the time dimension

viser4d

viser4d is a small wrapper around viser that adds a time dimension. It records scene operations across timesteps, supports timeline-synced audio playback, and plays them back client-locally in each browser tab.

Quickstart

pip install viser4d

import numpy as np
import viser4d

server = viser4d.Viser4dServer(num_steps=10, fps=10)

with server.at(0) as timeline:
    points = np.random.uniform(-1.0, 1.0, size=(200, 3))
    point_cloud = timeline.scene.add_point_cloud(
        "/points",
        points=points,
        colors=(255, 200, 0),
    )

for i in range(1, 10):
    with server.at(i):
        points = np.random.uniform(-1.0, 1.0, size=(200, 3))
        point_cloud.points = points

server.sleep_forever()

Open the viewer in your browser and use the built-in Playback controls to play, scrub, and step through the client-local timeline.

Timeline model

  • The built-in browser controls (Play, Pause, Prev, Next, and the Timestep slider) are client-local. Different tabs can be on different timesteps at the same time, and those controls are handled directly in the browser rather than round-tripping through Python.
  • The fps= passed to Viser4dServer(...) defines the timeline step rate used for audio timing and .viser export. Client playback speed is expressed as a speed factor on top of that base rate.
  • server.on_timestep_change(...) fires whenever any client commits a new discrete timestep and passes (client, timestep). With multiple clients, it is an aggregate event stream and may repeat timesteps or arrive out of order.
  • server.on_playback_change(...) fires whenever a client reports that its built-in transport changed between playing and paused, and passes (client, is_playing).
  • server.play() and server.pause() broadcast playback commands to the clients that are connected right now. They do not create a shared server clock.
  • loop= on Viser4dServer(...) and server.set_loop(...) control whether playback wraps at the end for connected and future clients.
  • playback_speed= on Viser4dServer(...) and server.set_playback_speed(...) control the default playback speed for connected and future clients.
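The loop= setting only changes what happens at the final step. A plain-Python sketch of the wrap rule (this models the client behavior described above; it is not part of the viser4d API):

```python
def next_timestep(t: int, num_steps: int, loop: bool) -> int:
    """Advance one step, wrapping at the end when looping is enabled."""
    if t + 1 < num_steps:
        return t + 1
    # At the last step: wrap to 0 when looping, otherwise stay put.
    return 0 if loop else t
```

With num_steps=10, step 9 advances to 0 when loop=True and stays at 9 otherwise.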

Streaming ingest

If data arrives incrementally, initialize components at t=0 and then record updates as each new frame arrives:

import numpy as np
import viser4d

num_steps = 180
server = viser4d.Viser4dServer(num_steps=num_steps, fps=30)

def get_next_points() -> np.ndarray:
    # Replace with your real sensor/network/pipeline frame source.
    return np.random.normal(size=(400, 3)).astype(np.float32)

with server.at(0) as timeline:
    point_cloud = timeline.scene.add_point_cloud(
        "/stream/points",
        points=get_next_points(),
    )

for t in range(1, num_steps):
    points = get_next_points()
    with server.at(t):
        point_cloud.points = points

server.sleep_forever()

Timestep callbacks

If you have your own visualization logic and just want to use viser4d's timeline infrastructure, you can register a callback that fires whenever any connected client commits a new discrete timestep:

import viser
import viser4d

server = viser4d.Viser4dServer(num_steps=100)

def on_timestep(client: viser.ClientHandle, t: int) -> None:
    update_video_frame(client.scene, t)
    update_client_overlays(client.scene, t)

server.on_timestep_change(on_timestep)
server.sleep_forever()

With multiple clients, this callback is aggregate: if two tabs both visit timestep 3, it will fire twice, once for each client.
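Because the stream is aggregate, a handler that should run once per timestep regardless of which client committed it needs its own bookkeeping. A minimal sketch in plain Python (the set-based dedup is an application-side pattern, not part of viser4d):

```python
seen_timesteps: set[int] = set()

def should_process(t: int) -> bool:
    """Return True the first time any client commits timestep t."""
    if t in seen_timesteps:
        return False
    seen_timesteps.add(t)
    return True
```

Call should_process(t) at the top of your on_timestep_change callback and return early when it is False.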

Playback state callbacks

If you need to know when a client starts or stops playback, use the playback callback and the per-client playback handles. Use server.get_client_playback(client_id) for direct lookup, or server.get_client_playbacks() to snapshot all connected clients:

import viser
import viser4d

server = viser4d.Viser4dServer(num_steps=100)

def on_playback_change(client: viser.ClientHandle, is_playing: bool) -> None:
    print(client.client_id, is_playing)
    playback = server.get_client_playback(client.client_id)
    if playback is not None:
        print(playback.current_timestep, playback.speed)

server.on_playback_change(on_playback_change)

# Snapshot of connected playback handles keyed by client id.
for client_id, playback in server.get_client_playbacks().items():
    print(
        client_id,
        playback.is_playing,
        playback.current_timestep,
        playback.speed,
    )

ClientPlaybackHandle.is_playing reflects the last play/pause state reported by that browser tab. server.play() and server.pause() send commands, but the handle state only changes once the client reports the result back. ClientPlaybackHandle.speed is the tab's current playback-speed factor. If you need the effective playback FPS, compute server.fps * playback.speed.
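The effective rate is just the product described above; a hypothetical helper, shown for illustration:

```python
def effective_fps(base_fps: float, speed: float) -> float:
    """Rate at which a tab advances timesteps: base timeline fps times its speed factor."""
    return base_fps * speed
```

A server created with fps=30 and a tab at speed 2.0 advances 60 timesteps per second.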

Each handle also exposes per-client control methods that mirror the server-wide commands but apply only to that one tab:

playback.seek(t)            # jump to a specific timestep
playback.play()             # start playback
playback.pause()            # pause
playback.set_speed(2.0)     # update speed without starting playback
playback.refresh()          # redraw current timestep from recorded state

Server playback commands

  • server.play() starts each connected client from that client's own current timestep, using its current playback speed.
  • server.pause() pauses each connected client wherever it currently is.
  • server.set_playback_speed(...) updates the default playback speed and pushes it to connected clients without starting playback. The initial default can be set with Viser4dServer(..., playback_speed=2.0).
  • server.set_loop(...) updates the loop setting for connected clients and for clients that connect later. The initial default can be set with Viser4dServer(..., loop=True).
  • server.refresh() redraws the current timestep on all connected clients, which is useful after updating recorded scene data while paused.
  • server.set_steps(n) resizes the timeline after initialization. Growing keeps existing recorded data and exposes new empty timesteps; shrinking discards any recorded steps at or beyond n and clamps connected clients into range.
  • server.clear() resets the recorded timeline, resets connected clients back to step 0 at speed 1.0, and clears shared scene nodes added through server.scene.

None of these change the base timeline step rate used for audio timing or export; set that with fps= when you construct the server. New clients always start paused at timestep 0, inheriting the current server playback speed and loop setting.
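The set_steps(n) semantics can be modeled on a plain list of recorded steps plus the connected clients' positions; a sketch of the described behavior, not viser4d internals:

```python
def resize_timeline(recorded: list, client_steps: list[int], n: int):
    """Model server.set_steps(n): grow with empty steps, or shrink and clamp clients."""
    if n >= len(recorded):
        recorded = recorded + [None] * (n - len(recorded))  # expose new empty timesteps
    else:
        recorded = recorded[:n]  # discard recorded steps at or beyond n
    clamped = [min(t, n - 1) for t in client_steps]  # pull out-of-range clients back in
    return recorded, clamped
```

Growing never touches client positions; shrinking only moves clients that were sitting past the new end.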

Export recordings

To export a .viser recording, use server.serialize():

import viser4d

server = viser4d.Viser4dServer(num_steps=100)
# ... record timeline data ...
blob = server.serialize(start_timestep=0, end_timestep=None)
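serialize() returns the recording as an in-memory blob; assuming it is raw bytes (check the return type in your version), saving it to disk is a one-liner:

```python
from pathlib import Path

def save_recording(blob: bytes, path: str) -> None:
    """Write a serialized .viser recording to disk."""
    Path(path).write_bytes(blob)
```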

To export a standalone HTML viewer, use server.as_html():

import viser4d

server = viser4d.Viser4dServer(num_steps=100)
# ... record timeline data ...
html = server.as_html(start_timestep=0, end_timestep=None)

Streaming audio append

For audio that arrives incrementally, create a track once inside at(t) and append chunks through the returned handle:

import numpy as np
import viser4d

server = viser4d.Viser4dServer(num_steps=300, fps=30)

with server.at(0) as timeline:
    audio = timeline.audio.add_track(
        "/stream/audio",
        data=np.zeros(1600, dtype=np.float32),
        sample_rate=16000,
    )

for _ in range(120):
    chunk = np.random.uniform(-0.05, 0.05, size=(1600,)).astype(np.float32)
    audio.append(chunk)

AudioHandle.append(...) extends the same track contiguously (same channel count). AudioHandle.volume is a readable and writable float in [0, 1] that controls playback gain, useful for attaching a GUI slider.
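The relationship between appended samples and timeline steps follows from the server's fps and the track's sample_rate; a plain-Python helper for reasoning about it (not part of the viser4d API):

```python
def chunk_timesteps(num_samples: int, sample_rate: int, fps: float) -> float:
    """How many timeline steps a chunk of audio spans."""
    return num_samples / sample_rate * fps
```

In the example above, each 1600-sample chunk at 16 kHz covers 0.1 s of audio, i.e. 3 timesteps at fps=30.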

How it works

Server-side recording. server.scene is always viser's live/static scene API. Timeline writes go through the explicit object returned by server.at(t):

with server.at(t) as timeline:         Outside at(t):
    timeline.scene.add_frame(...)      server.scene.add_frame(...)
    timeline.audio.add_track(...)             │
           │                                  ▼
           ▼                               updates live viser scene
      records to timeline

Recorded messages are grouped into fixed-size blocks in the timeline store.

Client-side playback. When a browser tab connects, the server injects a JavaScript runtime (TimelineRuntime) alongside the normal viser viewer. Each tab manages its own independent transport: play, pause, seek, and speed are all client-local. The runtime fetches timeline blocks from the server on demand and keeps a small three-block circular window around the current playback position. At each timestep the runtime replays the recorded viser messages for that step (and any prior steps in the same block that haven't been applied yet), keeping the rendered scene in sync with the timeline position.
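Which blocks the runtime keeps resident can be sketched as simple index arithmetic; this is a model of the behavior described above, not the actual TimelineRuntime code:

```python
def block_window(timestep: int, block_size: int, num_blocks: int) -> list[int]:
    """Block indices kept resident around the current playback position."""
    current = timestep // block_size
    window = [current - 1, current, current + 1]
    return [b for b in window if 0 <= b < num_blocks]
```

With block_size=64 and num_blocks=5, timestep 100 sits in block 1, so blocks 0-2 are resident; scrubbing far away evicts them and fetches a new window.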

  • Inside at(t): Use timeline.scene and timeline.audio from with server.at(t) as timeline:.
  • Outside at(t): server.scene remains viser's live/static scene API.
  • Timeline handles after recording: mutating a scene handle returned from timeline.scene.add_*() outside at(t) applies a persistent global override across playback and export. Creating new timeline nodes still requires at(t).
  • Client playback: Each browser tab owns its own transport and playback state.
  • Block streaming: Timeline data is fetched block-by-block as the client plays or scrubs.
  • Timestep callbacks: on_timestep_change(...) aggregates committed client steps and passes the source client.
  • Playback callbacks: on_playback_change(...) reports per-client play/pause transitions.
  • Audio: Add timeline-synced tracks with timeline.audio.add_track(...) inside at(t).

See examples/ for more.

Quality checks

uvx ruff format .
uvx ruff check .
uvx ty check
npm --prefix src/viser4d/client run typecheck
npm --prefix src/viser4d/client run build

Tests

npm --prefix src/viser4d/client run build
uv run --group dev pytest -q
