ModelPulse 🚀

End-to-end partial-weight transfer pipeline for edge LLM inference.

ModelPulse enables a unique "Zero-Disk" inference strategy: Device A (Server) serves model shards over the network, while Device B (Client/Bridge) reconstructs the model entirely in RAM and runs inference via llama.cpp without ever writing the full GGUF to physical storage.

Data Flow Diagram

┌─────────────────────────────────────────────────────────────┐
│                     Server (Device A)                       │
│                  FastAPI @ 0.0.0.0:8000                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  WebSocket /ws (Control Plane)   HTTP (Data Plane)          │
│  ├─ MODEL_READY                  ├─ GET /manifest           │
│  ├─ PING/PONG                    ├─ GET /shards/*           │
│  ├─ METRICS                      └─ POST /metrics           │
│  └─ ACK/BYE                                                 │
│                                                             │
│  /models/upload (Multipart)                                 │
│  └─ Accept manifest.json + *.shard files                    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
         ↑                              ↑
         │                              │
         │ WS connect                   │ HTTP GET/POST
         │ + MODEL_READY signal         │ + shard stream
         │                              │
┌────────┴──────────────────────────────┴─────────────────────┐
│                   Client (Device B)                         │
│                       Bridge CLI                            │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Connect WebSocket → Send HELLO                          │
│  2. Receive MODEL_READY → Fetch manifest (HTTP)             │
│  3. Download shards (HTTP streaming)                        │
│  4. Assemble GGUF in /dev/shm                               │
│  5. Load with llama.cpp                                     │
│  6. Run inference                                           │
│  7. Send METRICS → Loop back to step 2                      │
│     (no restart, listen for next model)                     │
│                                                             │
└─────────────────────────────────────────────────────────────┘

✨ Key Features

  • 🛡️ Zero-Disk Strategy: Models are assembled in tmpfs (/dev/shm), ensuring no persistent GGUF footprint on the client's disk.
  • 🔄 Dynamic Model Swapping: Upload new models to the server at runtime; connected clients automatically unload, pull, and reload the new model without a restart.
  • ⚡ Delta Updates (New!): Update only the changed tensors in a model. The bridge patches its in-memory GGUF in real time, downloading only a fraction of the full model size.
  • 📊 Real-time Telemetry: Detailed inference metrics (TTFT, tok/s, RAM delta, CPU temp) are streamed back to the server for centralized monitoring.
  • 🛠️ Integrated Benchmarking: Built-in suite to stress-test edge devices and validate performance across different quantization levels.
  • 🌐 Network Agnostic: Works seamlessly over local networks, Tailscale, or any HTTP/WS-capable connection.

📦 Installation

Install ModelPulse from PyPI:

pip install modelpulse

Alternatively, install directly from the repository for the latest dev features:

pip install git+https://github.com/MdSufiyan005/ModelPulse.git

Note: llama-cpp-python compiles native code on install, so make sure its system build dependencies (e.g., build-essential, python3-dev) are present.


🔄 Workflow

1. Prepare Shards

Convert a monolithic .gguf file into a shard directory:

modelpulse server convert my_model.gguf ./my-shards/
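
The conversion step emits a manifest.json alongside the shard files. Its exact schema is internal to ModelPulse; the shape below is an illustrative assumption (field names included), paired with a small consistency check of the kind a client could run before downloading:

```python
# Illustrative manifest shape. Field names ("model_id", "total_size",
# "shards", "size") are assumptions, not ModelPulse's actual schema.
def validate_manifest(manifest: dict) -> bool:
    """True if the per-shard sizes add up to the declared total size."""
    return sum(s["size"] for s in manifest["shards"]) == manifest["total_size"]

example = {
    "model_id": "my_model",
    "total_size": 8192,
    "shards": [
        {"name": "tensor_000.shard", "size": 4096},
        {"name": "tensor_001.shard", "size": 4096},
    ],
}
```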

2. Start the Server

Start the control plane on Device A. It will default to using ./models-storage for storing model data.

modelpulse server run --host 0.0.0.0 --port 8000

3. Run the Bridge

Connect your edge device to the server. It will wait for a model to be assigned.

modelpulse bridge run http://<server-ip>:8000

4. Dynamic Upload

Upload your prepared shards to the server. All connected bridges will instantly receive the update.

# Full Baseline Upload
modelpulse server upload "qwen-3.5-2b" "./my-shards/"

# Delta Update (Auto-Diff)
modelpulse server upload "qwen-3.5-2b-v2" "./new-shards/" --base "qwen-3.5-2b" --base-dir "./old-shards/"
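
Conceptually, the auto-diff compares the base and new shard sets and ships only tensors whose bytes changed. A minimal sketch of that idea, using content hashes (the real diffing logic inside ModelPulse may differ):

```python
import hashlib

def tensor_digest(blob: bytes) -> str:
    """Content hash of one tensor's raw bytes."""
    return hashlib.sha256(blob).hexdigest()

def changed_tensors(base: dict[str, bytes], new: dict[str, bytes]) -> list[str]:
    """Names of tensors that are new or whose bytes differ from the base."""
    return [
        name for name, blob in new.items()
        if name not in base or tensor_digest(blob) != tensor_digest(base[name])
    ]
```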

📋 Command Reference

modelpulse server run

Start the FastAPI control plane.

Option            Default           Description
--shard-dir, -d   ./models-storage  Root directory for model storage
--host            127.0.0.1         Bind address
--port            8000              Listening port
--metrics-log     metrics.jsonl     File to append received telemetry
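
Because the metrics log is newline-delimited JSON, it can be post-processed with a few lines of Python. The field name tok_per_s below is an assumption inferred from the telemetry listed under Key Features, not a documented schema:

```python
import json

def mean_tokens_per_second(jsonl_text: str) -> float:
    """Average the (hypothetical) tok_per_s field over all logged records."""
    rates = [
        json.loads(line)["tok_per_s"]
        for line in jsonl_text.splitlines()
        if line.strip()  # skip blank lines
    ]
    return sum(rates) / len(rates)
```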

modelpulse server upload

Upload models or delta patches to the control plane.

Option      Default                Description
model_id    (Required)             Unique slug for the new model
paths       (Required)             Shard directory or list of .shard files
--base      None                   Base model ID for delta update
--base-dir  None                   Local directory of base model for auto-diff
--server    http://127.0.0.1:8000  Target server URL

modelpulse server convert

Convert a monolithic GGUF file into tensor-level shards.

Argument    Description
gguf_path   Path to the monolithic .gguf file
output_dir  Directory to store the generated shards

modelpulse bridge run

Connect to a server and enter the inference loop.

Option            Default        Description
host              (Required)     Server URL (e.g., http://100.64.0.5:8000)
--prompt, -p      (Interactive)  Run a single prompt and exit
--benchmark, -b   false          Run the standard benchmark suite
--max-tokens, -m  256            Token generation limit
--temperature     0.7            Sampling temperature
--n-ctx           2048           Context window size

๐Ÿ“ Project Layout

modelpulse/
├── modelpulse/             # Core package
│   ├── server/
│   │   └── server.py       # FastAPI + WebSocket control plane
│   ├── client/             # Bridge (Device B) logic
│   │   ├── cli.py          # Claude-inspired terminal UI
│   │   ├── bridge.py       # RAM GGUF assembly & llama.cpp loading
│   │   ├── shard_client.py # Async HTTP downloader for shards
│   │   └── benchmarks.py   # Built-in performance testing suite
│   ├── shared/             # Cross-component protocol definitions
│   │   ├── ws_protocol.py  # WebSocket message schemas
│   │   └── models.py       # ShardManifest & InferenceMetrics models
│   └── main.py             # Unified CLI entry point
├── tools/                  # Model preparation utilities
│   ├── gguf_to_shards.py   # GGUF → shard converter (tensor-level)
│   └── gguf_parser.py      # Low-level GGUF format metadata reader
├── TEST_WORKFLOW.md        # Step-by-step end-to-end testing guide
├── pyproject.toml          # Project metadata & dependencies
└── metrics.jsonl           # Append-only log of inference telemetry

💾 The Zero-Disk Strategy

ModelPulse leverages the Linux tmpfs (RAM-backed filesystem) to satisfy llama.cpp's requirement for a file path while keeping the actual data off physical storage:

  1. Pull: Bridge fetches manifest.json.
  2. Stream: Bridge pulls .shard files (tensor by tensor) into memory.
  3. Assemble: Bridge calculates GGUF layout and writes bytes to /dev/shm/sb_<pid>.gguf.
  4. Load: llama-cpp-python loads the model via mmap from the RAM-backed file.
  5. Clean: Once the model is unloaded, the virtual file is unlinked and memory is reclaimed.
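
The assemble and clean steps amount to writing concatenated bytes to a tmpfs path and unlinking the file after unloading. A minimal sketch, assuming a Linux host with /dev/shm mounted; the base_dir parameter and shard ordering are illustrative, not ModelPulse's internal API:

```python
import os

def assemble_in_ram(shards: list[bytes], base_dir: str = "/dev/shm") -> str:
    """Concatenate shard bytes into a RAM-backed file and return its path."""
    path = os.path.join(base_dir, f"sb_{os.getpid()}.gguf")
    with open(path, "wb") as f:
        for chunk in shards:
            f.write(chunk)
    # llama.cpp can now mmap this path; unlink it once the model is unloaded
    # and the kernel reclaims the memory.
    return path
```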

📡 Networking (Tailscale)

For easy cross-device connectivity without port forwarding, Tailscale is highly recommended:

# Get IP on Server
tailscale ip  # e.g., 100.66.170.100

# Connect Bridge
modelpulse bridge run http://100.66.170.100:8000

Built with ❤️ for Edge AI and Decentralized Inference.

Download files

Source Distribution

modelpulse-0.3.1.tar.gz (46.6 kB)

Built Distribution

modelpulse-0.3.1-py3-none-any.whl (44.8 kB)

File details

Details for the file modelpulse-0.3.1.tar.gz.

File metadata

  • Size: 46.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for modelpulse-0.3.1.tar.gz

Algorithm    Hash digest
SHA256       2dbb1bb1be5bccbcc493145bf50cd70f62c1794fe81197c6b07b4e3f6d7a88b1
MD5          d60baecd4fc0585dccfc69a5d941272b
BLAKE2b-256  738a043357efd2f76e3c412f5e68465310dc05a363a1af6e29121d0fd1f4d96d

File details

Details for the file modelpulse-0.3.1-py3-none-any.whl.

File metadata

  • Size: 44.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for modelpulse-0.3.1-py3-none-any.whl

Algorithm    Hash digest
SHA256       59c512c02815f0960f7f46f6bd06e5aa1a727d02411ccae978e48449a5118bd6
MD5          cba8edc2825349b203107926d18278fd
BLAKE2b-256  24d28f769e3069ba33904ce5ff996bd81ac5a275b31177e142578dbaf7f1eac9
