Remote Executable eXecution — inference with remotely-stored model weights
Project description
Rex Framework Package Overview
Rex Framework enables inference with remotely stored model weights without downloading full model checkpoints to local storage. Only the chunks needed for a given inference pass are fetched; the full model never resides in local memory or on disk.
Package intent:
- Primary: enable end users to run Rex for conversion, serving, and inference workflows.
- Secondary: support validation-oriented usage in CI and application test environments.
This package is intended for:
- Cloud-first inference where model chunks are fetched on demand.
- Memory-bounded environments where full checkpoint residency is undesirable.
- Notebook workflows, including Kaggle and Google Colab.
What You Get In This Package
- Python API for loading Rex manifests and running inference.
- CLI tools for conversion, validation, inspection, serving, benchmarking, and demo runs.
- Optional extras for PyTorch, cloud storage integrations, and benchmark tooling.
- The package is designed for runtime use; the repository's full test suite is not shipped in the PyPI artifacts.
Install
Minimal package (no PyTorch, useful for manifest validation and storage testing):
pip install rex-framework
Recommended for real inference workloads:
pip install "rex-framework[pytorch]"
With all optional features (cloud storage backends, benchmarking tools):
pip install "rex-framework[all]"
Available extras:
| Extra | What it adds |
|---|---|
| pytorch | torch>=2.0.0 for inference |
| google-drive | Google Drive storage backend |
| onedrive | OneDrive storage backend |
| bench | Benchmarking and profiling tools |
| all | All of the above |
Python compatibility notes:
- numpy is auto-installed with the package (no separate install needed).
- The pytorch extra supports Python 3.10 to 3.13.
- PyTorch wheels are not published on PyPI for Python 3.14.
- On some platforms (for example, macOS x86_64), Python 3.13 may still lack compatible torch wheels; use Python 3.11 in that case.
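The support window above can be checked before installing the extra; a minimal sketch (the helper name is illustrative, not part of the package):

```python
import sys

def pytorch_extra_supported(version_info=sys.version_info):
    """True if the interpreter falls inside the documented support
    window for the pytorch extra (Python 3.10 through 3.13)."""
    return (3, 10) <= tuple(version_info[:2]) <= (3, 13)

print("pytorch extra supported here:", pytorch_extra_supported())
print("Python 3.14 supported:", pytorch_extra_supported((3, 14, 0)))  # False
```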
Verify your install:
python -c "import torch, rex; print(torch.__version__); print(rex.__version__)"
How Rex Finds Your Model: Manifest and Chunk Paths
Rex does not load a model from a single checkpoint file. Instead it reads a manifest (a JSON file describing chunk locations, hashes, and metadata) and fetches individual chunks on demand from a base URL. Pointing Rex at the right manifest and base URL is therefore the essential setup step.
What a Rex manifest is
A manifest is a JSON file (manifest.json) generated by rex-convert. It contains:
- Model metadata (architecture, dtype, total size).
- A list of chunks: each chunk has a relative file path, byte offset, size, and SHA-256 hash.
- The expected base URL where chunks are hosted.
Chunks are served separately (e.g., as files in a directory) and fetched via HTTP Range requests. You do not need to host a special server — any server that supports Range headers works (nginx, S3, Google Drive direct links, OneDrive, or rex-serve).
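rex-validate checks manifests for you, but it helps to see the shape. The sketch below is illustrative only (the field names are assumptions, not the authoritative schema produced by rex-convert) and shows the SHA-256 check a client can apply to a fetched chunk:

```python
import hashlib

# Illustrative manifest shape; field names are assumptions, not the
# authoritative schema produced by rex-convert.
manifest = {
    "model_id": "my-model",
    "dtype": "float32",
    "total_size": 8,
    "chunks": [
        {"path": "weights/chunk_000.bin", "offset": 0, "size": 8,
         "sha256": hashlib.sha256(b"\x00" * 8).hexdigest()},
    ],
}

def verify_chunk(entry, data):
    """Check a fetched chunk against its manifest size and SHA-256 hash."""
    return (len(data) == entry["size"]
            and hashlib.sha256(data).hexdigest() == entry["sha256"])

print(verify_chunk(manifest["chunks"][0], b"\x00" * 8))  # True
```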
Step 1 — Convert your model to Rex format
rex-convert /path/to/model.pt \
--output ./rex_output \
--framework pytorch \
--model-id my-model
This produces:
rex_output/
manifest.json ← the manifest you will point load_model at
weights/
chunk_000.bin
chunk_001.bin
...
Step 2 — Host the chunk files
Option A — local HTTP server (for testing):
rex-serve --dir ./rex_output/weights --port 8080
Chunks are now reachable at http://localhost:8080/chunk_000.bin, etc.
Option B — any static HTTP host:
Upload rex_output/weights/ to any static host (nginx, S3, Cloudflare R2, GitHub Releases, Google Drive folder with public sharing). Note the base URL.
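Whichever host you pick, it must answer Range requests with partial content (status 206). The sketch below is not Rex code; it uses only the Python standard library to show what a minimal Range-capable endpoint does (http.server's default handler does not support Range, so the handler is written by hand):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

CHUNK = bytes(range(256))  # stand-in for a hosted chunk file

class RangeHandler(BaseHTTPRequestHandler):
    """Minimal handler honouring single-range requests like 'bytes=10-19'."""
    def do_GET(self):
        rng = self.headers.get("Range", "")
        if rng.startswith("bytes="):
            start, _, end = rng[len("bytes="):].partition("-")
            start = int(start)
            end = int(end) if end else len(CHUNK) - 1
            body = CHUNK[start:end + 1]
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range", f"bytes {start}-{end}/{len(CHUNK)}")
        else:
            body = CHUNK
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Fetch ten bytes of the "chunk" and confirm the server honoured the range.
req = Request(f"http://127.0.0.1:{port}/chunk_000.bin",
              headers={"Range": "bytes=10-19"})
with urlopen(req) as resp:
    status, data = resp.status, resp.read()
server.shutdown()
print(status, len(data))  # 206 10
```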
Step 3 — Point load_model at the manifest
from rex.api.config import RexConfig
from rex.api.load import load_model
config = RexConfig()
config.storage.base_url = "http://localhost:8080" # ← where chunks are hosted
runtime = load_model("./rex_output/manifest.json", config=config)
base_url tells Rex how to resolve relative chunk paths from the manifest. Every chunk path in manifest.json is appended to base_url when fetching.
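A sketch of that resolution rule, assuming plain string concatenation (the function name is hypothetical, not Rex's resolver):

```python
def resolve_chunk_url(base_url, chunk_path):
    """Join base_url and a relative chunk path from the manifest.
    Assumes simple concatenation; the real resolver may differ."""
    return base_url.rstrip("/") + "/" + chunk_path.lstrip("/")

print(resolve_chunk_url("http://localhost:8080", "chunk_000.bin"))
# http://localhost:8080/chunk_000.bin
```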
Remote manifest (manifest itself is also hosted):
config.storage.base_url = "https://my-host.example.com/weights"
runtime = load_model("https://my-host.example.com/weights/manifest.json", config=config)
Environment variable alternative:
export REX_STORAGE_URL=https://my-host.example.com/weights
python your_script.py
Step 4 — Run inference
import numpy as np
from rex.api.generate import run_inference_sync
input_data = np.random.randn(1, 768).astype(np.float32)
output, metrics = run_inference_sync("./rex_output/manifest.json", input_data)
print(f"Inference time: {metrics.total_time_ms:.1f} ms")
Storage Backends
Rex supports multiple storage backends. Set config.storage.base_url to the appropriate URL scheme:
| Backend | URL format | Extra required |
|---|---|---|
| Local HTTP / rex-serve | http://localhost:8080 | none |
| Remote HTTP/HTTPS | https://example.com/weights | none |
| Google Drive | gdrive://folder-id | google-drive |
| OneDrive | onedrive://drive-id/path | onedrive |
| iCloud | icloud://path/to/weights | none |
| Local filesystem | file:///abs/path/to/weights | none |
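Backend selection keys off the URL scheme. A sketch of that dispatch, mirroring the table above (the mapping and function are illustrative, not Rex internals):

```python
from urllib.parse import urlparse

# Scheme -> (backend name, extra required); mirrors the backends table.
BACKENDS = {
    "http": ("http", None),
    "https": ("http", None),
    "gdrive": ("google-drive", "google-drive"),
    "onedrive": ("onedrive", "onedrive"),
    "icloud": ("icloud", None),
    "file": ("local", None),
}

def pick_backend(base_url):
    """Return (backend, extra) for a base_url, keyed on its URL scheme."""
    scheme = urlparse(base_url).scheme
    try:
        return BACKENDS[scheme]
    except KeyError:
        raise ValueError(f"unsupported storage scheme: {scheme!r}") from None

print(pick_backend("gdrive://folder-id"))  # ('google-drive', 'google-drive')
```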
Authenticated endpoints (e.g., private S3 or token-gated APIs):
config.storage.auth_token = "Bearer YOUR_TOKEN"
Or via environment variable:
export REX_AUTH_TOKEN=Bearer YOUR_TOKEN
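The two paths can coexist; the sketch below assumes an explicitly configured token takes precedence over the environment variable (that precedence order is an assumption, not documented Rex behaviour, and the helper is hypothetical):

```python
import os

def resolve_auth_token(explicit=None):
    """Hypothetical helper: prefer an explicitly configured token,
    fall back to the REX_AUTH_TOKEN environment variable."""
    return explicit or os.environ.get("REX_AUTH_TOKEN")

os.environ["REX_AUTH_TOKEN"] = "Bearer ENV_TOKEN"
print(resolve_auth_token())                       # Bearer ENV_TOKEN
print(resolve_auth_token("Bearer CONFIG_TOKEN"))  # Bearer CONFIG_TOKEN
```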
Notebook Usage — Kaggle
Kaggle notebooks run on isolated kernels with internet access. The recommended pattern is to convert your model beforehand, host the chunks somewhere reachable (HTTPS URL, Google Drive public folder, or a Kaggle Dataset), then install Rex and load from that URL.
Install in a Kaggle cell
# Cell 1 — install
!pip install "rex-framework[pytorch]" -q
import rex, torch
print(rex.__version__, torch.__version__)
Load from an HTTPS host
# Cell 2 — configure and load
from rex.api.config import RexConfig
from rex.api.load import load_model
MANIFEST_URL = "https://your-static-host.com/rex_output/manifest.json"
CHUNKS_BASE_URL = "https://your-static-host.com/rex_output/weights"
config = RexConfig()
config.storage.base_url = CHUNKS_BASE_URL
config.cache.max_memory_cache_bytes = 512 * 1024 * 1024 # 512 MB limit
runtime = load_model(MANIFEST_URL, config=config)
Load from a Kaggle Dataset
Upload your rex_output/ directory as a Kaggle Dataset. Kaggle mounts datasets at /kaggle/input/<dataset-name>/.
# Cell 2 — load from Kaggle Dataset mount
from rex.api.config import RexConfig
from rex.api.load import load_model
MANIFEST_PATH = "/kaggle/input/my-rex-model/manifest.json"
CHUNKS_BASE_URL = "file:///kaggle/input/my-rex-model/weights"
config = RexConfig()
config.storage.base_url = CHUNKS_BASE_URL
runtime = load_model(MANIFEST_PATH, config=config)
Add Kaggle Secrets for authenticated endpoints
from kaggle_secrets import UserSecretsClient
secrets = UserSecretsClient()
token = secrets.get_secret("REX_AUTH_TOKEN")
config.storage.auth_token = f"Bearer {token}"
Run inference on Kaggle
# Cell 3 — inference
import numpy as np
from rex.api.generate import run_inference_sync
input_data = np.random.randn(1, 768).astype(np.float32)
output, metrics = run_inference_sync(MANIFEST_PATH, input_data)
print(f"Output shape: {output.shape}")
print(f"Inference time: {metrics.total_time_ms:.1f} ms")
Notebook Usage — Google Colab
Google Colab provides a transient VM with internet access. The same manifest/chunk remote loading pattern applies. Colab T4 or A100 GPUs can be used if your Rex model targets CUDA.
Install in Colab
# Cell 1 — install
!pip install "rex-framework[pytorch]" -q
import rex, torch
print(rex.__version__, torch.__version__)
Load from an HTTPS host
# Cell 2 — configure and load
from rex.api.config import RexConfig
from rex.api.load import load_model
MANIFEST_URL = "https://your-static-host.com/rex_output/manifest.json"
CHUNKS_BASE_URL = "https://your-static-host.com/rex_output/weights"
config = RexConfig()
config.storage.base_url = CHUNKS_BASE_URL
config.cache.max_memory_cache_bytes = 1 * 1024 * 1024 * 1024 # 1 GB (Colab has more RAM)
config.scheduler.enable_prefetch = True
config.scheduler.prefetch_window = 4
runtime = load_model(MANIFEST_URL, config=config)
Load from Google Drive in Colab
If you uploaded your rex_output/ to your Google Drive, mount it and point Rex at the local path:
# Cell 2a — mount Google Drive
from google.colab import drive
drive.mount("/content/drive")
# Cell 2b — load from mounted Drive path
from rex.api.config import RexConfig
from rex.api.load import load_model
MANIFEST_PATH = "/content/drive/MyDrive/rex_output/manifest.json"
CHUNKS_BASE_URL = "file:///content/drive/MyDrive/rex_output/weights"
config = RexConfig()
config.storage.base_url = CHUNKS_BASE_URL
runtime = load_model(MANIFEST_PATH, config=config)
Use Colab Secrets for tokens
from google.colab import userdata
config.storage.auth_token = f"Bearer {userdata.get('REX_AUTH_TOKEN')}"
GPU inference in Colab
Rex will use the available CUDA device automatically when PyTorch detects a GPU. Confirm your runtime type is set to T4 GPU or A100 in Colab's Runtime menu.
import torch
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU")
Core Principle
Rex executes with bounded local residency by streaming model chunks from remote storage through HTTP range fetches and cache-aware scheduling. At no point does the full model need to exist locally.
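This invariant can be pictured as a byte-budgeted LRU cache: chunks are fetched on demand and the least recently used are evicted once the budget is exceeded. A minimal sketch of the principle (not Rex's actual cache implementation):

```python
from collections import OrderedDict

class BoundedChunkCache:
    """LRU cache that never holds more than max_bytes of chunk data."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self._chunks = OrderedDict()

    def get(self, path, fetch):
        if path in self._chunks:
            self._chunks.move_to_end(path)        # mark as recently used
            return self._chunks[path]
        data = fetch(path)                        # remote fetch on miss
        self._chunks[path] = data
        self.used += len(data)
        while self.used > self.max_bytes:         # evict LRU until under budget
            _, evicted = self._chunks.popitem(last=False)
            self.used -= len(evicted)
        return data

# Stream four 1 KiB chunks through a 2 KiB budget: at most two stay resident.
cache = BoundedChunkCache(max_bytes=2048)
for i in range(4):
    cache.get(f"chunk_{i:03d}.bin", lambda p: b"\x00" * 1024)
print(cache.used)  # 2048
```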
Feature Controls (Quick Reference)
Control Rex behaviour through RexConfig:
from rex.api.config import RexConfig
config = RexConfig()
# How much local memory the cache can use
config.cache.max_memory_cache_bytes = 512 * 1024 * 1024
# Fraction of the full model allowed locally at any time (Rex invariant)
config.cache.max_local_fraction_of_model = 0.4
# Cache eviction policy: lru | lfu | weighted_utility
config.cache.policy = "weighted_utility"
# Prefetch ahead of current execution
config.scheduler.enable_prefetch = True
config.scheduler.prefetch_window = 4
# Execution planning mode: graph | sequential
config.scheduler.scheduler_mode = "graph"
# Storage concurrency
config.storage.max_concurrent_fetches = 4
config.storage.adaptive_concurrency = True
# Logging
config.observability.log_level = "INFO" # DEBUG | INFO | WARNING | ERROR
config.observability.log_format = "console" # console | json | quiet
For all available config fields and preset profiles (debug, throughput-oriented), see:
CLI Quick Reference
| Command | Purpose |
|---|---|
| rex-convert | Convert a PyTorch checkpoint to Rex format |
| rex-serve | Serve chunk files with HTTP Range support |
| rex-validate | Validate a manifest file |
| rex-inspect | Inspect a manifest (verbose chunk listing) |
| rex-benchmark | Run latency/throughput benchmarks |
| rex-run-demo | End-to-end demo run |
Package Guide
For full CLI and API reference, preset configuration profiles, and environment variable documentation, see:
For repository development details and architecture notes, use the repository documentation instead of package docs.
Project details
File details
Details for the file rex_framework-0.1.4.tar.gz.
File metadata
- Download URL: rex_framework-0.1.4.tar.gz
- Size: 145.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 42feea62b3a3db7697220dd0b0b06f113b21ef03d0e3e8ca9cf18c1cc9c4528d |
| MD5 | 508536c768e8584ae7ff8f4a68d99afe |
| BLAKE2b-256 | f8586975bc13e2da153d9834d566bef3cace6351e2cbf8df533d8e13daaf8db0 |
Provenance
The following attestation bundles were made for rex_framework-0.1.4.tar.gz:
- Publisher: pypi-publish.yml on rotsl/rex-framework
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: rex_framework-0.1.4.tar.gz
- Subject digest: 42feea62b3a3db7697220dd0b0b06f113b21ef03d0e3e8ca9cf18c1cc9c4528d
- Sigstore transparency entry: 1317195438
- Permalink: rotsl/rex-framework@58b9a1b99c559b06ea2de26be5af8a9a91f1bf68
- Branch / Tag: refs/tags/v0.1.4
- Owner: https://github.com/rotsl
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pypi-publish.yml@58b9a1b99c559b06ea2de26be5af8a9a91f1bf68
- Trigger Event: release
File details
Details for the file rex_framework-0.1.4-py3-none-any.whl.
File metadata
- Download URL: rex_framework-0.1.4-py3-none-any.whl
- Size: 178.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fedcf589bd5179a7040565c0d5f9b68daaa65881c66edc81dc8afe3c4cf7de66 |
| MD5 | 0f81b85107accaa98ce350b922373d54 |
| BLAKE2b-256 | db49439d4e8fb6f898abdd1756ceaebda8dc265ae6825685e3e2081884b840cf |
Provenance
The following attestation bundles were made for rex_framework-0.1.4-py3-none-any.whl:
- Publisher: pypi-publish.yml on rotsl/rex-framework
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: rex_framework-0.1.4-py3-none-any.whl
- Subject digest: fedcf589bd5179a7040565c0d5f9b68daaa65881c66edc81dc8afe3c4cf7de66
- Sigstore transparency entry: 1317195477
- Permalink: rotsl/rex-framework@58b9a1b99c559b06ea2de26be5af8a9a91f1bf68
- Branch / Tag: refs/tags/v0.1.4
- Owner: https://github.com/rotsl
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pypi-publish.yml@58b9a1b99c559b06ea2de26be5af8a9a91f1bf68
- Trigger Event: release