vPET-ABC

Fast, likelihood‑free PET kinetic modelling implemented in JAX
1 · What is this repository?
vpetabc is a pure‑Python re‑implementation of the vPET‑ABC framework [(Grazian et al., 2021)](https://ieeexplore.ieee.org/document/9875446/) (peer‑reviewed paper coming soon) for large‑scale dynamic PET kinetic modelling, written from the ground up in JAX.
Compared with the earlier CuPy version, the JAX rewrite:

- removes CUDA‑specific boilerplate: the same code runs on CPU, multi‑GPU, or TPU via XLA;
- exposes a clean, PyTorch‑like API centred on an abstract `KineticModel`;
- relies on vectorised primitives (`vmap`, `lax.scan`) so that even >40 M‑voxel datasets fit into a single JIT‑compiled graph;
- delivers further speed‑ups over the CuPy implementation;
- depends only on `jax`, `pandas`, and `tqdm` (no CuPy, no manual builds).
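As an illustration of that vectorisation pattern (a minimal sketch, not the library's actual code: `distance` and the linear toy model are hypothetical), a per‑draw computation can be batched over many parameter draws with `jax.vmap` and compiled once with `jax.jit`:

```python
import jax
import jax.numpy as jnp

def distance(theta, tac):
    # Hypothetical per-draw summary distance between a simulated curve
    # (here just a placeholder linear model) and one voxel TAC.
    sim = theta[0] * jnp.arange(tac.shape[0]) + theta[1]
    return jnp.sum((sim - tac) ** 2)

# Vectorise over the first axis of the parameter draws, share the TAC,
# and JIT-compile the whole batched computation into one XLA graph.
batched = jax.jit(jax.vmap(distance, in_axes=(0, None)))

thetas = jnp.ones((1000, 2))   # 1000 parameter draws
tac = jnp.zeros(10)            # one voxel time-activity curve
d = batched(thetas, tac)
print(d.shape)                 # (1000,)
```

The same code runs unchanged on CPU, GPU, or TPU; XLA picks the backend at runtime.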
2 · Repository layout
```
.
├── data/
│   ├── sample_2TCM.csv
│   └── sample_lpntPET.csv
├── dist/                   # wheels/sdists created by `python -m build`
├── example_usage.ipynb
├── pyproject.toml
├── README.md
└── src/
    └── vpetabc/
        ├── __init__.py     # package namespace
        ├── engine.py       # ABC engine + helpers
        ├── models.py       # TwoTissueModel, lpntPETModel, …
        ├── priors.py       # prior samplers
        └── utilities.py    # I/O + posterior utilities
```
| Module | Description |
|---|---|
| `engine.py` | `ABCRejection`, the fully vectorised, JIT‑compiled rejection‑ABC driver |
| `models.py` | `KineticModel` base class + `TwoTissueModel`, `lpntPETModel` implementations |
| `priors.py` | Uniform × Bernoulli prior samplers (`TwoTissuePrior`, `lpntPETPrior`) |
| `utilities.py` | `preprocess_table`, `get_conditional_posterior_mean`, misc. helpers |
3 · Installation
TL;DR:

```shell
pip install "vpet-abc[cuda]"
```

3.1 Stable release from PyPI

```shell
# create & activate a virtual environment
python -m venv .venv && source .venv/bin/activate

# CPU-only:
pip install vpet-abc

# NVIDIA GPUs (quote the extras spec; some shells, e.g. zsh, expand the brackets):
pip install "vpet-abc[cuda]"
```
The `jax[cuda]` wheels already bundle matching CUDA/cuDNN libraries; you only need an NVIDIA driver on Linux / Windows WSL. For TPU, Metal (macOS), or ROCm, see the official JAX installation guide.
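After installing, a quick sanity check confirms which backend JAX picked up (output varies by machine):

```python
import jax

print(jax.default_backend())   # e.g. 'cpu', 'gpu', or 'tpu'
print(jax.devices())           # the list of available accelerator devices
```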
3.2 Tested environments
| OS | Python | jax / jaxlib | Accelerator |
|---|---|---|---|
| macOS 15.5 (arm64) | 3.11.12 | 0.6.1 [CPU] | Apple M2 |
| Rocky Linux 8.10 | 3.9.2 | 0.4.30 [cuda] | NVIDIA V100 |
4 · Quick start
See `example_usage.ipynb` for an executable walkthrough, or run:

```python
import jax
import jax.numpy as jnp
import jax.random as jr
import pandas as pd

from vpetabc import *

df = pd.read_csv("data/sample_2TCM.csv", index_col=0)
Cp_fine, A, TACs, _ = preprocess_table(df)

lower_bounds = jnp.array([0, 0, 0, 0, 0])
upper_bounds = jnp.array([1, 1, 1, 1, 1])

engine = ABCRejection(
    TwoTissueModel(),
    prior_sampler=TwoTissuePrior,
    lower_bounds=lower_bounds,
    upper_bounds=upper_bounds,
    num_sims=200_000,
    accept_frac=0.005,
)

post = engine.run(jr.PRNGKey(0), TACs, Cp_fine, A, batch_size=50_000)
means, chosen = get_conditional_posterior_mean(post)
```
To run `preprocess_table()`, your CSV must be organised by rows as follows:
| Row (0-based) | Purpose | Shape |
|---|---|---|
| 0 | Mid-frame time of each dynamic frame | (F,) |
| 1 | Frame length | (F,) |
| 2 | Input function ($C_p$) / reference TAC ($C_r$) | (F,) |
| 3 to 3 + V−1 | One row per voxel TAC | (V, F) |
F – number of frames, V – number of voxels
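For example, a toy table in this layout can be built with only the standard library (the values are made up; the leading index column matches the `index_col=0` used when reading the CSV above):

```python
import csv
import io

F, V = 4, 2                        # frames, voxels (toy sizes)
rows = [
    [0.5, 1.5, 2.5, 3.5],          # row 0: mid-frame time of each frame
    [1.0, 1.0, 1.0, 1.0],          # row 1: frame lengths
    [0.0, 2.0, 1.0, 0.5],          # row 2: input function Cp
    [0.0, 0.4, 0.6, 0.5],          # row 3: voxel 0 TAC
    [0.0, 0.3, 0.5, 0.45],         # row 4: voxel 1 TAC
]

buf = io.StringIO()
writer = csv.writer(buf)
for i, row in enumerate(rows):
    writer.writerow([i] + row)     # first column = row index

print(buf.getvalue().splitlines()[0])   # → 0,0.5,1.5,2.5,3.5
```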
5 · Extending the framework
- Define your kinetic model:

```python
from functools import partial
import jax

class MyModel(KineticModel):
    @partial(jax.jit, static_argnums=(0,))
    def simulate(self, θ, Cp, dt):
        # return Ct(t) as a (T_fine,) array
        ...
```

- Write a prior (note that `jax.random.uniform` takes its bounds as the `minval`/`maxval` keyword arguments; `P` is the number of model parameters):

```python
import jax.random as jr

def MyPrior(key, n, lows, highs):
    return jr.uniform(key, (n, P), minval=lows, maxval=highs)
```

- Pass `MyModel` and `MyPrior` to `ABCRejection`.
Batching, GPU kernels, and distance evaluation are handled automatically.
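Putting the three steps together, a minimal end‑to‑end sketch might look like the following. This is a hypothetical one‑tissue model with an Euler‑discretised update; the `KineticModel` stub below only makes the sketch self‑contained, and in real use you would subclass the base class exported by `vpetabc` instead:

```python
from functools import partial
import jax
import jax.numpy as jnp
import jax.random as jr

class KineticModel:          # stand-in for vpetabc's base class
    pass

class OneTissueModel(KineticModel):
    """Toy one-tissue model: dCt/dt = K1*Cp - k2*Ct, with θ = (K1, k2)."""

    @partial(jax.jit, static_argnums=(0,))
    def simulate(self, θ, Cp, dt):
        K1, k2 = θ
        decay = jnp.exp(-k2 * dt)

        def step(Ct, Cp_t):
            Ct_next = Ct * decay + K1 * Cp_t * dt   # Euler-style update
            return Ct_next, Ct_next

        _, Ct = jax.lax.scan(step, jnp.zeros(()), Cp)
        return Ct                                   # shape (T_fine,)

def OneTissuePrior(key, n, lows, highs):
    # n independent draws of the P = 2 parameters, each within its bounds
    return jr.uniform(key, (n, 2), minval=lows, maxval=highs)

Ct = OneTissueModel().simulate(jnp.array([0.1, 0.05]), jnp.ones(100), 0.5)
draws = OneTissuePrior(jr.PRNGKey(0), 5, jnp.zeros(2), jnp.ones(2))
print(Ct.shape, draws.shape)   # (100,) (5, 2)
```

The `lax.scan` keeps the whole time loop inside one compiled graph, which is what lets the engine batch it across voxels and parameter draws.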
6 · Benchmarks
TBD. Current estimates suggest that inference on 4.4 million voxels with a simulation size of 10,000,000 takes no more than 2 hours on 4 A100 GPUs, or about 11.8 hours on a single V100 GPU.
7 · Citation
TBA soon.
8 · Known Issues
- Stale GPU allocations after an interrupted run: if a notebook cell or script is killed part-way through execution, the CUDA context can remain resident, leaving most of the GPU memory "in use". Subsequent calls then fail with "CUDA out of memory" even though no computation is running. Work-around: restart the Python/Jupyter kernel (or the entire Python process); this releases the orphaned context and frees the GPU memory. A full system reboot is not required. This is a limitation of JAX + XLA rather than of vpetabc.
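If memory problems recur, JAX's GPU preallocation behaviour can also be tuned through environment variables (set before the Python process starts; see JAX's GPU memory allocation documentation):

```shell
# By default XLA preallocates ~75% of GPU memory at startup.
# Allocating on demand instead makes stale-allocation issues easier to spot:
export XLA_PYTHON_CLIENT_PREALLOCATE=false
```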
- JAX-Metal (macOS GPU) is not yet supported; this is under investigation.
9 · Licence
vpetabc is released under the MIT Licence (see LICENCE).
The sample dataset is provided for non‑commercial research use only.