pixelflow
GPU Texture Reservoir Computing. A minimal, honest library that treats image pixels on a GPU texture as a fixed random reservoir — evolving under local cellular-automaton rules expressed as fragment shaders — and trains a linear readout on top for image classification, PDE solving, and time-series tasks.
Status: v0.1.0 alpha. The core is functional and tested; performance work and extra backends are in progress. This library is research software — it does not claim to beat state-of-the-art CNNs, but it is a clean, reproducible implementation of a legitimate reservoir-computing idea, and it is honest about what works and what doesn't.
What it is
Reservoir computing separates a dynamical system (the "reservoir") from a simple trainable readout. Only the readout is trained; the reservoir is fixed. If the reservoir has rich enough internal dynamics, a linear readout can solve surprisingly hard tasks.
pixelflow uses a 2D GPU texture as the reservoir state. Each pixel is a unit;
neighbours interact under a local rule implemented as a fragment shader (or an
equivalent NumPy kernel on CPU). After `steps` iterations, the final texture is
flattened into a feature vector, and an sklearn linear model (Ridge, Logistic)
is trained on those features.
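In outline, the transform step is simple. Here is a conceptual NumPy sketch (not pixelflow's internals; the random projection and the toy diffusion rule stand in for the library's encoders and CA rules):

```python
import numpy as np

def toy_transform(x, W=32, H=32, C=4, steps=8, seed=0):
    """Conceptual reservoir transform: encode -> iterate local rule -> flatten."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((W * H * C, x.size))  # fixed random input projection
    state = np.tanh(P @ x).reshape(H, W, C)       # initial texture state
    for _ in range(steps):
        # Toy local rule: blend each pixel with its 4-neighbourhood (diffusion).
        up, down = np.roll(state, -1, 0), np.roll(state, 1, 0)
        left, right = np.roll(state, -1, 1), np.roll(state, 1, 1)
        state = 0.5 * state + 0.125 * (up + down + left + right)
    return state.ravel()                          # feature vector for the readout
```

Only the readout that consumes these features is ever trained; the projection and the rule stay fixed.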
Three built-in rules:
| Rule | Dynamics | Use case |
|---|---|---|
| `diffusion_reaction` | Gray-Scott reaction-diffusion | Rich pattern formation — image features |
| `life_like` | Continuous Conway-style majority rule | Binary-ish inputs, texture classification |
| `wave` | Discrete wave equation (amplitude + velocity) | Eikonal / path-planning tasks |
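The `diffusion_reaction` rule follows the classic Gray-Scott system. One update tick looks roughly like this (a minimal NumPy sketch; the parameter values are illustrative, not pixelflow's defaults):

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """One Gray-Scott tick on 2D grids u, v with periodic boundaries."""
    def lap(a):  # 5-point Laplacian with wrap-around boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)
    uvv = u * v * v                                # reaction term
    u_next = u + Du * lap(u) - uvv + F * (1.0 - u)
    v_next = v + Dv * lap(v) + uvv - (F + k) * v
    return u_next, v_next
```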
Install
```
pip install pixelflow             # core + CPU backend
pip install pixelflow[gpu]        # adds moderngl + glfw (OpenGL 3.3 GPU backend)
pip install pixelflow[cuda]       # adds cupy-cuda12x (CUDA backend)
pip install pixelflow[datasets]   # adds torchvision / pillow for dataset loaders
pip install pixelflow[all]        # everything
```
Quickstart
```python
import numpy as np
from pixelflow import Reservoir, ReservoirConfig, RidgeReadout
from pixelflow.tasks.synthetic import two_moons

X_train, y_train = two_moons(n=500, noise=0.2, seed=0)
X_test, y_test = two_moons(n=200, noise=0.2, seed=1)

cfg = ReservoirConfig(
    width=32, height=32, channels=4,
    steps=8, rule="diffusion_reaction",
    input_encoding="project", seed=0,
)

res = Reservoir(cfg, backend="cpu")    # or backend="moderngl"
F_train = res.transform(X_train)       # (500, 32*32*4)
F_test = res.transform(X_test)

readout = RidgeReadout(alpha=1.0).fit(F_train, y_train)
print(f"accuracy: {readout.score(F_test, y_test):.3f}")
```
Run the bundled example:
```
python examples/quickstart.py
```
Backends
- `cpu` — pure NumPy. Always available. Reference implementation.
- `moderngl` — headless OpenGL 3.3 core via moderngl. Requires a GPU with working OpenGL drivers. Install with `pip install pixelflow[gpu]`.
- `cuda` — CuPy-backed CUDA implementation. Requires an NVIDIA GPU and a matching CUDA toolkit (12.x or 13.x). Install with `pip install pixelflow[cuda]`.
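On machines with uncertain drivers, a pragmatic pattern is to try the GPU backend first and fall back to CPU. A sketch, assuming backend construction raises an ordinary exception when OpenGL is unavailable (the API contract may specify this differently):

```python
from pixelflow import Reservoir, ReservoirConfig

cfg = ReservoirConfig(width=32, height=32, channels=4,
                      steps=8, rule="wave", input_encoding="project", seed=0)
try:
    res = Reservoir(cfg, backend="moderngl")  # headless OpenGL 3.3
except Exception:
    res = Reservoir(cfg, backend="cpu")       # NumPy reference fallback
```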
All three backends produce numerically equivalent outputs (CPU vs moderngl:
max abs diff < 1e-5; CPU vs CUDA: max abs diff < 1e-3), verified in
`tests/test_moderngl_backend.py` and `tests/test_cuda_backend.py`.
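The same check is easy to reproduce locally (a sketch mirroring what the tests assert, using the CPU-vs-moderngl tolerance quoted above):

```python
import numpy as np
from pixelflow import Reservoir, ReservoirConfig

cfg = ReservoirConfig(width=32, height=32, channels=4, steps=8,
                      rule="diffusion_reaction", input_encoding="project", seed=0)
X = np.random.default_rng(0).standard_normal((16, 64)).astype(np.float32)

F_cpu = Reservoir(cfg, backend="cpu").transform(X)
F_gl = Reservoir(cfg, backend="moderngl").transform(X)
assert np.max(np.abs(F_cpu - F_gl)) < 1e-5  # tolerance from the test suite
```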
Benchmarks
All numbers below are measured (raw JSON under benchmarks/results/) on
Windows 10 + Python 3.13, RTX 3090 + Ryzen 5950X. Losses vs. baseline are
reported as-is, never hidden.
MNIST (60k train / 10k test)
| Rule | Grid | Steps | Backend | Transform time | Test acc. | Raw-pixel baseline |
|---|---|---|---|---|---|---|
| `wave` | 32×32×4 | 4 | cpu | 24.15 s | 0.9281 | 0.9261 |
| `wave` | 32×32×4 | 4 | cuda (batched) | 5.81 s (4.2× faster) | 0.9281 | 0.9261 |
| `diffusion_reaction` | 32×32×4 | 8 | cpu | 72.43 s | 0.9213 | 0.9261 |
GPU speedup (batched CUDA v0.3, RTX 3090)
| Workload | CPU | moderngl | CUDA batched | Speedup vs CPU |
|---|---|---|---|---|
| N=1000, 64×64, 32 steps, `diffusion_reaction` | 5.00 s | 25.95 s | 0.28 s | 17.95× |
| N=2000, 32×32, 16 steps, `wave` | 2.13 s | 46.49 s | 0.17 s | 12.32× |
CIFAR-10 (10k grayscale subset)
| Rule | Grid | Steps | Test acc. | Raw-pixel baseline |
|---|---|---|---|---|
| `wave` | 32×32×4 | 6 | 0.2430 | 0.2521 |
Honest negative result: on grayscale CIFAR-10 with a 10k subset, the reservoir slightly underperforms the raw-pixel baseline. RGB + colour-aware encoding is a follow-up experiment.
Eikonal equation (reference vs wave reservoir)
Mean relative arrival-time error of the wave reservoir against a
fast-marching reference, across grid sizes, source positions, and speed fields:
| Grid | Source | Speed field | Mean rel. err. |
|---|---|---|---|
| 32×32 | center | constant | 0.064 |
| 64×64 | center | constant | 0.066 |
| 64×64 | off-center | constant | 0.313 |
| 64×64 | center | radial | 0.047 |
The reservoir is a crude wave simulator, not an Eikonal solver — the ~30% error in the off-center case is expected and documented.
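For context, the metric is a plain mean relative deviation from a fast-marching solution. A sketch of the reference and the metric, assuming scikit-fmm (`skfmm`) for the reference solution; extracting arrival times from the wave reservoir (e.g. by thresholding the simulated wavefront) is not shown:

```python
import numpy as np
import skfmm  # scikit-fmm, used here only to build the fast-marching reference

n = 64
phi = np.ones((n, n))
phi[n // 2, n // 2] = -1.0  # zero level set around the centre source
t_ref = skfmm.travel_time(phi, speed=np.ones((n, n)), dx=1.0)

def mean_rel_err(t_res, t_ref):
    """t_res: arrival times estimated from the wave reservoir (not shown here)."""
    mask = t_ref > 0  # exclude the source cell itself
    return float(np.mean(np.abs(t_res[mask] - t_ref[mask]) / t_ref[mask]))
```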
Why bother?
Reservoir computing has three practical strengths:
- Training is cheap — only the readout is trained (closed-form for Ridge; see the sketch after this list).
- The reservoir is reusable — same features serve multiple downstream tasks.
- Non-standard hardware is plausible — any substrate with rich local dynamics (optical systems, memristor arrays, analog chips) can act as a reservoir. GPU textures are a software twin of that.
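On the first point: with features F and one-hot targets Y, the Ridge readout has the normal-equations solution W = (FᵀF + αI)⁻¹FᵀY. A minimal sketch (equivalent to sklearn's Ridge up to intercept handling):

```python
import numpy as np

def fit_ridge(F, Y, alpha=1.0):
    """Closed-form ridge readout: W = (F^T F + alpha*I)^{-1} F^T Y."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + alpha * np.eye(d), F.T @ Y)

# Prediction is a single matrix product; for classification take the argmax:
# y_pred = (F_test @ fit_ridge(F_train, Y_train)).argmax(axis=1)
```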
pixelflow is not trying to beat ResNet on ImageNet. It's a clean
testbed for reservoir-computing research that happens to run fast on commodity
GPUs, and a pedagogical artifact for the "rendering is thinking" intuition.
Project layout
```
pixelflow/
├── pixelflow/
│   ├── core/       # Reservoir, ReservoirConfig, CA rules, encoders
│   ├── backends/   # cpu (NumPy), moderngl, cuda
│   ├── readouts/   # Ridge, Logistic (sklearn wrappers)
│   └── tasks/      # MNIST, Eikonal, synthetic
├── tests/          # pytest suite (CPU-only tests always run)
├── benchmarks/     # honest measurements with JSON outputs
├── examples/       # runnable demos
├── paper/          # draft paper + figures
└── docs/           # API contract, design notes
```
Citation
If you use pixelflow in research, please cite:
```bibtex
@software{angulo_pixelflow_2026,
  author  = {Angulo de Lafuente, Francisco},
  title   = {pixelflow: GPU Texture Reservoir Computing},
  year    = {2026},
  version = {0.1.0},
  url     = {https://github.com/Agnuxo1/pixelflow},
  license = {Apache-2.0}
}
```
License
Apache-2.0. See LICENSE.
Author
Francisco Angulo de Lafuente (GitHub: @Agnuxo1)
Acknowledgements
pixelflow distills a single clean idea from several years of experiments
across the NEBULA / NeuroCHIMERA / RED_NEURONAL_ANALOGICA project family.
The core reservoir-computing concept builds on classical work by Jaeger (Echo
State Networks) and Maass (Liquid State Machines); the GPU cellular-automaton
substrate is inspired by physical reservoir computing literature (Tanaka et
al. 2019) and practical fragment-shader-as-compute patterns.