# coren

Compute and resource normalization for decentralized memoization, built on machine capability detection.
## One answer

```rust
let v = cap.verdict(&cost);
// v.score > 0    => fetch from network (saves v.score seconds)
// v.score < 0    => compute locally   (saves -v.score seconds)
// v.score = +inf => must fetch   (insufficient RAM to compute)
// v.score = -inf => must compute (no network available)
```
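Concretely, the score reads as "seconds saved by fetching". A minimal plain-Python sketch of that decision rule (the boolean flags are hypothetical stand-ins; coren derives the hard constraints from measured capability):

```python
import math

def verdict_score(t_compute, t_fetch, fits_in_ram=True, has_network=True):
    """Positive score favors fetching; negative favors computing locally.
    Infinities encode hard constraints rather than time differences."""
    if not fits_in_ram:
        return math.inf    # must fetch: cannot compute locally
    if not has_network:
        return -math.inf   # must compute: nothing to fetch from
    return t_compute - t_fetch  # seconds saved by fetching

# With the t_compute / t_fetch values from the Rust example below:
assert math.isclose(verdict_score(0.088, 0.800), -0.712)  # compute wins
```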
## Two layers

**`FnCost`** (deterministic; all machines agree): four integers describing what a function needs. It uses only integer arithmetic, so it is bitwise identical on every architecture, OS, and CPU. For memoization, it is published alongside cached results so any node can verify it and decide independently.
| Field | Meaning |
|---|---|
| `flops` | Total arithmetic operations (W) |
| `mem_bytes` | Memory traffic, cold-cache model (Q) |
| `peak_mem` | Peak RAM footprint (M) |
| `result_bytes` | Output size: what gets cached/transferred (R) |
**`MachCap`** (local; measured): what this machine can do. Measured via micro-benchmarks (FMA loop, STREAM triad, disk I/O) and OS queries (NIC link speed). Produces a `Verdict`.
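Given a `FnCost` and measured rates, the `Verdict` timings follow a roofline-style model: compute time is bounded by whichever of arithmetic throughput or memory bandwidth is slower, and fetch time by link speed. A sketch under that assumption (the rates below are illustrative, not measured; the exact model coren uses may differ):

```python
def estimate(w, q, r, flops_per_s, mem_bytes_per_s, net_bytes_per_s):
    # Roofline: the slower of the two local limits dominates compute time.
    t_flops = w / flops_per_s
    t_mem = q / mem_bytes_per_s
    t_compute = max(t_flops, t_mem)
    bottleneck = "compute" if t_flops >= t_mem else "memory"
    t_fetch = r / net_bytes_per_s  # result bytes over the NIC
    return t_compute, t_fetch, bottleneck

# Illustrative desktop: 50 GFLOP/s, 20 GB/s RAM, 12.5 MB/s network,
# fed the sort example's W = 200M, Q = 1.2 GB, R = 10 MB.
t_c, t_f, neck = estimate(200e6, 1.2e9, 10e6, 50e9, 20e9, 12.5e6)
assert neck == "memory"  # 1.2 GB of traffic dominates 200 MFLOP of work
assert t_f > t_c         # fetching 10 MB over a slow link loses to computing
```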
### Rust

```rust
use coren::{FnCost, MachCap};

// Deterministic. Same on every machine.
let cost = FnCost::sort(1_000_000, 64, 64_000_000);
assert_eq!(cost.flops, 200_000_000); // exactly, always

// Pipelines compose.
let pipe = FnCost::scan(64_000_000, 0)
    .then(FnCost::sort(1_000_000, 64, 0))
    .then(FnCost::hash(64_000_000));

// This machine decides.
let cap = MachCap::read(".");
let v = cap.verdict(&cost);
println!("{}", v);            // "compute (saves 0.712s)"
println!("{}", v.score);      // -0.712
println!("{}", v.bottleneck); // "memory"
println!("{}", v.t_compute);  // 0.088
println!("{}", v.t_fetch);    // 0.800
```
## FnCost constructors

| Constructor | Meaning |
|---|---|
| `FnCost::new(W, Q, M, R)` | raw values |
| `FnCost::scan(n_bytes, R)` | linear scan |
| `FnCost::sort(n, item_bytes, R)` | merge sort |
| `FnCost::hash(n_bytes)` | crypto hash (R = 32) |
| `FnCost::matmul(m, n, k, R)` | dense GEMM |
| `FnCost::etl(rows, row_bytes, fpr, R)` | row processing |
| `FnCost::copy(size)` | file copy (W = 0) |
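Two of these models can be checked against numbers elsewhere in this README. The GEMM count (2mnk: a multiply plus an add per inner-product term) matches the CLI sample's `matmul 1k^3 / W = 2.0G`. The sort model below is a guess, not the library's documented formula: a fixed 10 integer ops per element per merge level happens to reproduce the Rust example's `assert_eq!(cost.flops, 200_000_000)` for n = 1,000,000:

```python
def ceil_log2(n: int) -> int:
    # Integer-only ceiling of log2(n): deterministic on every machine.
    return (n - 1).bit_length()

def sort_flops(n: int, ops_per_level: int = 10) -> int:
    # Hypothetical merge-sort model: n items over ceil(log2(n)) merge levels,
    # at an assumed fixed op count per item per level.
    return ops_per_level * n * ceil_log2(n)

def matmul_flops(m: int, n: int, k: int) -> int:
    # Dense GEMM: one multiply and one add per inner-product term.
    return 2 * m * n * k

assert sort_flops(1_000_000) == 200_000_000            # matches the Rust example
assert matmul_flops(1000, 1000, 1000) == 2_000_000_000 # matches the CLI's 2.0G
```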
## Combinators

| Combinator | Meaning |
|---|---|
| `a.then(b)` | sequential (W sums, M = max, R = last) |
| `a + b` | same as `then` |
| `a.par(b)` | parallel (W = max, M sums, R sums) |
| `a.repeat(k)` | k times (M unchanged) |
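The combinator algebra is simple enough to sketch directly. Below is a plain-Python mirror of the table; the table does not state how Q composes under `then`/`par` or how R composes under `repeat`, so summing Q and keeping the last iteration's R are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cost:
    # Hypothetical mirror of FnCost's four integers.
    w: int  # flops
    q: int  # mem_bytes
    m: int  # peak_mem
    r: int  # result_bytes

    def then(self, b: "Cost") -> "Cost":
        # Sequential: work and traffic sum, footprint is the max, output is the last stage's.
        return Cost(self.w + b.w, self.q + b.q, max(self.m, b.m), b.r)

    def par(self, b: "Cost") -> "Cost":
        # Parallel: critical path is the max work; footprints and outputs sum.
        return Cost(max(self.w, b.w), self.q + b.q, self.m + b.m, self.r + b.r)

    def repeat(self, k: int) -> "Cost":
        # k sequential runs: work and traffic scale, footprint and (assumed) R do not.
        return Cost(self.w * k, self.q * k, self.m, self.r)

a = Cost(100, 10, 50, 8)
b = Cost(200, 20, 30, 16)
assert a.then(b) == Cost(300, 30, 50, 16)
assert a.par(b) == Cost(200, 30, 80, 24)
```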
### Python

```python
from coren import FnCost, MachCap

cost = FnCost.sort(1_000_000, 64, 64_000_000)
cap = MachCap.read()
v = cap.verdict(cost)

if v.score > 0:
    fetch_from_cache()
else:
    compute_locally()

# Verdict is truthy when fetch is better:
if v:
    fetch_from_cache()
```
## Use cases

- **Decentralized memoization.** A function's `FnCost` is published with its cache entry. Every node computes the same `FnCost` for the same function and inputs, then independently decides compute vs fetch based on its own `MachCap`.
- **ETL buffer sizing.** `cap.etl_buffer_bytes(row_bytes)` returns how many bytes to buffer in RAM before flushing to disk, based on available memory and device class.
- **Cache TTL.** `cap.suggest_ttl(&cost, base_ttl)` scales TTL in proportion to the compute/fetch ratio, so expensive-to-recompute results live longer.
- **Data placement.** Compare `v.t_compute`, `v.t_fetch`, and `v.t_store` to decide the memory, disk, or network tier for a result.
- **Query planning.** Compare the `FnCost`s of different execution strategies (hash join vs sort-merge join) against the local machine's roofline.
- **Build parallelism.** `cap.etl_buffer_bytes` and `cap.compute_budget` help size worker pools and batch sizes for the local machine.
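The TTL use case, for instance, amounts to proportional scaling by the compute/fetch ratio. A sketch of that idea in plain Python (the clamping bounds are assumptions; coren's actual scaling may differ):

```python
def suggest_ttl(t_compute, t_fetch, base_ttl, lo=0.1, hi=10.0):
    # Expensive-to-recompute results live longer; cheap ones expire sooner.
    ratio = t_compute / t_fetch
    ratio = min(max(ratio, lo), hi)  # clamp to assumed bounds
    return base_ttl * ratio

assert suggest_ttl(8.0, 1.0, 60.0) == 480.0  # 8x costlier to recompute -> 8x TTL
assert suggest_ttl(0.5, 1.0, 60.0) == 30.0   # cheap to recompute -> shorter TTL
```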
## CLI

```
$ coren
coren [desktop]
...

verdicts (what should this machine do?)
task            W       Q       R       score   neck     action
sort 1M x 64B   200.0M  1.2GB   10.0MB  -0.712  memory   compute
matmul 1k^3     2.0G    22.9MB  10.0MB  -0.779  compute  compute

$ coren --json
```
## Dependencies

Rust: `sysinfo`, `serde`. Python: `pyo3` via `maturin`.