
Machine capability detection and compute normalization

coren

Compute and resource normalization for decentralized memoization.

One answer

let v = cap.verdict(&cost);
// v.score > 0  =>  fetch from network  (saves v.score seconds)
// v.score < 0  =>  compute locally     (saves -v.score seconds)
// v.score = +inf  =>  must fetch  (insufficient RAM to compute)
// v.score = -inf  =>  must compute  (no network available)
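The score is the estimated seconds saved by fetching instead of computing. A minimal sketch of how such a score could fall out of a roofline model (illustrative Python with assumed machine numbers; the real library measures these via micro-benchmarks and OS queries, as described below):

```python
import math
from dataclasses import dataclass

# Hypothetical mock: field names mirror the FnCost API, but the
# formulas and machine numbers below are illustrative assumptions.
@dataclass
class FnCost:
    flops: int         # W: total arithmetic operations
    mem_bytes: int     # Q: memory traffic, cold-cache model
    peak_mem: int      # M: peak RAM footprint
    result_bytes: int  # R: output size

def verdict_score(cost, flops_per_s, mem_bw, net_bw, ram_bytes):
    """score > 0: fetch saves `score` seconds; score < 0: compute saves -score."""
    if cost.peak_mem > ram_bytes:
        return math.inf    # insufficient RAM to compute: must fetch
    if net_bw is None:
        return -math.inf   # no network available: must compute
    # Roofline: compute time is bounded by the slower of arithmetic and memory.
    t_compute = max(cost.flops / flops_per_s, cost.mem_bytes / mem_bw)
    t_fetch = cost.result_bytes / net_bw
    return t_compute - t_fetch

cost = FnCost(flops=200_000_000, mem_bytes=1_200_000_000,
              peak_mem=64_000_000, result_bytes=10_000_000)
# Assumed machine: 10 GFLOP/s, 20 GB/s memory, 12.5 MB/s network, 16 GB RAM.
score = verdict_score(cost, 10e9, 20e9, 12.5e6, 16_000_000_000)
# score < 0 here, so this machine would compute locally.
```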

Two layers

FnCost (deterministic, all machines agree): four integers describing what a function needs. Uses only integer arithmetic. Bitwise identical on every architecture, OS, and CPU. For memoization: published alongside cached results so any node can verify and decide independently.

Field         Meaning
flops         Total arithmetic operations (W)
mem_bytes     Memory traffic, cold-cache model (Q)
peak_mem      Peak RAM footprint (M)
result_bytes  Output size, what gets cached/transferred (R)

MachCap (local, measured): what this machine can do. Measured via micro-benchmarks (FMA loop, STREAM triad, disk I/O) and OS queries (NIC link speed). Produces a Verdict.

Rust

use coren::{FnCost, MachCap};

// Deterministic. Same on every machine.
let cost = FnCost::sort(1_000_000, 64, 64_000_000);
assert_eq!(cost.flops, 200_000_000); // exactly, always

// Pipelines compose.
let pipe = FnCost::scan(64_000_000, 0)
    .then(FnCost::sort(1_000_000, 64, 0))
    .then(FnCost::hash(64_000_000));

// This machine decides.
let cap = MachCap::read(".");
let v = cap.verdict(&cost);

println!("{}", v);           // "compute (saves 0.712s)"
println!("{}", v.score);     // -0.712
println!("{}", v.bottleneck);// "memory"
println!("{}", v.t_compute); // 0.088
println!("{}", v.t_fetch);   // 0.800

FnCost constructors

FnCost::new(W, Q, M, R)               raw values
FnCost::scan(n_bytes, R)               linear scan
FnCost::sort(n, item_bytes, R)         merge sort
FnCost::hash(n_bytes)                  crypto hash (R=32)
FnCost::matmul(m, n, k, R)             dense GEMM
FnCost::etl(rows, row_bytes, fpr, R)   row processing
FnCost::copy(size)                     file copy (W=0)

Combinators

a.then(b)    sequential (W sums, M = max, R = last)
a + b        same as then
a.par(b)     parallel (W = max, M sums, R sums)
a.repeat(k)  k times (M unchanged)
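The algebra above fits in a few lines. This is an illustrative Python mock, not the library's implementation; the Q (mem_bytes) behavior of then/par and the scaling in repeat are assumptions, since the table only pins down W, M, and R:

```python
from dataclasses import dataclass

# Mock of the documented combinator rules (w/q/m/r = W/Q/M/R).
@dataclass(frozen=True)
class Cost:
    w: int  # flops
    q: int  # mem_bytes
    m: int  # peak_mem
    r: int  # result_bytes

    def then(self, b):
        # Sequential: work sums, footprints don't coexist, output is the last stage's.
        # Assumption: memory traffic (Q) also sums.
        return Cost(self.w + b.w, self.q + b.q, max(self.m, b.m), b.r)

    def par(self, b):
        # Parallel: stages overlap, so W is the max; footprints and outputs coexist.
        return Cost(max(self.w, b.w), self.q + b.q, self.m + b.m, self.r + b.r)

    def repeat(self, k):
        # Assumption: k chained then()s, so W and Q scale, M and R are unchanged.
        return Cost(self.w * k, self.q * k, self.m, self.r)

a = Cost(100, 10, 50, 8)
b = Cost(300, 20, 40, 16)
assert a.then(b) == Cost(400, 30, 50, 16)
assert a.par(b) == Cost(300, 30, 90, 24)
assert a.repeat(3) == Cost(300, 30, 50, 8)
```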

Python

from coren import FnCost, MachCap

cost = FnCost.sort(1_000_000, 64, 64_000_000)
cap = MachCap.read()
v = cap.verdict(cost)

if v.score > 0:
    fetch_from_cache()
else:
    compute_locally()

# Verdict is truthy when fetch is better:
if v:
    fetch_from_cache()

Use cases

Decentralized memoization. A function's FnCost is published with its cache entry. Every node computes the same FnCost for the same function and inputs, then independently decides compute vs fetch based on its own MachCap.

ETL buffer sizing. cap.etl_buffer_bytes(row_bytes) returns how many bytes to buffer in RAM before flushing to disk, based on available memory and device class.
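A plausible sketch of such a sizing rule (the real heuristic also weighs device class, which is omitted here; the RAM fraction is an assumption):

```python
def etl_buffer_bytes(row_bytes, avail_ram, fraction=0.25):
    """Hypothetical sizing rule: buffer a bounded fraction of available
    RAM, rounded down to a whole number of rows, never less than one row."""
    budget = int(avail_ram * fraction)
    rows = max(1, budget // row_bytes)
    return rows * row_bytes

# 64-byte rows, 1 MB of available RAM -> buffer a whole-row chunk of ~256 kB.
buf = etl_buffer_bytes(64, 1_000_000)
```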

Cache TTL. cap.suggest_ttl(&cost, base_ttl) scales TTL proportional to the compute/fetch ratio. Expensive-to-recompute results live longer.
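One way such scaling could look (illustrative only; the library's exact rule and clamping are not shown here):

```python
def suggest_ttl(base_ttl, t_compute, t_fetch, cap=100.0):
    """Hypothetical TTL rule: scale base_ttl by the compute/fetch ratio,
    clamped so cheap results keep base_ttl and expensive ones are bounded."""
    if t_fetch <= 0:
        return base_ttl * cap
    ratio = t_compute / t_fetch
    return base_ttl * min(cap, max(1.0, ratio))

# Recompute is 10x the fetch cost -> keep the entry 10x longer.
ttl = suggest_ttl(60, t_compute=10.0, t_fetch=1.0)
```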

Data placement. Compare v.t_compute, v.t_fetch, and v.t_store to decide memory vs disk vs network tier for a result.

Query planning. Compare FnCosts of different execution strategies (hash join vs sort-merge join) on the local machine's roofline.
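A sketch of that comparison, with made-up strategy costs and machine numbers (both are assumptions for illustration):

```python
def t_roofline(flops, mem_bytes, flops_per_s, mem_bw):
    # Execution time on a simple roofline: bounded by the slower of
    # arithmetic throughput and memory bandwidth.
    return max(flops / flops_per_s, mem_bytes / mem_bw)

# Assumed (W, Q) per strategy: hash join does less arithmetic but more
# memory traffic; sort-merge does more work on fewer bytes.
hash_join  = (4_000_000, 800_000_000)
sort_merge = (9_000_000, 500_000_000)

flops_per_s, mem_bw = 10e9, 20e9  # assumed local machine
t_hash = t_roofline(*hash_join, flops_per_s, mem_bw)
t_sort = t_roofline(*sort_merge, flops_per_s, mem_bw)
plan = "hash join" if t_hash < t_sort else "sort-merge join"
```

On this (memory-bound) machine the byte-light sort-merge plan wins; a machine with more bandwidth relative to compute could flip the choice.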

Build parallelism. cap.etl_buffer_bytes and cap.compute_budget help size worker pools and batch sizes for the local machine.

CLI

$ coren
coren  [desktop]
  ...
  verdicts (what should this machine do?)
    task                           W         Q         R    score       neck action
    sort 1M x 64B             200.0M     1.2GB    10.0MB   -0.712   memory compute
    matmul 1k^3                 2.0G    22.9MB    10.0MB   -0.779  compute compute

$ coren --json

Dependencies

Rust: sysinfo, serde. Python: pyo3 via maturin.
