atomr-infer

Multi-runtime GPU + remote inference as a supervised actor system on the atomr actor runtime.

One supervised actor topology for every place a model can run. Local GPU runtimes (vLLM, TensorRT, ONNX Runtime, Candle, cudarc, mistral.rs) and managed APIs (OpenAI, Anthropic, Gemini, LiteLLM) sit under the same routing CRDT, the same supervision tree, the same backpressure story. A request doesn't know — and doesn't need to — whether it landed on an H100 two racks away or in another company's data center.

[dependencies]
inference = { version = "0.2", features = ["openai", "anthropic", "candle", "pipeline"] }

use inference::prelude::*;

// Same value object describes a vLLM-on-4×H100 replica or a Gemini Vertex
// deployment. The `runtime` field is the only thing that changes —
// and it's auto-inferred from the model name when omitted.
let dep = Deployment {
    name: "gpt-4o-mini".into(),
    model: "gpt-4o-mini".into(),
    runtime: None,
    runtime_config: None,
    gpus: None,
    replicas: 1,
    serving: Serving::default(),
    budget: None,
    idempotent: true,
};
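
A minimal sketch of putting that value object to work. `Cluster::local()` and `deploy` are illustrative names, not the verbatim API; the `deployment(...)` escape hatches are documented under Developer experience below.

// Sketch only: `Cluster::local()` and `deploy` are assumed names.
let cluster = Cluster::local().await?;
cluster.deploy(dep).await?;

// Direct handles for incident response (see "Escape hatches" below).
let breaker = cluster.deployment("gpt-4o-mini").circuit_breaker();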

Built on rakka for actor supervision, clustering, and CRDTs, and on rakka-accel for two-tier GPU supervision. Cost, latency, and reliability stop being three pipelines and become one.


Why

Production AI rarely runs only on owned hardware. Frontier models, burst capacity, and compliance edge cases all push work onto managed APIs. Bolting providers onto a separate retry / rate-limit / observability stack from your local GPU pool fragments the system — and the cracks are exactly where 3 a.m. pages come from.

| You'd otherwise hand-roll | atomr-infer gives you |
| --- | --- |
| One routing layer for local pools, another for the API SDK | Single routing CRDT: gpt-4o and llama-3.1-70b resolve through the same path |
| Per-process token buckets that 429 on cluster scale-out | RateLimiterActor over atomr_distributed_data::GCounter: one bucket, all nodes |
| Hand-written retry / breaker / backoff per provider | CircuitBreakerActor + jittered retry + content-filter triage, one strategy |
| Sticky CUDA-context recovery glued to async tasks | rakka_accel::error::device_supervisor_strategy() adopted unchanged |
| Cascade graphs duct-taped from threadpools and channels | InferenceCascade / DynamicBatchingServer / ModelReplicaPool actors |
| Credential rotation that drops in-flight traffic | RemoteSessionActor::rebuild drains old, routes new: zero dropped requests |
| A no-GPU egress server that still pulls cudarc transitively | --features remote-only: cudarc, rakka-accel, candle not in the graph |
| Cost guardrails as Slack alerts after the bill arrives | Budget { max_spend_per_hour_usd, on_exceeded: Reject } enforced at the actor |

Every concern that's normally a separate library or a separate incident is folded into one supervised graph with typed messages.


30-second tour

# Stand up an OpenAI-compatible gateway over real (or mocked) providers.
cargo run -p inference-cli --features all-remote -- serve --config demo.toml

# End-to-end demo (happy path / 429 retry / circuit-open) without
# spending a cent — wiremock under the hood.
cargo run --bin remote_only_demo

# Pure-remote binary, zero GPU deps in the graph.
cargo build -p inference --no-default-features --features remote-only

Architecture

The full design lives in docs/rustakka-inference-architecture-v4.md (1,459 lines, RFC v4). Short version:

                      [HTTP clients]
                            │
                            ▼
                   ApiGatewayActor                   runtime-agnostic
                            │ spawns one per request (inference-runtime)
                            ▼
                    RequestActor
                            │   ask(routing target)
                            ▼
                  DpCoordinatorActor                cluster-singleton
                            │   tell(AddRequest)
                            ▼
                  ┌─────────┴─────────────────────┐
                  ▼                               ▼
    EngineCoreActor (LOCAL)         RemoteEngineCoreActor (REMOTE)
    ┌────────────────────────────┐  ┌────────────────────────────┐
    │ scheduler/batcher          │  │ request queue (priority)   │
    │ kv_cache_mgr (LLM)         │  │ rate-limit-aware dispatch  │
    │ ModelExecutorActor         │  │ ┌─────────────────────────┐│
    │  ├─ WorkerActor            │  │ │ WorkerPool              ││
    │  │   └─ ContextActor       │  │ │  ├─ RemoteWorkerActor   ││
    │  │       ├─ ModelRunner    │  │ │  └─ RemoteWorkerActor   ││
    │  │       └─ rakka_accel::* │  │ └─────────────────────────┘│
    │  └─ ...                    │  │ uses:                      │
    └────────────────────────────┘  │   RateLimiterActor (CRDT)  │
                                    │   CircuitBreakerActor      │
                                    │   RemoteSessionActor       │
                                    └────────────────────────────┘

The local-GPU tier rides on top of rakka-accel's substrate: DeviceActor, ContextActor, GpuRef<T>, GpuDispatcher, PerActorAllocator, PlacementActor, BlasActor/CudnnActor/etc. We don't reinvent two-tier supervision; we adopt rakka_accel::error::device_supervisor_strategy() and add the inference-specific Box<dyn ModelRunner> slot on top.
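
For intuition, here is a hand-rolled decider that mirrors what device_supervisor_strategy() is described as doing (the GpuError variants and Directive enum are illustrative names, not the rakka-accel API; the Restart/Resume/Stop mapping is the documented one).

// Illustrative decider mirroring the documented policy; names are assumed.
fn decide(err: &GpuError) -> Directive {
    match err {
        GpuError::StickyContext { .. } => Directive::Restart, // rebuild the CUDA context
        GpuError::OutOfMemory { .. }   => Directive::Resume,  // actor state is still valid
        _                              => Directive::Stop,    // unrecoverable: escalate
    }
}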

The remote-network tier is HTTP/2 + SSE + connection pooling, with distributed rate limiting via atomr_distributed_data::GCounter and circuit breaking + retry/backoff inside inference-remote-core.
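
A sketch of the cluster-wide token-bucket idea, assuming grow-only increment/value methods on the GCounter; RateLimiterActor wraps the real thing.

// Sketch, not the RateLimiterActor API: a grow-only counter shared across
// nodes makes "tokens spent this window" a cluster-wide fact.
struct ClusterBucket {
    spent: GCounter,       // replicated via CRDT merge; only ever grows
    limit_per_window: u64,
}

impl ClusterBucket {
    fn try_acquire(&mut self, node_id: &str, tokens: u64) -> bool {
        // `value()` reflects every node's spend after the last merge.
        if self.spent.value() + tokens > self.limit_per_window {
            return false; // the shared per-key budget would be exceeded
        }
        self.spent.increment(node_id, tokens);
        true
    }
}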


Crate layout — pick what you need

The workspace is 18 crates plus xtask and the demo. Each layer is optional via Cargo features so you only compile what you use. Three recommended preset shapes:

| Preset | What you get | What you skip |
| --- | --- | --- |
| remote-only | OpenAI + Anthropic + Gemini + LiteLLM + pipeline + rate-limiting / circuit-breaker / cost tracking | All GPU code (cudarc, rakka-accel, candle, pyo3) |
| default-prod | vLLM + TensorRT + ORT + OpenAI + Anthropic + pipeline | Other GPU runtimes; LiteLLM; Gemini |
| all-runtimes | Everything | Nothing |

Detailed feature matrix: docs/feature-matrix.md.

inference                                              ← rollup; one dep, feature-flag-driven
   │
   ├── inference-core                                  ← traits, types, no actor / GPU / HTTP deps
   │
   ├── inference-runtime                               ← gateway, request, dp-coordinator,
   │      [feature: local-gpu → rakka-accel]              engine-core, worker (two-tier),
   │                                                    placement, deployment-mgr, metrics
   │
   ├── inference-remote-core                           ← rate limiter (GCounter CRDT),
   │                                                    circuit breaker, retry/backoff,
   │                                                    SSE parser, session lifecycle
   │
   ├── inference-runtime-{openai, anthropic, gemini,   ← per-provider ModelRunner + cost table
   │   litellm}
   │
   ├── inference-runtime-{vllm, tensorrt, ort, candle, ← per-backend ModelRunner; feature-gated
   │   cudarc, mistralrs}                                so absent system libs don't break the
   │                                                    workspace build
   │
   ├── inference-python-bridge                         ← PythonGpuBridge + python-pinned dispatcher
   │      [feature: python → pyo3]                       (will lift to rakka-accel F4 — see TODO)
   │
   ├── inference-pipeline                              ← rakka-streams + re-export of
   │      [feature: cuda-patterns → rakka-accel-patterns] DynamicBatchingServer / InferenceCascade /
   │                                                    ModelReplicaPool / FairShareScheduler /
   │                                                    ModelHotSwapServer / SpeculativeDecoder
   │
   ├── inference-testkit                               ← MockRunner + wiremock-backed provider
   │                                                    mocks (inject_429, inject_5xx, ...)
   │
   ├── inference-cli                                   ← `rakka serve --config <toml>`
   │
   └── inference-py-bindings                           ← PyO3 bindings for Cluster / Deployment
          [feature: python]

How to add only the runtimes you need

# Just OpenAI + Anthropic, no GPU code, no Python:
inference = { workspace = true, features = ["openai", "anthropic", "pipeline"] }
# Local Candle + remote OpenAI fallback:
inference = { workspace = true, features = ["candle", "openai", "pipeline"] }
# (Pulls rakka-accel + cudarc + candle-* automatically via the `candle` feature.)
# Everything, including the testkit:
inference = { workspace = true, features = ["all-runtimes", "testkit"] }

The rollup's job is exactly this: make Cargo.toml declare intent and let the feature graph compute deps.


What you don't have to think about

  • Two-tier GPU supervision. local-gpu wires WorkerActor / ContextActor to rakka_accel::error::device_supervisor_strategy(). Sticky-error CUDA contexts get Restart; OOM gets Resume; unrecoverable failures Stop. No panic-string parsing in your code.
  • Distributed rate limits. RateLimiterActor shares its token-spent log across cluster nodes through atomr_distributed_data::GCounter. Two members calling OpenAI on the same API key collectively respect the bucket — no surprise 429 storms on scale-out.
  • Typed circuit-breaker propagation. When the breaker opens, the caller sees InferenceError::CircuitOpen { provider, opened_at_unix_ms, retry_at_unix_ms }. Fall back, surface a 429, or queue, without knowing whether the bottleneck was GPU memory or a remote outage; a handling sketch follows this list.
  • Pipelines from blueprints, not threadpools. Enable cuda-patterns and inference::cuda_patterns::{DynamicBatchingServer, InferenceCascade, ModelReplicaPool, FairShareScheduler, ModelHotSwapServer, SpeculativeDecoder, MoeRouter} are one import away. Plug a closure into ModelRunner::execute and you've composed §9 of the architecture doc.
  • Compile-time dependency budgets. cargo build -p inference --features remote-only produces a binary with zero cudarc, zero rakka-accel, zero candle, zero pyo3 in the graph. Layered crates make the invariant load-bearing, not aspirational.
  • Hot credential rotation. RemoteSessionActor::rebuild drains in-flight requests on the old credential and routes new ones on the rotated value. Zero dropped traffic.
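
The typed breaker error makes the fallback decision ordinary Rust. A sketch, assuming a request call that returns Result<_, InferenceError>; the error variant and its fields are the documented ones, everything else here is illustrative.

// Sketch: `primary`, `fallback`, and `.infer(...)` are assumed names.
match primary.infer(req.clone()).await {
    Ok(resp) => Ok(resp),
    Err(InferenceError::CircuitOpen { provider, retry_at_unix_ms, .. }) => {
        // Breaker is open; don't hammer the provider. Fall back, or surface
        // a 429 with Retry-After derived from `retry_at_unix_ms`.
        eprintln!("breaker open for {provider}; retry at {retry_at_unix_ms}");
        fallback.infer(req).await
    }
    Err(other) => Err(other),
}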

Developer experience

Six layers, from surface down to depth

  1. Deployment value object. Most users never go deeper. runtime is auto-inferred from the model name when omitted (gpt-* → openai, claude-* → anthropic, …); a sketch of that mapping follows this list.
  2. Per-runtime configs. OpenAiConfig, AnthropicConfig, GeminiConfig (Vertex + AI Studio), LiteLlmConfig, CandleConfig, VllmConfig, etc. for explicit overrides.
  3. <config>.toml project files. rakka serve --config foo.toml reads the §11.3 schema and applies every [[deployment]].
  4. Python decorators. @inference_actor for orchestration actors that compose deployments without touching a GPU directly. Skeleton in inference-py-bindings.
  5. Escape hatches. cluster.deployment("gpt-4o").rate_limiter(), .circuit_breaker(), .workers() — direct ActorRefs for incident response (force_open, rebuild_session, etc.).
  6. Raw rakka actors. When you need it, you have the full actor system underneath; nothing the built-in actors do is privileged.
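
The layer-1 auto-inference behaves roughly like this. A sketch: the gpt-*/claude-* mappings are documented, the gemini-* one and the function itself are illustrative.

// Illustrative only; the real resolver lives in inference-core.
fn infer_runtime(model: &str) -> Option<&'static str> {
    if model.starts_with("gpt-") { Some("openai") }
    else if model.starts_with("claude-") { Some("anthropic") }
    else if model.starts_with("gemini-") { Some("gemini") } // assumed mapping
    else { None } // no match: set `runtime` explicitly (layer 2)
}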

Footgun-resistant by design

  • Secrets are typed. inference_core::SecretString (re-export of secrecy::SecretString) — won't Debug, won't Display, never appears in logs.
  • Rate-limit validation at deploy time. Catches a deployment claiming rpm = 100_000 against a free-tier API key with a typed error before the first user request hits.
  • Network egress checked at deploy time. The placement actor pings the provider from each chosen node before flipping the deployment to Serving.
  • Hot-swappable credentials. Updating the secret source triggers RemoteSessionActor::rebuild on the next pulse; in-flight requests drain on the old credential, new ones use the rotated value. Zero dropped traffic.
  • Cost guardrails. Budget { max_spend_per_hour_usd, on_exceeded: Reject } on a Deployment stops runaway provider spend at the actor boundary; see the sketch after this list.
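
Attaching the guardrail is one field on the Deployment. A sketch: the OnExceeded::Reject spelling is assumed from the documented on_exceeded: Reject shorthand.

// Sketch: `OnExceeded::Reject` is an assumed spelling of the documented policy.
let dep = Deployment {
    budget: Some(Budget {
        max_spend_per_hour_usd: 25.0,
        on_exceeded: OnExceeded::Reject, // refuse new requests, don't bill
    }),
    ..dep
};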

Verification

Every PR runs:

cargo build --workspace
cargo build -p inference --features remote-only          # zero GPU deps
cargo build -p inference --features cuda,cuda-patterns   # local + patterns
cargo build -p inference --features all-runtimes
cargo test --workspace
cargo run --bin remote_only_demo

The demo asserts the §13 Phase-1 + Phase-2c exit criteria end-to-end against a wiremock-driven OpenAI mock: happy-path streaming, 429 retry-after, and circuit-breaker open after consecutive 5xx.
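
A sketch of what a testkit-driven check looks like: inject_429 is the documented injection hook on the provider mocks, while the mock type name and the request helper are assumptions.

// Sketch only: `ProviderMock::start()` and `send_chat_via_gateway` are
// assumed names; `inject_429` is the documented injection hook.
#[tokio::test]
async fn retries_then_succeeds_after_429() {
    let mock = inference_testkit::ProviderMock::start().await;
    mock.inject_429(/* retry-after seconds */ 1);

    let resp = send_chat_via_gateway(&mock.base_url(), "hello").await;
    assert!(resp.is_ok()); // retry/backoff absorbed the injected 429
}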


Status

| Layer | Status |
| --- | --- |
| Foundation (inference-core) | ✅ stable surface; serde round-trips for every RuntimeConfig variant |
| Runtime-agnostic actors | ✅ gateway, request, dp-coordinator, engine-core, worker, placement, manager, metrics |
| Remote infrastructure | ✅ rate limiter (CRDT), strict variant (singleton), circuit breaker, retry, SSE, session |
| OpenAI / Anthropic / Gemini / LiteLLM | ✅ ModelRunner + wire types + error classification + pricing tables |
| Local Rust-native runtimes | 🟡 trait satisfied; forward-pass bodies are stubs pinned to the doc's §13 Phase 2b roadmap |
| vLLM / TensorRT FFI | 🟡 stubs that compile against the trait; full bodies land with §13 Phase 2a/2b |
| Pipeline (rakka-streams + cuda-patterns) | ✅ re-export shim + reference hybrid graph |
| CLI (rakka serve) | ✅ TOML config → ActorSystem → gateway; cost-report/rotate-credentials are stubs |
| Python bindings | 🟡 PyO3 skeleton (Cluster, Deployment); decorator surface deferred |

AI-assisted development

If you're using Claude Code, Cursor, or another AI coding assistant on a project that depends on atomr-infer, install our ai-skills bundle — seven skills covering quickstart, choosing a runtime, wiring remote providers, composing pipelines, deployment, typed-error troubleshooting, and extending with a new backend.

/plugin marketplace add rustakka/atomr-infer
/plugin install atomr-infer-ai-skills@atomr-infer

Each SKILL.md is a thin router into the canonical docs (this README, the per-crate READMEs, the architecture RFC) so the skills stay in sync with the code instead of restating API surfaces that belong in rustdoc. Other harnesses (Cursor, Codex CLI, Gemini CLI, Aider, etc.) have install instructions in ai-skills/README.md.

Companion bundles for the broader stack:

  • rakka ai-skills — actor design, supervision, persistence, clustering, Python bindings.
  • rakka-accel ai-skills — DeviceActor, kernel selection, two-tier GPU supervision, backend choice.

Install all three when you're building a service that uses rakka primitives, rakka-accel GPU acceleration, and atomr-infer runtimes.


Release management

Releases are fully automated. Land a feat: / fix: commit on main and the version-bump workflow tags vX.Y.Z; the release workflow fires on the tag, runs cargo xtask verify, builds binaries for five platforms, generates release notes from git log, and publishes the allowlisted crates to crates.io in dependency order with idempotent retry.

| Task | How |
| --- | --- |
| Bump + tag based on Conventional Commits | Auto on push to main via .github/workflows/version-bump.yml |
| Force a specific version | Release-As: x.y.z in the commit footer (example below) |
| Run the full release pipeline manually | Actions → Release → Run workflow |
| Dry-run before tagging | Actions → Release → Run workflow → dry_run: true |
| Inspect publishable vs gated crates | cargo xtask release-checklist |
| Audit anti-pattern regressions | cargo xtask audit / cargo xtask audit --check |
| Run the same checks CI runs | cargo xtask verify |
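
For example, a commit that lands a feature and pins the next version (Release-As is the documented footer; the subject line is illustrative):

feat(remote): add LiteLLM streaming passthrough

Release-As: 0.4.0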

Full operator runbook: RELEASING.md. Contributor guide: CONTRIBUTING.md.

License

Apache-2.0. See LICENSE once it lands; the workspace inherits the rakka project license.
