
Multi-runtime GPU + remote inference as a supervised actor system on the rakka actor runtime.


rakka-inference

One supervised actor topology for every place a model can run. Local GPU runtimes (vLLM, TensorRT, ONNX Runtime, Candle, cudarc, mistral.rs) and managed APIs (OpenAI, Anthropic, Gemini, LiteLLM) sit under the same routing CRDT, the same supervision tree, the same backpressure story. A request doesn't know — and doesn't need to — whether it landed on an H100 two racks away or in another company's data center.

[dependencies]
inference = { version = "0.2", features = ["openai", "anthropic", "candle", "pipeline"] }

use inference::prelude::*;

// Same value object describes a vLLM-on-4×H100 replica or a Gemini Vertex
// deployment. The `runtime` field is the only thing that changes —
// and it's auto-inferred from the model name when omitted.
let dep = Deployment {
    name: "gpt-4o-mini".into(),
    model: "gpt-4o-mini".into(),
    runtime: None,
    runtime_config: None,
    gpus: None,
    replicas: 1,
    serving: Serving::default(),
    budget: None,
    idempotent: true,
};

Built on rakka for actor supervision, clustering, and CRDTs, and on rakka-accel for two-tier GPU supervision. Cost, latency, and reliability stop being three pipelines and become one.


Why

Production AI rarely runs only on owned hardware. Frontier models, burst capacity, and compliance edge cases all push work onto managed APIs. Bolting providers onto a separate retry / rate-limit / observability stack from your local GPU pool fragments the system — and the cracks are exactly where 3 a.m. pages come from.

What you'd otherwise hand-roll, and what rakka-inference gives you instead:

  • One routing layer for local pools, another for the API SDK → single routing CRDT — gpt-4o and llama-3.1-70b resolve through the same path.
  • Per-process token buckets that 429 on cluster scale-out → RateLimiterActor over rakka_distributed_data::GCounter — one bucket, all nodes.
  • Hand-written retry / breaker / backoff per provider → CircuitBreakerActor + jittered retry + content-filter triage, one strategy.
  • Sticky CUDA-context recovery glued to async tasks → rakka_accel::error::device_supervisor_strategy() adopted unchanged.
  • Cascade graphs duct-taped from threadpools and channels → InferenceCascade / DynamicBatchingServer / ModelReplicaPool actors.
  • Credential rotation that drops in-flight traffic → RemoteSessionActor::rebuild drains old, routes new — zero dropped requests.
  • A no-GPU egress server that still pulls cudarc transitively → --features remote-only keeps cudarc, rakka-accel, and candle out of the graph.
  • Cost guardrails as Slack alerts after the bill arrives → Budget { max_spend_per_hour_usd, on_exceeded: Reject } enforced at the actor.

Every concern that's normally a separate library or a separate incident is folded into one supervised graph with typed messages.
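One plausible shape for the jittered retry mentioned above is exponential backoff with "full jitter": sleep a uniform amount between zero and an exponentially growing, capped ceiling. A minimal std-only sketch; the function name and the toy LCG are ours, not the crate's actual retry policy:

```rust
use std::time::Duration;

/// Exponential backoff with full jitter: sleep uniformly in
/// [0, min(cap, base * 2^attempt)].
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64, seed: &mut u64) -> Duration {
    // Exponential ceiling, clamped so late attempts don't overflow or explode.
    let ceiling = cap_ms.min(base_ms.saturating_mul(1u64 << attempt.min(16)));
    // Tiny LCG keeps the sketch std-only; real code would use the `rand` crate.
    *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    Duration::from_millis(*seed % (ceiling + 1))
}

fn main() {
    let mut seed = 42u64;
    for attempt in 0..5 {
        let cap = 10_000u64.min(100u64 << attempt);
        let d = backoff_delay(attempt, 100, 10_000, &mut seed);
        println!("attempt {attempt}: retry after {d:?}");
        // Each delay stays under the exponential ceiling for that attempt.
        assert!(d.as_millis() as u64 <= cap);
    }
}
```

Full jitter is the usual choice here because it decorrelates retries across clients, which is exactly what you want when many replicas hit the same rate-limited provider.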


30-second tour

# Stand up an OpenAI-compatible gateway over real (or mocked) providers.
cargo run -p inference-cli --features all-remote -- serve --config demo.toml

# End-to-end demo (happy path / 429 retry / circuit-open) without
# spending a cent — wiremock under the hood.
cargo run --bin remote_only_demo

# Pure-remote binary, zero GPU deps in the graph.
cargo build -p inference --no-default-features --features remote-only

Architecture

The full design lives in docs/rustakka-inference-architecture-v4.md (1,459 lines, RFC v4). Short version:

                      [HTTP clients]
                            │
                            ▼
                   ApiGatewayActor                   runtime-agnostic
                            │ spawns one per request (inference-runtime)
                            ▼
                    RequestActor
                            │   ask(routing target)
                            ▼
                  DpCoordinatorActor                cluster-singleton
                            │   tell(AddRequest)
                            ▼
            ┌───────────────┴───────────────┐
            ▼                               ▼
   EngineCoreActor (LOCAL)            RemoteEngineCoreActor (REMOTE)
   ┌────────────────────────────┐     ┌────────────────────────────┐
   │ scheduler/batcher          │     │ request queue (priority)   │
   │ kv_cache_mgr (LLM)         │     │ rate-limit-aware dispatch  │
   │ ModelExecutorActor         │     │ ┌─────────────────────────┐│
   │   ├─ WorkerActor           │     │ │ WorkerPool              ││
   │   │   └─ ContextActor      │     │ │  ├─ RemoteWorkerActor   ││
   │   │       ├─ ModelRunner   │     │ │  └─ RemoteWorkerActor   ││
   │   │       └─ rakka_accel::*│     │ └─────────────────────────┘│
   │   └─ ...                   │     │ uses:                      │
   └────────────────────────────┘     │   RateLimiterActor (CRDT)  │
                                      │   CircuitBreakerActor      │
                                      │   RemoteSessionActor       │
                                      └────────────────────────────┘

The local-GPU tier rides on top of rakka-accel's substrate: DeviceActor, ContextActor, GpuRef<T>, GpuDispatcher, PerActorAllocator, PlacementActor, BlasActor/CudnnActor/etc. We don't reinvent two-tier supervision; we adopt rakka_accel::error::device_supervisor_strategy() and add the inference-specific Box<dyn ModelRunner> slot on top.

The remote-network tier is HTTP/2 + SSE + connection pooling, with distributed rate limiting via rakka_distributed_data::GCounter and circuit breaking + retry/backoff inside inference-remote-core.
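The GCounter behind that distributed rate limiter is small enough to sketch: each node increments only its own slot, merge takes the per-node maximum, and the value is the sum of all slots, which is why two nodes spending against one API key converge on the same cluster-wide total no matter the gossip order. A toy version of the idea (not the rakka_distributed_data type):

```rust
use std::collections::HashMap;

/// Grow-only counter CRDT: one slot per node, merge = element-wise max,
/// value = sum. Increments commute, so replicas converge.
#[derive(Clone, Default, Debug)]
struct GCounter {
    counts: HashMap<String, u64>, // node id -> tokens spent by that node
}

impl GCounter {
    fn increment(&mut self, node: &str, by: u64) {
        *self.counts.entry(node.to_string()).or_insert(0) += by;
    }

    fn merge(&mut self, other: &GCounter) {
        for (node, &n) in &other.counts {
            let slot = self.counts.entry(node.clone()).or_insert(0);
            *slot = (*slot).max(n); // never loses a locally observed increment
        }
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}

fn main() {
    // Two cluster members spend tokens against the same provider key...
    let (mut a, mut b) = (GCounter::default(), GCounter::default());
    a.increment("node-a", 400);
    b.increment("node-b", 250);
    // ...gossip their state, and both converge on the cluster-wide total.
    a.merge(&b);
    b.merge(&a);
    assert_eq!(a.value(), 650);
    assert_eq!(b.value(), 650);
}
```

Because merge is idempotent and commutative, the token-bucket check "have we collectively spent less than the limit?" stays correct under message reordering and duplication.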


Crate layout — pick what you need

The workspace is 18 crates plus xtask and the demo. Each layer is optional via Cargo features so you only compile what you use. Three recommended preset shapes:

  • remote-only: OpenAI + Anthropic + Gemini + LiteLLM + pipeline + rate-limiting / circuit-breaker / cost tracking. Skips all GPU code (cudarc, rakka-accel, candle, pyo3).
  • default-prod: vLLM + TensorRT + ORT + OpenAI + Anthropic + pipeline. Skips the other GPU runtimes, LiteLLM, and Gemini.
  • all-runtimes: everything; skips nothing.

Detailed feature matrix: docs/feature-matrix.md.

inference                                              ← rollup; one dep, feature-flag-driven
   │
   ├── inference-core                                  ← traits, types, no actor / GPU / HTTP deps
   │
   ├── inference-runtime                               ← gateway, request, dp-coordinator,
   │      [feature: local-gpu → rakka-accel]              engine-core, worker (two-tier),
   │                                                    placement, deployment-mgr, metrics
   │
   ├── inference-remote-core                           ← rate limiter (GCounter CRDT),
   │                                                    circuit breaker, retry/backoff,
   │                                                    SSE parser, session lifecycle
   │
   ├── inference-runtime-{openai, anthropic, gemini,   ← per-provider ModelRunner + cost table
   │   litellm}
   │
   ├── inference-runtime-{vllm, tensorrt, ort, candle, ← per-backend ModelRunner; feature-gated
   │   cudarc, mistralrs}                                so absent system libs don't break the
   │                                                    workspace build
   │
   ├── inference-python-bridge                         ← PythonGpuBridge + python-pinned dispatcher
   │      [feature: python → pyo3]                       (will lift to rakka-accel F4 — see TODO)
   │
   ├── inference-pipeline                              ← rakka-streams + re-export of
   │      [feature: cuda-patterns → rakka-accel-patterns] DynamicBatchingServer / InferenceCascade /
   │                                                    ModelReplicaPool / FairShareScheduler /
   │                                                    ModelHotSwapServer / SpeculativeDecoder
   │
   ├── inference-testkit                               ← MockRunner + wiremock-backed provider
   │                                                    mocks (inject_429, inject_5xx, ...)
   │
   ├── inference-cli                                   ← `rakka serve --config <toml>`
   │
   └── inference-py-bindings                           ← PyO3 bindings for Cluster / Deployment
          [feature: python]

How to add only the runtimes you need

# Just OpenAI + Anthropic, no GPU code, no Python:
inference = { workspace = true, features = ["openai", "anthropic", "pipeline"] }
# Local Candle + remote OpenAI fallback:
inference = { workspace = true, features = ["candle", "openai", "pipeline"] }
# (Pulls rakka-accel + cudarc + candle-* automatically via the `candle` feature.)
# Everything, including the testkit:
inference = { workspace = true, features = ["all-runtimes", "testkit"] }

The rollup's job is exactly this: make Cargo.toml declare intent and let the feature graph compute deps.


What you don't have to think about

  • Two-tier GPU supervision. local-gpu wires WorkerActor / ContextActor to rakka_accel::error::device_supervisor_strategy(). Sticky-error CUDA contexts get Restart; OOM gets Resume; unrecoverable failures Stop. No panic-string parsing in your code.
  • Distributed rate limits. RateLimiterActor shares its token-spent log across cluster nodes through rakka_distributed_data::GCounter. Two members calling OpenAI on the same API key collectively respect the bucket — no surprise 429 storms on scale-out.
  • Typed circuit-breaker propagation. When the breaker opens, the caller sees InferenceError::CircuitOpen { provider, opened_at_unix_ms, retry_at_unix_ms }. Fall back, surface a 429, or queue — without knowing whether the bottleneck was GPU memory or a remote outage.
  • Pipelines from blueprints, not threadpools. Enable cuda-patterns and inference::cuda_patterns::{DynamicBatchingServer, InferenceCascade, ModelReplicaPool, FairShareScheduler, ModelHotSwapServer, SpeculativeDecoder, MoeRouter} are one import away. Plug a closure into ModelRunner::execute and you've composed §9 of the architecture doc.
  • Compile-time dependency budgets. cargo build -p inference --features remote-only produces a binary with zero cudarc, zero rakka-accel, zero candle, zero pyo3 in the graph. Layered crates make the invariant load-bearing, not aspirational.
  • Hot credential rotation. RemoteSessionActor::rebuild drains in-flight requests on the old credential and routes new ones on the rotated value. Zero dropped traffic.
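The typed circuit breaker above follows the classic three-state machine: closed counts consecutive failures, open rejects locally until a cooldown elapses, half-open admits a single probe. A compact sketch of the pattern; the threshold, cooldown, and names are illustrative, not CircuitBreakerActor's actual configuration:

```rust
/// Minimal circuit-breaker state machine.
#[derive(Debug)]
enum Breaker {
    Closed { failures: u32 },
    Open { until_ms: u64 },
    HalfOpen,
}

impl Breaker {
    const THRESHOLD: u32 = 3;      // consecutive failures before opening
    const COOLDOWN_MS: u64 = 5_000;

    /// May this request go out? Transitions Open -> HalfOpen after cooldown.
    fn allow(&mut self, now_ms: u64) -> bool {
        match *self {
            Breaker::Closed { .. } | Breaker::HalfOpen => true,
            Breaker::Open { until_ms } if now_ms >= until_ms => {
                *self = Breaker::HalfOpen; // cooldown over: admit one probe
                true
            }
            Breaker::Open { .. } => false,
        }
    }

    /// Record the outcome of a request.
    fn record(&mut self, ok: bool, now_ms: u64) {
        if ok {
            *self = Breaker::Closed { failures: 0 };
            return;
        }
        let failures = match *self {
            Breaker::Closed { failures } => failures + 1,
            _ => Self::THRESHOLD, // a failed half-open probe reopens immediately
        };
        *self = if failures >= Self::THRESHOLD {
            Breaker::Open { until_ms: now_ms + Self::COOLDOWN_MS }
        } else {
            Breaker::Closed { failures }
        };
    }
}

fn main() {
    let mut b = Breaker::Closed { failures: 0 };
    for t in 0..3u64 {
        assert!(b.allow(t));
        b.record(false, t); // three consecutive 5xx responses
    }
    assert!(!b.allow(10));                       // open: rejected without a network call
    assert!(b.allow(2 + Breaker::COOLDOWN_MS));  // cooldown elapsed: half-open probe
    b.record(true, 5_003);
    assert!(matches!(b, Breaker::Closed { failures: 0 }));
}
```

The `until_ms` field is what lets a typed error like CircuitOpen carry `retry_at_unix_ms` to the caller instead of a bare failure.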

Developer experience

Six layers, from surface down to depth

  1. Deployment value object. Most users never go deeper. runtime is auto-inferred from model name when omitted (gpt-* → openai, claude-* → anthropic, …).
  2. Per-runtime configs. OpenAiConfig, AnthropicConfig, GeminiConfig (Vertex + AI Studio), LiteLlmConfig, CandleConfig, VllmConfig, etc. for explicit overrides.
  3. <config>.toml project files. rakka serve --config foo.toml reads the §11.3 schema and applies every [[deployment]].
  4. Python decorators. @inference_actor for orchestration actors that compose deployments without touching a GPU directly. Skeleton in inference-py-bindings.
  5. Escape hatches. cluster.deployment("gpt-4o").rate_limiter(), .circuit_breaker(), .workers() — direct ActorRefs for incident response (force_open, rebuild_session, etc.).
  6. Raw rakka actors. When you need it, the full actor system is there underneath; the layers above hold no privileged access to it.
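Layer 1's runtime auto-inference is, at heart, a prefix table over the model name. A hypothetical version of the rules quoted above; the real resolution table lives in the crate and covers more providers:

```rust
/// Infer a runtime from the model name when `runtime: None`.
/// gemini-* is our guess by analogy; gpt-* and claude-* are the
/// documented examples.
fn infer_runtime(model: &str) -> Option<&'static str> {
    const RULES: &[(&str, &str)] = &[
        ("gpt-", "openai"),
        ("claude-", "anthropic"),
        ("gemini-", "gemini"),
    ];
    RULES
        .iter()
        .find(|(prefix, _)| model.starts_with(prefix))
        .map(|(_, runtime)| *runtime)
}

fn main() {
    assert_eq!(infer_runtime("gpt-4o-mini"), Some("openai"));
    assert_eq!(infer_runtime("claude-3-5-sonnet"), Some("anthropic"));
    // No prefix match: the Deployment must name its runtime explicitly.
    assert_eq!(infer_runtime("llama-3.1-70b"), None);
}
```

Returning `None` rather than guessing is the important design choice: an unrecognized name forces an explicit `runtime` field instead of silently routing to the wrong backend.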

Footgun-resistant by design

  • Secrets are typed. inference_core::SecretString (re-export of secrecy::SecretString) — won't Debug, won't Display, never appears in logs.
  • Rate-limit validation at deploy time. Catches a deployment claiming rpm = 100_000 against a free-tier API key with a typed error before the first user request hits.
  • Network egress checked at deploy time. The placement actor pings the provider from each chosen node before flipping the deployment to Serving.
  • Hot-swappable credentials. Updating the secret source triggers RemoteSessionActor::rebuild on the next pulse; in-flight requests drain on the old credential, new ones use the rotated value. Zero dropped traffic.
  • Cost guardrails. Budget { max_spend_per_hour_usd, on_exceeded: Reject } on a Deployment makes runaway provider spend physically impossible.
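The budget guardrail amounts to a rolling-window spend check in front of every provider call: estimate the request's cost, and reject before dispatch if it would push the last hour's total past the cap. A sketch of the Reject semantics with invented type and field names:

```rust
use std::collections::VecDeque;

/// Sliding one-hour spend guard. Names are illustrative, not the
/// crate's `Budget` type.
struct BudgetGuard {
    max_per_hour_usd: f64,
    spends: VecDeque<(u64, f64)>, // (timestamp_ms, usd), oldest first
}

impl BudgetGuard {
    fn new(max_per_hour_usd: f64) -> Self {
        Self { max_per_hour_usd, spends: VecDeque::new() }
    }

    /// Record the spend and return Ok, or reject with the would-be total.
    fn try_spend(&mut self, now_ms: u64, usd: f64) -> Result<(), f64> {
        // Age out entries older than one hour.
        while matches!(self.spends.front(), Some(&(t, _)) if now_ms - t >= 3_600_000) {
            self.spends.pop_front();
        }
        let total: f64 = self.spends.iter().map(|&(_, c)| c).sum::<f64>() + usd;
        if total > self.max_per_hour_usd {
            Err(total) // on_exceeded: Reject — the request never reaches the provider
        } else {
            self.spends.push_back((now_ms, usd));
            Ok(())
        }
    }
}

fn main() {
    let mut guard = BudgetGuard::new(10.0);
    assert!(guard.try_spend(0, 6.0).is_ok());
    assert!(guard.try_spend(1_000, 5.0).is_err()); // 11 > 10: rejected up front
    assert!(guard.try_spend(3_600_000, 5.0).is_ok()); // old spend aged out
}
```

Checking before dispatch, rather than alerting after billing, is the difference between a guardrail and a postmortem.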

Verification

Every PR runs:

cargo build --workspace
cargo build -p inference --features remote-only          # zero GPU deps
cargo build -p inference --features cuda,cuda-patterns   # local + patterns
cargo build -p inference --features all-runtimes
cargo test --workspace
cargo run --bin remote_only_demo

The demo asserts the §13 Phase-1 + Phase-2c exit criteria end-to-end against a wiremock-driven OpenAI mock: happy-path streaming, 429 retry-after, and circuit-breaker open after consecutive 5xx.


Status

  • Foundation (inference-core): ✅ stable surface; serde round-trips for every RuntimeConfig variant
  • Runtime-agnostic actors: ✅ gateway, request, dp-coordinator, engine-core, worker, placement, manager, metrics
  • Remote infrastructure: ✅ rate limiter (CRDT), strict variant (singleton), circuit breaker, retry, SSE, session
  • OpenAI / Anthropic / Gemini / LiteLLM: ✅ ModelRunner + wire types + error classification + pricing tables
  • Local Rust-native runtimes: 🟡 trait satisfied; forward-pass bodies are stubs pinned to the doc's §13 Phase 2b roadmap
  • vLLM / TensorRT FFI: 🟡 stubs that compile against the trait; full bodies on §13 Phase 2a/2b
  • Pipeline (rakka-streams + cuda-patterns): ✅ re-export shim + reference hybrid graph
  • CLI (rakka serve): ✅ TOML config → ActorSystem → gateway; cost-report/rotate-credentials are stubs
  • Python bindings: 🟡 PyO3 skeleton (Cluster, Deployment); decorator surface deferred

AI-assisted development

If you're using Claude Code, Cursor, or another AI coding assistant on a project that depends on rakka-inference, install our ai-skills bundle — seven skills covering quickstart, choosing a runtime, wiring remote providers, composing pipelines, deployment, typed-error troubleshooting, and extending with a new backend.

/plugin marketplace add rustakka/rakka-inference
/plugin install rakka-inference-ai-skills@rakka-inference

Each SKILL.md is a thin router into the canonical docs (this README, the per-crate READMEs, the architecture RFC) so the skills stay in sync with the code instead of restating API surfaces that belong in rustdoc. Other harnesses (Cursor, Codex CLI, Gemini CLI, Aider, etc.) have install instructions in ai-skills/README.md.

Companion bundles for the broader stack:

  • rakka ai-skills — actor design, supervision, persistence, clustering, Python bindings.
  • rakka-accel ai-skills — DeviceActor, kernel selection, two-tier GPU supervision, backend choice.

Install all three when you're building a service that uses rakka primitives, rakka-accel GPU acceleration, and rakka-inference runtimes.


Release management

Releases are fully automated. Land a feat: / fix: commit on main and the version-bump workflow tags vX.Y.Z; the release workflow fires on the tag, runs cargo xtask verify, builds binaries for five platforms, generates release notes from git log, and publishes the allowlisted crates to crates.io in dependency order with idempotent retry.

  • Bump + tag based on Conventional Commits: automatic on push to main via .github/workflows/version-bump.yml.
  • Force a specific version: Release-As: x.y.z in the commit footer.
  • Run the full release pipeline manually: Actions → Release → Run workflow.
  • Dry-run before tagging: Actions → Release → Run workflow → dry_run: true.
  • Inspect publishable vs gated crates: cargo xtask release-checklist.
  • Audit anti-pattern regressions: cargo xtask audit / cargo xtask audit --check.
  • Run the same checks CI runs: cargo xtask verify.

Full operator runbook: RELEASING.md. Contributor guide: CONTRIBUTING.md.

License

Apache-2.0. See LICENSE once it lands; the workspace inherits the rakka project license.

