Net Python

High-performance, schema-agnostic event bus for AI runtime workloads.

Installation

pip install ai2070-net

The package publishes as ai2070-net on PyPI but imports as from net import ... (the in-source module name is preserved). For the higher-level Pythonic surface (generators, typed channels, dataclass/Pydantic support), install ai2070-net-sdk instead — it depends on this package.

Quick Start

from net import Net

# Create event bus (defaults to one shard per CPU core)
bus = Net()

# Ingest events - fast path with raw JSON strings (23M+ ops/sec)
bus.ingest_raw('{"token": "hello", "index": 0}')

# Or use dict for convenience (4M+ ops/sec)
bus.ingest({"token": "world", "index": 1})

# Batch ingestion for maximum throughput
events = [f'{{"token": "tok_{i}"}}' for i in range(10000)]
count = bus.ingest_raw_batch(events)

# Poll events
response = bus.poll(limit=100)
for event in response:
    print(event.raw)
    # Or parse to dict
    data = event.parse()

# Check stats
stats = bus.stats()
print(f"Ingested: {stats.events_ingested}, Dropped: {stats.events_dropped}")

# Shutdown
bus.shutdown()

Context Manager

with Net(num_shards=4) as bus:
    bus.ingest_raw('{"data": "value"}')
# Automatically shuts down

Configuration

bus = Net(
    num_shards=8,                    # Number of parallel shards
    ring_buffer_capacity=1_048_576,  # Events per shard (must be power of 2)
    backpressure_mode="drop_oldest", # What to do when full
)

Net Encrypted UDP Transport

Net provides encrypted point-to-point UDP transport for high-performance scenarios:

from net import Net, generate_net_keypair
import os

# Generate keypair for responder
keypair = generate_net_keypair()
psk = os.urandom(32).hex()

# Responder side
responder = Net(
    num_shards=2,
    net_bind_addr='127.0.0.1:9001',
    net_peer_addr='127.0.0.1:9000',
    net_psk=psk,
    net_role='responder',
    net_secret_key=keypair.secret_key,
    net_public_key=keypair.public_key,
    net_reliability='light',  # 'none', 'light', or 'full'
)

# Initiator side (knows responder's public key)
initiator = Net(
    num_shards=2,
    net_bind_addr='127.0.0.1:9000',
    net_peer_addr='127.0.0.1:9001',
    net_psk=psk,
    net_role='initiator',
    net_peer_public_key=keypair.public_key,
)

# Use as normal
initiator.ingest_raw('{"event": "data"}')

Backpressure Modes

  • "drop_newest" - Reject new events when buffer is full
  • "drop_oldest" - Evict oldest events to make room
  • "fail_producer" - Raise an error

NAT traversal (optimization, not correctness)

Two NATed peers can already reach each other through the mesh's routed-handshake path. NAT traversal opens a shorter direct path when the NAT shape allows it; it is never required for connectivity. Every method below is safe to call regardless of NAT type — a failed punch or a traversal: <kind> RuntimeError is not a connectivity failure; traffic keeps riding the relay. When the native module was built without --features nat-traversal, the whole surface is a no-op: every call raises RuntimeError("traversal: unsupported").

from net import NetMesh

mesh = NetMesh(bind_addr="0.0.0.0:9000", psk="00" * 32)

mesh.reclassify_nat()

klass  = mesh.nat_type()            # "open" | "cone" | "symmetric" | "unknown"
reflex = mesh.reflex_addr()         # "203.0.113.5:9001" or None

observed = mesh.probe_reflex(peer_node_id)   # "ip:port"

# Attempt a direct connection via the pair-type matrix.
# `coordinator` mediates the punch when the matrix picks one.
# Always returns — inspect stats to learn which path won.
mesh.connect_direct(peer_node_id, peer_pubkey_hex, coordinator_node_id)

# Cumulative counters — all int, monotonic.
s = mesh.traversal_stats()
s.punches_attempted   # coordinator mediated a PunchRequest + Introduce
s.punches_succeeded   # ack arrived AND direct handshake landed
s.relay_fallbacks     # landed on the routed path after skip/fail

Operators with a known-public address skip the classifier sweep entirely. The override pins "open" plus the supplied address on every capability announcement; call announce_capabilities() afterwards to propagate (the setter resets the rate-limit floor so the next announce is guaranteed to broadcast).

mesh.set_reflex_override('203.0.113.5:9001')
mesh.announce_capabilities(caps)
# later:
mesh.clear_reflex_override()
mesh.announce_capabilities(caps)

Traversal failures surface as RuntimeError with a stable traversal: <kind>[: <detail>] message prefix. The <kind> discriminator is one of reflex-timeout | peer-not-reachable | transport | rendezvous-no-relay | rendezvous-rejected | punch-failed | port-map-unavailable | unsupported. Match on the prefix for machine-readable branching:

try:
    mesh.connect_direct(peer_node_id, peer_pubkey_hex, coord_id)
except RuntimeError as e:
    msg = str(e)
    if msg.startswith("traversal: unsupported"):
        ...   # native module built without --features nat-traversal
    elif msg.startswith("traversal: peer-not-reachable"):
        ...

"unsupported" is the signal that the bindings are linked unconditionally and the native module doesn't have the feature — callers can branch cleanly without probing for symbol presence.

Channels (distributed pub/sub)

Named pub/sub over the encrypted mesh. Publishers register channels with access policy; subscribers ask to join via a membership subprotocol; publish fans payloads out to every current subscriber.

from net import NetMesh, ChannelAuthError, ChannelError

pub = NetMesh('127.0.0.1:9001', '42' * 32)
try:
    pub.register_channel(
        'sensors/temp',
        visibility='global',      # or 'subnet-local' | 'parent-visible' | 'exported'
        reliable=True,
        priority=2,
        max_rate_pps=1000,
    )

    # Subscriber side (after handshake with pub):
    # sub.subscribe_channel(pub.node_id, 'sensors/temp')

    # Fan a payload out to all subscribers.
    report = pub.publish(
        'sensors/temp',
        b'{"celsius": 22.5}',
        reliability='reliable',
        on_failure='best_effort',
        max_inflight=32,
    )
    print(f"{report['delivered']}/{report['attempted']} subscribers received")
finally:
    pub.shutdown()

# Typed errors for ACL outcomes:
# try: sub.subscribe_channel(peer_id, 'restricted')
# except ChannelAuthError: ...   # publisher denied
# except ChannelError: ...       # unknown channel / other rejection

Channel names always cross the binding as strings (not the u16 hash) to avoid ACL bypass via collision. The Python binding does not yet expose a dedicated per-channel receive API; that is a follow-up.

CortEX & NetDb (event-sourced state)

Typed, event-sourced state on top of RedEX — tasks and memories with filterable queries and sync watch iterators. Includes the snapshot_and_watch primitive whose race fix landed in v2, so you can safely "paint what's there now, then react to changes" without losing updates that race during construction.

from net import NetDb, CortexError

db = NetDb.open(origin_hash=0xABCDEF01, with_tasks=True, with_memories=True)
tasks = db.tasks

try:
    seq = tasks.create(1, 'write docs', 100)
    tasks.wait_for_seq(seq)   # block until the fold has applied
except CortexError as e:
    # adapter-level failure (RedEX I/O, fold halted, etc.)
    ...

# Snapshot + watch, one atomic call — no race.
snap, it = tasks.snapshot_and_watch_tasks(status='pending')
print('initial:', len(snap), 'pending tasks')
for batch in it:
    print('update:', len(batch), 'pending tasks')
    if len(batch) == 0:
        it.close()    # idempotent; ends the iterator
        break

db.close()

Standalone adapters

If you only need one model, skip the NetDb facade:

from net import Redex, TasksAdapter

redex = Redex(persistent_dir='/var/lib/net/redex')
tasks = TasksAdapter.open(redex, origin_hash=0xABCDEF01, persistent=True)

MemoriesAdapter exposes the same shape with store / retag / pin / unpin / delete / list_memories / watch_memories / snapshot_and_watch_memories.

Raw RedEX file (no CortEX fold)

For domain-agnostic persistent logs — your own event schema, no fold, no typed adapter — open a RedexFile directly from a Redex. The tail is a sync Python iterator; call close() or let StopIteration fire when the file closes.

from net import Redex, RedexError

redex = Redex(persistent_dir='/var/lib/net/events')
file = redex.open_file(
    'analytics/clicks',
    persistent=True,
    fsync_interval_ms=100,           # or fsync_every_n=1000
    retention_max_events=1_000_000,
)

# Append (or batch-append).
seq = file.append(b'{"url": "/home"}')
# `append_batch` returns the first-seq int of the batch, or `None`
# for an empty input. The `None` return is the explicit "I
# appended nothing" signal — pre-`bugfixes-8` it returned `0`,
# which collided with the legitimate "first event of a non-empty
# batch landed at seq 0" return.
first = file.append_batch([b'{"a": 1}', b'{"a": 2}'])

# Tail — backfills the retained range, then streams live appends.
try:
    for event in file.tail(from_seq=0):
        print(event.seq, bytes(event.payload))
        if should_stop:
            break           # idempotent; ends the iterator via close()
except RedexError as e:
    ...

file.close()

Errors from the RedEX surface raise RedexError (invalid channel name, bad config, append / tail / sync / close failures).

Why snapshot_and_watch_*?

Calling list_tasks() then watch_tasks() takes two independent state reads. A mutation landing between them would be silently lost under the old skip(1) implementation. The atomic primitive returns the snapshot and an iterator seeded so that any divergent initial emission is forwarded through instead of dropped — see docs/STORAGE_AND_CORTEX.md.
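The gap is easy to see in a toy model that treats the adapter's state as a position in a growing event log (pure illustration, no library calls):

```python
# Toy model: reads are positions in a growing event log.
log = ["create t1"]

# Two independent calls: list_tasks() then watch_tasks().
snapshot = list(log)             # read at position 1
log.append("create t2")          # a mutation races in between
watch_start = len(log)           # the watch starts at position 2
lost = log[len(snapshot):watch_start]
assert lost == ["create t2"]     # in neither the snapshot nor the stream

# Atomic call: snapshot and watch position are taken together, so any
# racing mutation lands strictly after the watch start and is emitted.
snapshot = list(log)
watch_start = len(snapshot)
log.append("create t3")
assert log[watch_start:] == ["create t3"]   # forwarded, not dropped
```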

Redis Streams consumer-side dedup helper

The Net Redis adapter writes a stable dedup_id field on every XADD entry: {producer_nonce:hex}:{shard_id}:{sequence_start}:{i}. Combined with the bus's persistent producer-nonce path (producer_nonce_path on EventBusConfig), the id is stable across both within-process retries AND cross-process restart — the MULTI/EXEC-timeout race becomes filterable at consume time.
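The id splits mechanically on ':'; a small consumer-side parser for the four components (an illustrative helper, not part of the package):

```python
def parse_dedup_id(dedup_id: str) -> dict:
    """Split a Net dedup_id into its four components.

    Documented format: {producer_nonce:hex}:{shard_id}:{sequence_start}:{i}
    """
    nonce_hex, shard_id, seq_start, i = dedup_id.rsplit(":", 3)
    return {
        "producer_nonce": bytes.fromhex(nonce_hex),
        "shard_id": int(shard_id),
        "sequence_start": int(seq_start),
        "index": int(i),
    }

parts = parse_dedup_id("deadbeef:0:1024:3")
assert parts["shard_id"] == 0 and parts["index"] == 3
```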

RedisStreamDedup is the consumer-side helper, exposed on the net PyO3 module:

from net import RedisStreamDedup
import redis

# ~10k events/sec * 1 min dedup window → ~600,000.
dedup = RedisStreamDedup(capacity=600_000)

r = redis.Redis(host="localhost", port=6379)
cursor = "0"
while True:
    # XRANGE bounds are INCLUSIVE on both ends. After the first
    # page we must use the exclusive form `(<id>` so we don't
    # re-read the entry the cursor points at — a vanilla
    # `min=cursor` loop spins forever once the cursor reaches the
    # tail and the same entry is returned every iteration.
    start = cursor if cursor == "0" else f"({cursor}"
    entries = r.xrange("net:shard:0", min=start, max="+", count=100)
    for entry_id, fields in entries:
        # Advance the cursor before any `continue`: entries without a
        # dedup_id must still move it forward, or the loop re-reads them.
        cursor = entry_id.decode()
        dedup_id = fields.get(b"dedup_id", b"").decode()
        if not dedup_id:
            # No dedup_id → older entry or non-Net producer; skip
            # dedup and process as-is.
            process(entry_id, fields)
            continue
        if not dedup.is_duplicate(dedup_id):
            process(entry_id, fields)
    if not entries:
        break

Surface:

dedup = RedisStreamDedup()                # default capacity 4096
dedup = RedisStreamDedup(capacity=N)      # explicit; 0 → 1
dedup.is_duplicate(id: str) -> bool       # test-and-insert
dedup.len                                 # property — tracked-id count
dedup.capacity                            # property — configured cap
dedup.is_empty                            # property
dedup.clear()                             # reset (e.g. on consumer-group rebalance)

The helper is transport-agnostic — bring your own redis-py / aioredis / equivalent client; it just answers the dedup question against an in-memory LRU. Concurrency: each handle wraps a Rust Mutex<RedisStreamDedup>, so concurrent calls from multiple Python threads are safe but serialize. The production shape is one helper per consumer thread.
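One way to get that shape without threading handles through every call is threading.local. The sketch below uses a stand-in class so it runs anywhere; in real code, construct net.RedisStreamDedup in its place:

```python
import threading

class _DedupStub:
    """Stand-in for net.RedisStreamDedup (same is_duplicate contract)."""

    def __init__(self, capacity: int = 4096):
        self.capacity = capacity
        self._seen: set = set()

    def is_duplicate(self, dedup_id: str) -> bool:   # test-and-insert
        if dedup_id in self._seen:
            return True
        self._seen.add(dedup_id)
        return False

_local = threading.local()

def thread_dedup(capacity: int = 600_000) -> _DedupStub:
    """Return this thread's private helper, creating it on first use."""
    if not hasattr(_local, "dedup"):
        _local.dedup = _DedupStub(capacity)   # one helper per thread
    return _local.dedup

assert thread_dedup().is_duplicate("a:0:0:0") is False
assert thread_dedup().is_duplicate("a:0:0:0") is True   # same thread, same helper
```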

Security Surface (Stage A–E)

The mesh layer surfaces the same identity / capabilities / subnets / channel-auth story that the Rust SDK and the TypeScript / Node SDKs ship. Full staging and rationale: docs/SDK_SECURITY_SURFACE_PLAN.md. Python-binding parity details: docs/SDK_PYTHON_PARITY_PLAN.md.

Identity + permission tokens

Every node has an ed25519 identity; permission tokens are ed25519-signed delegations that authorize a subject to publish / subscribe / delegate / admin on a channel, optionally with further delegation depth.

from net import Identity, parse_token, verify_token, delegate_token

alice = Identity.generate()
bob = Identity.generate()

# Alice issues Bob a subscribe+delegate token good for 5 min, with
# one re-delegation hop remaining. `ttl_seconds=0` raises
# `TokenError` — a zero TTL would mint a born-expired token that
# every receiver would reject as `Expired`, leaving the issuer to
# diagnose the misuse from receiver-side log lines.
token = alice.issue_token(
    subject=bob.entity_id,
    scope=["subscribe", "delegate"],
    channel="sensors/temp",
    ttl_seconds=300,
    delegation_depth=1,
)
assert verify_token(token) is True

# Bob re-delegates to Carol; depth drops to 0 (leaf).
carol = Identity.generate()
child = delegate_token(bob, token, carol.entity_id, ["subscribe"])
assert parse_token(child)["delegation_depth"] == 0

Capability announcements + peer discovery

Announce hardware / software / model / tool / tag fingerprints, then query the local capability index with a filter.

mesh.announce_capabilities({
    "hardware": {
        "cpu_cores": 16,
        "memory_mb": 65536,
        "gpu": {"vendor": "nvidia", "model": "h100", "vram_mb": 81920},
    },
    "models": [{"model_id": "llama-3.1-70b", "family": "llama",
                "context_length": 128_000}],
    "tags": ["gpu", "prod"],
})

gpu_peers = mesh.find_nodes({
    "require_gpu": True,
    "gpu_vendor": "nvidia",
    "min_vram_mb": 40_000,
})

Scoped discovery (reserved scope:* tags)

A provider can narrow who its query result reaches by tagging its CapabilitySet with reserved scope:* tags. Queries call mesh.find_nodes_scoped(filter, scope) to filter candidates. The wire format and forwarders are untouched — enforcement is purely query-side.

# GPU pool advertised to one tenant only.
mesh.announce_capabilities({
    "tags": ["model:llama3-70b", "scope:tenant:oem-123"],
})

# Tenant-scoped query — returns this node + any Global (untagged) peers.
oem_nodes = mesh.find_nodes_scoped(
    {"require_tags": ["model:llama3-70b"]},
    {"kind": "tenant", "tenant": "oem-123"},
)

Accepted scope dict shapes:

  • {"kind": "any"} (default)
  • {"kind": "global_only"}
  • {"kind": "same_subnet"}
  • {"kind": "tenant", "tenant": "<id>"} / {"kind": "tenants", "tenants": [...]}
  • {"kind": "region", "region": "<name>"} / {"kind": "regions", "regions": [...]}

Reserved announcement tags: scope:subnet-local (visible only under same_subnet), scope:tenant:<id>, scope:region:<name> — strictest scope wins. Untagged peers resolve to Global and stay visible under permissive queries. Full design: docs/SCOPED_CAPABILITIES_PLAN.md.
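As a mental model, the query-side check can be sketched in pure Python. This covers only the any / global_only / tenant / region kinds and is a guess at the exact matching rules, not the native implementation:

```python
def peer_scope(tags: list) -> tuple:
    """Resolve a peer's effective scope from its reserved scope:* tags.
    Strictest wins: tenant > region > subnet-local; untagged -> global."""
    for t in tags:
        if t.startswith("scope:tenant:"):
            return ("tenant", t.split(":", 2)[2])
    for t in tags:
        if t.startswith("scope:region:"):
            return ("region", t.split(":", 2)[2])
    if "scope:subnet-local" in tags:
        return ("subnet-local",)
    return ("global",)

def visible(tags: list, scope_query: dict) -> bool:
    """Would a peer with `tags` pass the scope filter? (Partial model.)"""
    scope = peer_scope(tags)
    kind = scope_query.get("kind", "any")
    if kind == "any":
        return True
    if kind == "global_only":
        return scope == ("global",)
    if kind == "tenant":
        return scope in (("global",), ("tenant", scope_query["tenant"]))
    if kind == "region":
        return scope in (("global",), ("region", scope_query["region"]))
    raise ValueError(f"unmodeled kind: {kind}")

# Tenant-scoped query sees the matching tenant plus untagged (Global) peers.
assert visible(["scope:tenant:oem-123"], {"kind": "tenant", "tenant": "oem-123"})
assert not visible(["scope:tenant:oem-123"], {"kind": "tenant", "tenant": "acme"})
assert visible([], {"kind": "tenant", "tenant": "acme"})      # global peer
```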

Capability propagation is multi-hop, bounded by MAX_CAPABILITY_HOPS = 16 with (origin, version) dedup on every forwarder. capability_gc_interval_ms controls both the index TTL sweep and the dedup cache eviction. See docs/MULTIHOP_CAPABILITY_PLAN.md.
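The forwarding gate can be sketched as a version map plus a hop check (a model of the documented behaviour; the real dedup cache also evicts on the GC interval):

```python
MAX_CAPABILITY_HOPS = 16

def should_forward(seen: dict, origin: int, version: int, hops: int) -> bool:
    """Drop hop-exhausted, duplicate, or stale announcements.
    `seen` maps origin -> highest version already forwarded."""
    if hops >= MAX_CAPABILITY_HOPS:
        return False                 # hop budget spent
    if seen.get(origin, -1) >= version:
        return False                 # duplicate, or older than what we sent
    seen[origin] = version
    return True

seen = {}
assert should_forward(seen, 0xA1, version=3, hops=2) is True
assert should_forward(seen, 0xA1, version=3, hops=2) is False    # (origin, version) dedup
assert should_forward(seen, 0xA1, version=4, hops=16) is False   # hop bound
assert should_forward(seen, 0xA1, version=4, hops=15) is True    # newer version
```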

Subnets

Nodes can bind to a hierarchical SubnetId (1–4 levels, each 0–255) directly, or derive one from announced tags via a SubnetPolicy.

# Explicit subnet.
mesh = NetMesh("127.0.0.1:9000", PSK, subnet=[3, 7, 2])

# Or derive from tags.
mesh = NetMesh(
    "127.0.0.1:9001", PSK,
    subnet_policy={
        "rules": [
            {"tag_prefix": "region:", "level": 0,
             "values": {"eu": 1, "us": 2, "apac": 3}},
            {"tag_prefix": "zone:", "level": 1,
             "values": {"a": 1, "b": 2, "c": 3}},
        ]
    },
)

Channel authentication

Publishers set publish_caps / subscribe_caps / require_token on register_channel. Subscribers present a PermissionToken via the optional token=bytes kwarg on subscribe_channel.

mesh.register_channel(
    "gpu/jobs",
    subscribe_caps={"require_gpu": True, "min_vram_mb": 16_000},
    require_token=True,
)

# Subscriber side, with a token issued by the publisher:
mesh.subscribe_channel(publisher_node_id, "gpu/jobs", token=token_bytes)

Denied subscribes raise ChannelAuthError (a subclass of ChannelError); malformed tokens raise TokenError whose message has the form "token: <kind>" (invalid_signature, expired, delegation_exhausted, …). Successful subscribes populate an AuthGuard bloom filter on the publisher so every subsequent publish admits the subscriber in constant time. An expiry sweep (default 30 s) evicts subscribers whose tokens age out; a per-peer auth-failure rate limiter throttles bad-token storms. Cross-SDK behaviour is fixed by the Rust integration suite; see tests/channel_auth.rs and tests/channel_auth_hardening.rs.
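Since the documented contract is the message shape, a prefix parser mirrors the traversal and migration patterns. This token_error_kind helper is illustrative, not a package export:

```python
def token_error_kind(exc: Exception):
    """Extract the <kind> discriminator from a TokenError message of the
    form "token: <kind>[: detail]". Returns None for other messages.
    (Illustrative helper; only the message shape comes from the docs.)"""
    msg = str(exc)
    if not msg.startswith("token: "):
        return None
    return msg[len("token: "):].split(":", 1)[0].strip()

assert token_error_kind(RuntimeError("token: expired")) == "expired"
assert token_error_kind(RuntimeError("token: invalid_signature: bad sig")) == "invalid_signature"
```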

Compute (daemons + migration)

Run MeshDaemons directly from Python. DaemonRuntime owns the factory table, the per-daemon hosts, and the Registering → Ready → ShuttingDown lifecycle gate that decides when inbound migrations may land. Daemons are any Python object whose process(event) returns a list of bytes/bytearray payloads — the runtime wraps each output in a causal link and forwards it.

Build the native module with the compute feature (maturin picks it up on the default build) and import from net. Full design notes: docs/SDK_COMPUTE_SURFACE_PLAN.md.

from net import DaemonRuntime, NetMesh, Identity, CausalEvent


class EchoDaemon:
    """Stateless echo — ships every event's payload straight back."""

    name = "echo"

    def process(self, event: CausalEvent) -> list[bytes]:
        return [bytes(event.payload)]

    # Optional: snapshot() / restore(state) for migration-capable daemons.


mesh = NetMesh("127.0.0.1:9000", "42" * 32)
rt = DaemonRuntime(mesh)

# 1. Register factories BEFORE flipping the runtime to Ready.
rt.register_factory("echo", lambda: EchoDaemon())

# 2. Ready the runtime — after this point spawn / migration accept.
rt.start()

# 3. Spawn a daemon; Identity pins the ed25519 keypair so
#    origin_hash / entity_id stay stable across migrations.
identity = Identity.generate()
handle = rt.spawn("echo", identity)
print(f"origin = 0x{handle.origin_hash:08x}")

# 4. Manually feed an event for testing; real delivery happens
#    via the mesh's causal chain.
event = CausalEvent(handle.origin_hash, sequence=1, payload=b"hello")
outputs = rt.deliver(handle.origin_hash, event)

# 5. Clean shutdown.
rt.stop(handle.origin_hash)
rt.shutdown()

process must be synchronous — the core's contract is sync, and the PyO3 bridge re-attaches the GIL for the duration of each call. Raising inside process surfaces as DaemonError on the caller.

Migration

start_migration(origin_hash, source_node, target_node) orchestrates the six-phase cutover (Snapshot → Transfer → Restore → Replay → Cutover → Complete). The source seals the daemon's ed25519 seed into the outbound snapshot using the target's X25519 static pubkey; the target rebuilds the daemon via the factory registered under the same kind, replays any events that arrived during transfer, then activates.

from net import MigrationError, migration_error_kind

try:
    mig = rt.start_migration(handle.origin_hash, src_node_id, dst_node_id)
    # mig.phase  — "snapshot" | "transfer" | "restore" | ...
    # mig.source_node / mig.target_node
    mig.wait()                      # blocks to completion
except MigrationError as e:
    kind = migration_error_kind(e)
    if kind == "not-ready":               ...  # target start() didn't run
    elif kind == "factory-not-found":     ...  # target missing this kind
    elif kind == "compute-not-supported": ...  # target has no DaemonRuntime
    elif kind == "state-failed":          ...  # snapshot / restore threw
    elif kind == "identity-transport-failed": ...  # seal / unseal failed
    # ...see SDK_COMPUTE_SURFACE_PLAN.md for the full enum

start_migration_with(origin, src, dst, opts) exposes options such as seal_seed=False for test scenarios. On the target node, call rt.register_migration_target_identity(kind, identity) before any migration of that kind lands; without it the runtime rejects sealed-seed envelopes with migration_error_kind == "identity-transport-failed".

Surface at a glance

| Method | Description |
| --- | --- |
| DaemonRuntime(mesh) | Construct against an existing NetMesh |
| rt.register_factory(kind, fn) | Install a factory (before start()) |
| rt.start() / rt.shutdown() | Flip the lifecycle gate |
| rt.spawn(kind, identity, config=None) | Spawn a local daemon |
| rt.spawn_from_snapshot(kind, identity, bytes, config=None) | Rehydrate from a snapshot |
| rt.stop(origin) | Stop a local daemon |
| rt.snapshot(origin) | Capture bytes for persistence / migration |
| rt.deliver(origin, event) | Feed an event (returns list[bytes]) |
| rt.start_migration(origin, src, dst) | Orchestrate a live migration |
| rt.register_migration_target_identity(kind, id) | Pin the unseal keypair on the target for kind |
| handle.origin_hash / entity_id / stats() | Per-daemon identity + stats |
| DaemonError / MigrationError | Typed exceptions; migration_error_kind(e) parses e.kind |

Compute Groups (Replica / Fork / Standby)

HA / scaling overlays on top of DaemonRuntime. Build the native module with the groups feature (implies compute) to expose ReplicaGroup, ForkGroup, StandbyGroup, and the GroupError exception class.

  • ReplicaGroup — N interchangeable copies of a daemon. Deterministic identity from group_seed + index, so a replacement respawned on another node has a stable origin_hash. Load-balances inbound events across healthy members; auto-replaces on node failure.
  • ForkGroup — N independent daemons forked from a common parent at fork_seq. Unique keypairs, shared ancestry via a verifiable ForkRecord.
  • StandbyGroup — active-passive replication. One member processes events; standbys hold snapshots and catch up via sync_standbys(). On active failure the most-synced standby promotes and replays the events buffered since the last sync.

from net import (
    DaemonRuntime, ForkGroup, GroupError, ReplicaGroup, StandbyGroup,
    group_error_kind,
)

rt = DaemonRuntime(mesh)
rt.register_factory("counter", lambda: CounterDaemon())

# --- ReplicaGroup ----------------------------------------------------
replicas = ReplicaGroup.spawn(
    rt, "counter",
    replica_count=3,
    group_seed=bytes([0x11] * 32),
    lb_strategy="consistent-hash",   # or "round-robin" / "least-load"
                                     #    / "least-connections" / "random"
)
origin = replicas.route_event({"routing_key": "user:42"})
rt.deliver(origin, event)
replicas.scale_to(5)                 # grow
replicas.on_node_failure(failed_node_id)   # respawn elsewhere

# --- ForkGroup -------------------------------------------------------
forks = ForkGroup.fork(
    rt, "counter",
    parent_origin=0xABCDEF01,
    fork_seq=42,
    fork_count=3,
    lb_strategy="round-robin",
)
assert forks.verify_lineage()
for record in forks.fork_records():
    print(record["forked_origin"], record["fork_seq"])

# --- StandbyGroup ----------------------------------------------------
hot = StandbyGroup.spawn(
    rt, "counter",
    member_count=3,                  # 1 active + 2 standbys
    group_seed=bytes([0x77] * 32),
)
rt.deliver(hot.active_origin, event)
hot.sync_standbys()                  # periodic catchup
# On active-node failure:
# new_origin = hot.on_node_failure(failed_node_id)  # auto-promotes

Typed errors

Failures raise GroupError (a subclass of DaemonError). Use group_error_kind(e) to parse the discriminator from the Rust side's daemon: group: <kind>[: detail] message prefix:

from net import GroupError, group_error_kind

try:
    ReplicaGroup.spawn(rt, "never-registered",
                       replica_count=2, group_seed=bytes(32))
except GroupError as e:
    kind = group_error_kind(e)
    if kind == "not-ready":               ...  # runtime.start() didn't run
    elif kind == "factory-not-found":     ...  # kind wasn't registered
    elif kind == "no-healthy-member":     ...  # routed on an all-down group
    elif kind == "invalid-config":        ...  # e.g. replica_count == 0
    elif kind in ("placement-failed",
                  "registry-failed",
                  "daemon"):              ...  # core failure — read e args

Full staging, wire formats, and rationale: docs/SDK_GROUPS_SURFACE_PLAN.md. Core semantics live in the main README.md#daemons.

Performance Tips

  1. Use ingest_raw() for maximum throughput - Pass pre-serialized JSON strings
  2. Use ingest_raw_batch() for bulk operations - Reduces per-call overhead
  3. Increase ring_buffer_capacity - Larger buffers handle bursts better
  4. Match num_shards to CPU cores - Default is optimal for most cases
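Combined, the tips suggest a burst-tolerant configuration along these lines (values are illustrative starting points to benchmark against, not recommendations):

```python
import os

from net import Net

bus = Net(
    num_shards=os.cpu_count() or 4,    # tip 4: one shard per core (the default)
    ring_buffer_capacity=4_194_304,    # tip 3: 2**22 per shard for burst headroom
    backpressure_mode="drop_oldest",
)

# Tips 1-2: serialize once, then batch-ingest raw strings.
events = [f'{{"token": "tok_{i}"}}' for i in range(100_000)]
bus.ingest_raw_batch(events)
bus.shutdown()
```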

License

Apache-2.0

BLAKE2b-256 3fdc7e76296d12c8a4b7cf4665247752d42ff2aa6d53605463e1028634f965be

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp314-cp314-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 229133b2b20193a4fe6ec1c00c16a82f7f5aade0c3d3673c0065cc0262d7ae45
MD5 c42f254970a6a6eaa6ac421453cd874b
BLAKE2b-256 86b87d0a8ff1444b473dca9a1622f5974f4f578bc70af3df559dd89605482b23

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 0f3c71dde8077e953d61564110e9e2dfb06ff5a02d99dee038c79120d35ce8bd
MD5 9b16038c21d9e93394e92485e5c93496
BLAKE2b-256 8325f798c0160aa090bc4b76da09e51bd71325ba688d264f7769b95fb242a138

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313-win_amd64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 e20a07f06c30bb82dd3079deef5f06535adb739cbeea5aaa834f62bd608833e8
MD5 1ee1aaccd529c587c8536847737e37e2
BLAKE2b-256 50e03f860c05e367f4a30abfc34a0b10e3709d11219d308c273cdc35d32fdb4e

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 8daf3fbd3882846c88d7a4cc7f366994bc1e89fb1499004276ff7384db1dec2d
MD5 5da598bca30c365fbf79250928995433
BLAKE2b-256 c6f13c09d071af04fd984065070fc3028857ff74d5f74fe3dd8c499d176d4706

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 8ed98a5bba2e0752ea25baa05a1c79fa30d2ae6b3d5f1a68711ed515319b4993
MD5 c4fd64333204644c325e01f86a509b22
BLAKE2b-256 5defdd534c37969ef50238237c68b36072c9f2af73b205c6d5f944dff0974dba

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 e36035c66fd78ee309930002782005b253fed5e3f8d6534e0708c54eaadac515
MD5 b8244b574f88bc7e019b579ac0c00471
BLAKE2b-256 174fd2987ab37409927fe2355ae18fcd8baa5c83832a682ece2e9014c3fe7eca

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 aee4877120f72168883bf00fb8069da9489e7199476f264497e181643ce8fa0a
MD5 eb2a2aeaa5687abc86b47b2296418b6e
BLAKE2b-256 bd7427aa1fbb2aaaf62c4326acff5585850469ab525f6153570a0fc12cea0dc0

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp312-cp312-win_amd64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 83c5d05cc47cec72651af2179d8e1559bd7270111fd64867b84c5a6d3e2f5c28
MD5 b34136fe5bc627503cfd7cdeeeeb5040
BLAKE2b-256 3151d05705b8f6c3b98274cd207e0537c526b4f079d8a9da149e5b9ebfedc4e6

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp312-cp312-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 a15bfd03893bde638ad858b7eb5bd6ff4cc8e14d58d0fa88ebd3b9ad20fab518
MD5 2e39248f5b6a32e8c36e6231595da305
BLAKE2b-256 5a3dad535a594df547064f9063f43c12f41e295a8e705332482afdc4ffd8113e

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 bbd695af50d92ef3862011634cfd733c2cebd22c28b80c51719d74539774292c
MD5 5bfe37b1703d07b26299c538fc6b1cae
BLAKE2b-256 912fc74f2ae98d1b6b14c645b4c4918f0f042a757c382c48af4fe65fdf69ed9f

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp312-cp312-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp312-cp312-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp312-cp312-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 326b35b7341675c3b2510fc53457630d5154b101b354761380ff20fba48b7fab
MD5 77c9c8c924ead5239338993e77e31d45
BLAKE2b-256 4fc9e40d1b84cc0591fb3424c9b3ef2dbdd9d9a86b37fddf267bb887ebb50869

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp312-cp312-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 1f1b4ff2250aae5b5b346550da094e55160c7fa42cba4000187296ea11417413
MD5 a7c6def00c6f0b53790f433d47ae21a3
BLAKE2b-256 1c33a6b7737f5c2fdec15bdf5e3189378582601a25dfbd548bf260f15a88c4c6

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp311-cp311-win_amd64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 f057d5d8b3c51e38e1eaa74bacf7eaa7cecfaceb262de224d3a6d155c1c33253
MD5 9527bfcd8665094d0ec0543c6a9afe04
BLAKE2b-256 633a3201d1e617bd9b2342f54f6c14f7e884c6e3a197846a4be84efcb4efa69e

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp311-cp311-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 8dc8ec0d24121ed143f9c8061bd37b9b916dcfe4c8a3bfca666f88517658517c
MD5 ca26ba5c19d0c57316c063ca0287dd76
BLAKE2b-256 d337f235147d7a4ca28b424347b6f0e09df3c1fbf85c4241ff4c936760563d91

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 0346579916f69f17796190fed15d5f21ce07f827cdcd4cf94389c4150c21615f
MD5 d831dd082dec7c573eeb09cfc5a5ae92
BLAKE2b-256 a98b4642f3cbfd1dbeaeddabf26125b57f2aee50720962c5ba5150c92575734f

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp311-cp311-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp311-cp311-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp311-cp311-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 2d75a1ea7c6ba9126f32ee3500a733907d6a36f0eb2983f7ec962b4b2909f747
MD5 d0b86e0f1357b3d8f514bb1c379495b2
BLAKE2b-256 9d06677af21209b84b48af087a528bdda9dc39d855208b56e103fb2d6f14a95e

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp311-cp311-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 94b8ff7af1997a0b643fa22858a71cb0afba561ceaa38e1d526dfa777d14aab6
MD5 1ad31f5771517e529264cbf1971c90bb
BLAKE2b-256 0cae54fe5acb4d50957305d97ee3d68be975508317ea2a67c2c42fc70c3f0a8b

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp310-cp310-win_amd64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 d40551f404ac145be77828a35ac28d3f0ab2d49a16d51384a0f9439a71db9687
MD5 212d0a454e7f7fb75afd8ad52153532f
BLAKE2b-256 ea3cf30c351a340a3801a8d134e621f23bce0f0c8fba915d59202721d6150ff6

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp310-cp310-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 6902e7f6cd9497bd4fd27123fa26b4d63b6c65ce8e9ae01b3ed576c4400ddc47
MD5 b974213e9e065c5d71b8dab0006c169d
BLAKE2b-256 e69c7624cd07e39ab6e81a42dc383cd97348cf26e1d4e8062f731fa0827112b5

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 5b35da5767023a64b6342a3e9539b42da6d9c010227151ccd1bdeaf9c21dcb3c
MD5 4672316b663424fa56f2f09cb7e6b565
BLAKE2b-256 844aa43ebf27aeb5fb1a7684cb664814ad9c0c0a2cac404c6bedaf389a1f65de

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.10.0-cp310-cp310-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
