
Net Python

High-performance, schema-agnostic event bus for AI runtime workloads.

Installation

pip install ai2070-net

The package publishes as ai2070-net on PyPI but imports as from net import ... (the in-source module name is preserved). For the higher-level Pythonic surface (generators, typed channels, dataclass/Pydantic support), install ai2070-net-sdk instead — it depends on this package.

Quick Start

from net import Net

# Create event bus (defaults to CPU core count shards)
bus = Net()

# Ingest events - fast path with raw JSON strings (23M+ ops/sec)
bus.ingest_raw('{"token": "hello", "index": 0}')

# Or use dict for convenience (4M+ ops/sec)
bus.ingest({"token": "world", "index": 1})

# Batch ingestion for maximum throughput
events = [f'{{"token": "tok_{i}"}}' for i in range(10000)]
count = bus.ingest_raw_batch(events)

# Poll events
response = bus.poll(limit=100)
for event in response:
    print(event.raw)
    # Or parse to dict
    data = event.parse()

# Check stats
stats = bus.stats()
print(f"Ingested: {stats.events_ingested}, Dropped: {stats.events_dropped}")

# Shutdown
bus.shutdown()

Context Manager

with Net(num_shards=4) as bus:
    bus.ingest_raw('{"data": "value"}')
# Automatically shuts down

Configuration

bus = Net(
    num_shards=8,                    # Number of parallel shards
    ring_buffer_capacity=1_048_576,  # Events per shard (must be power of 2)
    backpressure_mode="drop_oldest", # What to do when full
)

Net Encrypted UDP Transport

Net provides encrypted point-to-point UDP transport for high-performance scenarios:

from net import Net, generate_net_keypair
import os

# Generate keypair for responder
keypair = generate_net_keypair()
psk = os.urandom(32).hex()

# Responder side
responder = Net(
    num_shards=2,
    net_bind_addr='127.0.0.1:9001',
    net_peer_addr='127.0.0.1:9000',
    net_psk=psk,
    net_role='responder',
    net_secret_key=keypair.secret_key,
    net_public_key=keypair.public_key,
    net_reliability='light',  # 'none', 'light', or 'full'
)

# Initiator side (knows responder's public key)
initiator = Net(
    num_shards=2,
    net_bind_addr='127.0.0.1:9000',
    net_peer_addr='127.0.0.1:9001',
    net_psk=psk,
    net_role='initiator',
    net_peer_public_key=keypair.public_key,
)

# Use as normal
initiator.ingest_raw('{"event": "data"}')

Backpressure Modes

  • "drop_newest" - Reject new events when buffer is full
  • "drop_oldest" - Evict oldest events to make room
  • "fail_producer" - Raise an error

NAT traversal (optimization, not correctness)

Two NATed peers already reach each other through the mesh's routed-handshake path. NAT traversal opens a shorter direct path when the NAT shape allows it; it is never required for connectivity. Every method below is safe to call regardless of NAT type: a failed punch, or a traversal: * RuntimeError, is not a connectivity failure; traffic keeps riding the relay. The whole surface is a no-op when the native module was built without --features nat-traversal: every call raises RuntimeError("traversal: unsupported").

from net import NetMesh

mesh = NetMesh(bind_addr="0.0.0.0:9000", psk="00" * 32)

mesh.reclassify_nat()

klass  = mesh.nat_type()            # "open" | "cone" | "symmetric" | "unknown"
reflex = mesh.reflex_addr()         # "203.0.113.5:9001" or None

observed = mesh.probe_reflex(peer_node_id)   # "ip:port"

# Attempt a direct connection via the pair-type matrix.
# `coordinator` mediates the punch when the matrix picks one.
# Always returns — inspect stats to learn which path won.
mesh.connect_direct(peer_node_id, peer_pubkey_hex, coordinator_node_id)

# Cumulative counters — all int, monotonic.
s = mesh.traversal_stats()
s.punches_attempted   # coordinator mediated a PunchRequest + Introduce
s.punches_succeeded   # ack arrived AND direct handshake landed
s.relay_fallbacks     # landed on the routed path after skip/fail

Operators with a known-public address skip the classifier sweep entirely. The override pins "open" + the supplied address on every capability announcement; call announce_capabilities() afterwards to propagate (the setter resets the rate-limit floor so the next announce is guaranteed to broadcast).

mesh.set_reflex_override('203.0.113.5:9001')
mesh.announce_capabilities(caps)
# later:
mesh.clear_reflex_override()
mesh.announce_capabilities(caps)

Traversal failures surface as RuntimeError with a stable traversal: <kind>[: <detail>] message prefix. The <kind> discriminator is one of reflex-timeout | peer-not-reachable | transport | rendezvous-no-relay | rendezvous-rejected | punch-failed | port-map-unavailable | unsupported. Match on the prefix for machine-readable branching:

try:
    mesh.connect_direct(peer_node_id, peer_pubkey_hex, coord_id)
except RuntimeError as e:
    msg = str(e)
    if msg.startswith("traversal: unsupported"):
        ...   # native module built without --features nat-traversal
    elif msg.startswith("traversal: peer-not-reachable"):
        ...

"unsupported" is the signal that the bindings are linked unconditionally and the native module doesn't have the feature — callers can branch cleanly without probing for symbol presence.

Channels (distributed pub/sub)

Named pub/sub over the encrypted mesh. Publishers register channels with access policy; subscribers ask to join via a membership subprotocol; publish fans payloads out to every current subscriber.

from net import NetMesh, ChannelAuthError, ChannelError

pub = NetMesh('127.0.0.1:9001', '42' * 32)
try:
    pub.register_channel(
        'sensors/temp',
        visibility='global',      # or 'subnet-local' | 'parent-visible' | 'exported'
        reliable=True,
        priority=2,
        max_rate_pps=1000,
    )

    # Subscriber side (after handshake with pub):
    # sub.subscribe_channel(pub.node_id, 'sensors/temp')

    # Fan a payload out to all subscribers.
    report = pub.publish(
        'sensors/temp',
        b'{"celsius": 22.5}',
        reliability='reliable',
        on_failure='best_effort',
        max_inflight=32,
    )
    print(f"{report['delivered']}/{report['attempted']} subscribers received")
finally:
    pub.shutdown()

# Typed errors for ACL outcomes:
# try: sub.subscribe_channel(peer_id, 'restricted')
# except ChannelAuthError: ...   # publisher denied
# except ChannelError: ...       # unknown channel / other rejection

Channel names always cross the binding as strings (not the u16 hash) to avoid ACL bypass via collision. The Python binding does not yet expose a dedicated per-channel receive API; that is a follow-up.

CortEX & NetDb (event-sourced state)

Typed, event-sourced state on top of RedEX — tasks and memories with filterable queries and sync watch iterators. Includes the snapshot_and_watch primitive whose race fix landed in v2, so you can safely "paint what's there now, then react to changes" without losing updates that race during construction.

from net import NetDb, CortexError

db = NetDb.open(origin_hash=0xABCDEF01, with_tasks=True, with_memories=True)
tasks = db.tasks

try:
    seq = tasks.create(1, 'write docs', 100)
    tasks.wait_for_seq(seq)   # block until the fold has applied
except CortexError as e:
    # adapter-level failure (RedEX I/O, fold halted, etc.)
    ...

# Snapshot + watch, one atomic call — no race.
snap, it = tasks.snapshot_and_watch_tasks(status='pending')
print('initial:', len(snap), 'pending tasks')
for batch in it:
    print('update:', len(batch), 'pending tasks')
    if len(batch) == 0:
        it.close()    # idempotent; ends the iterator
        break

db.close()

Standalone adapters

If you only need one model, skip the NetDb facade:

from net import Redex, TasksAdapter

redex = Redex(persistent_dir='/var/lib/net/redex')
tasks = TasksAdapter.open(redex, origin_hash=0xABCDEF01, persistent=True)

MemoriesAdapter exposes the same shape with store / retag / pin / unpin / delete / list_memories / watch_memories / snapshot_and_watch_memories.

Raw RedEX file (no CortEX fold)

For domain-agnostic persistent logs — your own event schema, no fold, no typed adapter — open a RedexFile directly from a Redex. The tail is a sync Python iterator; call close() or let StopIteration fire when the file closes.

from net import Redex, RedexError

redex = Redex(persistent_dir='/var/lib/net/events')
file = redex.open_file(
    'analytics/clicks',
    persistent=True,
    fsync_interval_ms=100,           # or fsync_every_n=1000
    retention_max_events=1_000_000,
)

# Append (or batch-append).
seq = file.append(b'{"url": "/home"}')
# `append_batch` returns the first-seq int of the batch, or `None`
# for an empty input. The `None` return is the explicit "I
# appended nothing" signal — pre-`bugfixes-8` it returned `0`,
# which collided with the legitimate "first event of a non-empty
# batch landed at seq 0" return.
first = file.append_batch([b'{"a": 1}', b'{"a": 2}'])

# Tail — backfills the retained range, then streams live appends.
try:
    for event in file.tail(from_seq=0):
        print(event.seq, bytes(event.payload))
        if should_stop:
            break           # idempotent; ends the iterator via close()
except RedexError as e:
    ...

file.close()

Errors from the RedEX surface raise RedexError (invalid channel name, bad config, append / tail / sync / close failures).

Why snapshot_and_watch_*?

Calling list_tasks() then watch_tasks() takes two independent state reads. A mutation landing between them would be silently lost under the old skip(1) implementation. The atomic primitive returns the snapshot and an iterator seeded so that any divergent initial emission is forwarded through instead of dropped — see docs/STORAGE_AND_CORTEX.md.
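The difference is easy to reproduce with a toy store. The class below is purely illustrative, not the NetDb API; the point is that the snapshot and the subscription registration happen under one lock hold, so no mutation can land between them:

```python
import threading

class Store:
    """Toy event-sourced store illustrating the atomic snapshot+watch."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []
        self._subs = []

    def mutate(self, item):
        with self._lock:
            self._items.append(item)
            for q in self._subs:
                q.append(item)     # fan the update out to watchers

    def snapshot_and_watch(self):
        # Snapshot and subscription registered under ONE lock hold:
        # a mutation either lands in the snapshot or in the queue,
        # never in the gap between two independent calls.
        with self._lock:
            q = []
            self._subs.append(q)
            return list(self._items), q

store = Store()
store.mutate("a")
snap, updates = store.snapshot_and_watch()
store.mutate("b")
print(snap, updates)  # → ['a'] ['b']
```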

Redis Streams consumer-side dedup helper

The Net Redis adapter writes a stable dedup_id field on every XADD entry: {producer_nonce:hex}:{shard_id}:{sequence_start}:{i}. Combined with the bus's persistent producer-nonce path (producer_nonce_path on EventBusConfig), the id is stable across both within-process retries AND cross-process restart — the MULTI/EXEC-timeout race becomes filterable at consume time.
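The id is plain string composition over the four fields, so it can be built or parsed without the library (the helper name below is illustrative):

```python
def make_dedup_id(producer_nonce: bytes, shard_id: int,
                  sequence_start: int, i: int) -> str:
    """Compose the documented {producer_nonce:hex}:{shard_id}:{sequence_start}:{i} id."""
    return f"{producer_nonce.hex()}:{shard_id}:{sequence_start}:{i}"

nonce = bytes.fromhex("00112233445566778899aabbccddeeff")
dedup_id = make_dedup_id(nonce, 0, 1024, 3)
print(dedup_id)  # → 00112233445566778899aabbccddeeff:0:1024:3

# Parsing back: the nonce is hex, the remaining fields are ints.
nonce_hex, shard, seq, idx = dedup_id.split(":")
assert (int(shard), int(seq), int(idx)) == (0, 1024, 3)
```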

RedisStreamDedup is the consumer-side helper, exposed on the net PyO3 module:

from net import RedisStreamDedup
import redis

# ~10k events/sec * 1 min dedup window → ~600,000.
dedup = RedisStreamDedup(capacity=600_000)

r = redis.Redis(host="localhost", port=6379)
cursor = "0"
while True:
    # XRANGE bounds are INCLUSIVE on both ends. After the first
    # page we must use the exclusive form `(<id>` so we don't
    # re-read the entry the cursor points at — a vanilla
    # `min=cursor` loop spins forever once the cursor reaches the
    # tail and the same entry is returned every iteration.
    start = cursor if cursor == "0" else f"({cursor}"
    entries = r.xrange("net:shard:0", min=start, max="+", count=100)
    for entry_id, fields in entries:
        # Advance the cursor for EVERY entry, including skipped ones;
        # otherwise a trailing entry without a dedup_id re-reads forever.
        cursor = entry_id.decode()
        dedup_id = fields.get(b"dedup_id", b"").decode()
        if not dedup_id:
            # No dedup_id → older entry or non-Net producer; skip
            # dedup and process as-is.
            process(entry_id, fields)
            continue
        if not dedup.is_duplicate(dedup_id):
            process(entry_id, fields)
    if not entries:
        break

Surface:

dedup = RedisStreamDedup()                # default capacity 4096
dedup = RedisStreamDedup(capacity=N)      # explicit; 0 → 1
dedup.is_duplicate(id: str) -> bool       # test-and-insert
dedup.len                                 # property — tracked-id count
dedup.capacity                            # property — configured cap
dedup.is_empty                            # property
dedup.clear()                             # reset (e.g. on consumer-group rebalance)

The helper is transport-agnostic — bring your own redis-py / aioredis / equivalent client; it just answers the dedup question against an in-memory LRU. Concurrency: each handle wraps a Rust Mutex<RedisStreamDedup>, so concurrent calls from multiple Python threads are safe but serialize. Production-shape is one helper per consumer thread.
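The semantics are those of a bounded test-and-insert LRU. A pure-Python mirror (illustrative; the shipped helper is native Rust) makes the eviction behaviour concrete:

```python
from collections import OrderedDict

class LruDedup:
    """Pure-Python mirror of the documented RedisStreamDedup semantics."""
    def __init__(self, capacity: int = 4096):
        self.capacity = max(1, capacity)   # documented: 0 → 1
        self._seen = OrderedDict()

    def is_duplicate(self, dedup_id: str) -> bool:
        """Test-and-insert: True if seen within the tracked window."""
        if dedup_id in self._seen:
            self._seen.move_to_end(dedup_id)   # refresh recency
            return True
        self._seen[dedup_id] = None
        if len(self._seen) > self.capacity:
            self._seen.popitem(last=False)     # evict least-recent id
        return False

    def clear(self):
        self._seen.clear()

d = LruDedup(capacity=2)
print(d.is_duplicate("a"), d.is_duplicate("a"))  # → False True
d.is_duplicate("b"); d.is_duplicate("c")         # "a" falls out of the window
print(d.is_duplicate("a"))                       # → False (aged out)
```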

Security Surface (Stage A–E)

The mesh layer surfaces the same identity / capabilities / subnets / channel-auth story that the Rust SDK and the TypeScript / Node SDKs ship. Full staging and rationale: docs/SDK_SECURITY_SURFACE_PLAN.md. Python-binding parity details: docs/SDK_PYTHON_PARITY_PLAN.md.

Identity + permission tokens

Every node has an ed25519 identity; permission tokens are ed25519-signed delegations that authorize a subject to publish / subscribe / delegate / admin on a channel, optionally with further delegation depth.

from net import Identity, parse_token, verify_token, delegate_token

alice = Identity.generate()
bob = Identity.generate()

# Alice issues Bob a subscribe+delegate token good for 5 min, with
# one re-delegation hop remaining. `ttl_seconds=0` raises
# `TokenError` — a zero TTL would mint a born-expired token that
# every receiver would reject as `Expired`, leaving the issuer to
# diagnose the misuse from receiver-side log lines.
token = alice.issue_token(
    subject=bob.entity_id,
    scope=["subscribe", "delegate"],
    channel="sensors/temp",
    ttl_seconds=300,
    delegation_depth=1,
)
assert verify_token(token) is True

# Bob re-delegates to Carol; depth drops to 0 (leaf).
carol = Identity.generate()
child = delegate_token(bob, token, carol.entity_id, ["subscribe"])
assert parse_token(child)["delegation_depth"] == 0

Capability announcements + peer discovery

Announce hardware / software / model / tool / tag fingerprints, then query the local capability index with a filter.

mesh.announce_capabilities({
    "hardware": {
        "cpu_cores": 16,
        "memory_mb": 65536,
        "gpu": {"vendor": "nvidia", "model": "h100", "vram_mb": 81920},
    },
    "models": [{"model_id": "llama-3.1-70b", "family": "llama",
                "context_length": 128_000}],
    "tags": ["gpu", "prod"],
})

gpu_peers = mesh.find_nodes({
    "require_gpu": True,
    "gpu_vendor": "nvidia",
    "min_vram_mb": 40_000,
})

Scoped discovery (reserved scope:* tags)

A provider can narrow who its query result reaches by tagging its CapabilitySet with reserved scope:* tags. Queries call mesh.find_nodes_scoped(filter, scope) to filter candidates. The wire format and forwarders are untouched — enforcement is purely query-side.

# GPU pool advertised to one tenant only.
mesh.announce_capabilities({
    "tags": ["model:llama3-70b", "scope:tenant:oem-123"],
})

# Tenant-scoped query — returns this node + any Global (untagged) peers.
oem_nodes = mesh.find_nodes_scoped(
    {"require_tags": ["model:llama3-70b"]},
    {"kind": "tenant", "tenant": "oem-123"},
)

Accepted scope dict shapes: {"kind": "any"} (default), {"kind": "global_only"}, {"kind": "same_subnet"}, {"kind": "tenant", "tenant": "<id>"}, {"kind": "tenants", "tenants": [...]}, {"kind": "region", "region": "<name>"}, {"kind": "regions", "regions": [...]}. Reserved announcement tags: scope:subnet-local (visible only under same_subnet), scope:tenant:<id>, scope:region:<name> — strictest scope wins. Untagged peers resolve to Global and stay visible under permissive queries. Full design: docs/SCOPED_CAPABILITIES_PLAN.md.
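The query-side resolution can be sketched as a tag-to-scope mapping. The precedence order below (subnet-local over tenant over region) is an assumption drawn from the "strictest scope wins" rule, and the function is illustrative, not the native matcher:

```python
def resolve_scope(tags: list[str]) -> dict:
    """Derive a peer's effective scope from its reserved scope:* tags.
    Untagged peers resolve to global, as documented."""
    if "scope:subnet-local" in tags:
        return {"kind": "same_subnet"}
    for tag in tags:
        if tag.startswith("scope:tenant:"):
            return {"kind": "tenant", "tenant": tag.removeprefix("scope:tenant:")}
    for tag in tags:
        if tag.startswith("scope:region:"):
            return {"kind": "region", "region": tag.removeprefix("scope:region:")}
    return {"kind": "global"}

print(resolve_scope(["model:llama3-70b", "scope:tenant:oem-123"]))
# → {'kind': 'tenant', 'tenant': 'oem-123'}
print(resolve_scope(["gpu"]))  # → {'kind': 'global'}
```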

Capability propagation is multi-hop, bounded by MAX_CAPABILITY_HOPS = 16 with (origin, version) dedup on every forwarder. capability_gc_interval_ms controls both the index TTL sweep and the dedup cache eviction. See docs/MULTIHOP_CAPABILITY_PLAN.md.

Subnets

Nodes can bind to a hierarchical SubnetId (1–4 levels, each 0–255) directly, or derive one from announced tags via a SubnetPolicy.

# Explicit subnet.
mesh = NetMesh("127.0.0.1:9000", PSK, subnet=[3, 7, 2])

# Or derive from tags.
mesh = NetMesh(
    "127.0.0.1:9001", PSK,
    subnet_policy={
        "rules": [
            {"tag_prefix": "region:", "level": 0,
             "values": {"eu": 1, "us": 2, "apac": 3}},
            {"tag_prefix": "zone:", "level": 1,
             "values": {"a": 1, "b": 2, "c": 3}},
        ]
    },
)

Channel authentication

Publishers set publish_caps / subscribe_caps / require_token on register_channel. Subscribers present a PermissionToken via the optional token=bytes kwarg on subscribe_channel.

mesh.register_channel(
    "gpu/jobs",
    subscribe_caps={"require_gpu": True, "min_vram_mb": 16_000},
    require_token=True,
)

# Subscriber side, with a token issued by the publisher:
mesh.subscribe_channel(publisher_node_id, "gpu/jobs", token=token_bytes)

Denied subscribes raise ChannelAuthError (a subclass of ChannelError); malformed tokens raise TokenError whose message has the form "token: <kind>" (invalid_signature, expired, delegation_exhausted, …). Successful subscribes populate an AuthGuard bloom filter on the publisher so every subsequent publish admits the subscriber in constant time. An expiry sweep (default 30 s) evicts subscribers whose tokens age out; a per-peer auth-failure rate limiter throttles bad-token storms. Cross-SDK behaviour is fixed by the Rust integration suite; see tests/channel_auth.rs and tests/channel_auth_hardening.rs.

Compute (daemons + migration)

Run MeshDaemons directly from Python. DaemonRuntime owns the factory table, the per-daemon hosts, and the Registering → Ready → ShuttingDown lifecycle gate that decides when inbound migrations may land. Daemons are any Python object whose process(event) returns a list of bytes/bytearray payloads — the runtime wraps each output in a causal link and forwards it.

Build the native module with the compute feature (maturin picks it up on the default build) and import from net. Full design notes: docs/SDK_COMPUTE_SURFACE_PLAN.md.

from net import DaemonRuntime, NetMesh, Identity, CausalEvent


class EchoDaemon:
    """Stateless echo — ships every event's payload straight back."""

    name = "echo"

    def process(self, event: CausalEvent) -> list[bytes]:
        return [bytes(event.payload)]

    # Optional: snapshot() / restore(state) for migration-capable daemons.


mesh = NetMesh("127.0.0.1:9000", "42" * 32)
rt = DaemonRuntime(mesh)

# 1. Register factories BEFORE flipping the runtime to Ready.
rt.register_factory("echo", lambda: EchoDaemon())

# 2. Ready the runtime — after this point, spawns and inbound
#    migrations are accepted.
rt.start()

# 3. Spawn a daemon; Identity pins the ed25519 keypair so
#    origin_hash / entity_id stay stable across migrations.
identity = Identity.generate()
handle = rt.spawn("echo", identity)
print(f"origin = 0x{handle.origin_hash:08x}")

# 4. Manually feed an event for testing; real delivery happens
#    via the mesh's causal chain.
event = CausalEvent(handle.origin_hash, sequence=1, payload=b"hello")
outputs = rt.deliver(handle.origin_hash, event)

# 5. Clean shutdown.
rt.stop(handle.origin_hash)
rt.shutdown()

process must be synchronous — the core's contract is sync, and the PyO3 bridge re-attaches the GIL for the duration of each call. Raising inside process surfaces as DaemonError on the caller.
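If a daemon's work is naturally async, it has to be driven to completion inside the synchronous process(). One illustrative pattern, assuming no event loop is already running on the delivering thread:

```python
import asyncio

class AsyncBridgeDaemon:
    """process() must stay synchronous, so the async handler is
    driven to completion with asyncio.run on each delivery
    (a sketch, not a prescribed pattern)."""
    name = "async-echo"

    async def _handle(self, payload: bytes) -> list[bytes]:
        await asyncio.sleep(0)         # stand-in for real async work
        return [payload.upper()]

    def process(self, event) -> list[bytes]:
        # Each call spins up and tears down a fresh event loop;
        # fine for modest rates, a pooled loop may suit hot paths.
        return asyncio.run(self._handle(bytes(event.payload)))

class FakeEvent:                       # minimal stand-in for CausalEvent
    payload = b"hello"

print(AsyncBridgeDaemon().process(FakeEvent()))  # → [b'HELLO']
```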

Migration

start_migration(origin_hash, source_node, target_node) orchestrates the six-phase cutover (Snapshot → Transfer → Restore → Replay → Cutover → Complete). The source seals the daemon's ed25519 seed into the outbound snapshot using the target's X25519 static pubkey; the target rebuilds the daemon via the factory registered under the same kind, replays any events that arrived during transfer, then activates.

from net import MigrationError, migration_error_kind

try:
    mig = rt.start_migration(handle.origin_hash, src_node_id, dst_node_id)
    # mig.phase  — "snapshot" | "transfer" | "restore" | ...
    # mig.source_node / mig.target_node
    mig.wait()                      # blocks to completion
except MigrationError as e:
    kind = migration_error_kind(e)
    if kind == "not-ready":               ...  # target start() didn't run
    elif kind == "factory-not-found":     ...  # target missing this kind
    elif kind == "compute-not-supported": ...  # target has no DaemonRuntime
    elif kind == "state-failed":          ...  # snapshot / restore threw
    elif kind == "identity-transport-failed": ...  # seal / unseal failed
    # ...see SDK_COMPUTE_SURFACE_PLAN.md for the full enum

start_migration_with(origin, src, dst, opts) exposes options such as seal_seed=False for test scenarios. On the target node, call rt.register_migration_target_identity(kind, identity) before any migration of that kind lands; without it the runtime rejects sealed-seed envelopes with migration_error_kind == "identity-transport-failed".

Surface at a glance

| Method | Description |
| --- | --- |
| DaemonRuntime(mesh) | Construct against an existing NetMesh |
| rt.register_factory(kind, fn) | Install a factory (before start()) |
| rt.start() / rt.shutdown() | Flip the lifecycle gate |
| rt.spawn(kind, identity, config=None) | Spawn a local daemon |
| rt.spawn_from_snapshot(kind, identity, bytes, config=None) | Rehydrate from a snapshot |
| rt.stop(origin) | Stop a local daemon |
| rt.snapshot(origin) | Capture bytes for persistence / migration |
| rt.deliver(origin, event) | Feed an event (returns list[bytes]) |
| rt.start_migration(origin, src, dst) | Orchestrate a live migration |
| rt.register_migration_target_identity(kind, id) | Pin the unseal keypair on the target for kind |
| handle.origin_hash / entity_id / stats() | Per-daemon identity + stats |
| DaemonError / MigrationError | Typed exceptions; migration_error_kind(e) parses e.kind |

Compute Groups (Replica / Fork / Standby)

HA / scaling overlays on top of DaemonRuntime. Build the native module with the groups feature (implies compute) to expose ReplicaGroup, ForkGroup, StandbyGroup, and the GroupError exception class.

  • ReplicaGroup — N interchangeable copies of a daemon. Deterministic identity from group_seed + index, so a replacement respawned on another node has a stable origin_hash. Load-balances inbound events across healthy members; auto-replaces on node failure.
  • ForkGroup — N independent daemons forked from a common parent at fork_seq. Unique keypairs, shared ancestry via a verifiable ForkRecord.
  • StandbyGroup — active-passive replication. One member processes events; standbys hold snapshots and catch up via sync_standbys(). On active failure the most-synced standby promotes and replays the events buffered since the last sync.

from net import (
    DaemonRuntime, ForkGroup, GroupError, ReplicaGroup, StandbyGroup,
    group_error_kind,
)

rt = DaemonRuntime(mesh)
rt.register_factory("counter", lambda: CounterDaemon())

# --- ReplicaGroup ----------------------------------------------------
replicas = ReplicaGroup.spawn(
    rt, "counter",
    replica_count=3,
    group_seed=bytes([0x11] * 32),
    lb_strategy="consistent-hash",   # or "round-robin" / "least-load"
                                     #    / "least-connections" / "random"
)
origin = replicas.route_event({"routing_key": "user:42"})
rt.deliver(origin, event)
replicas.scale_to(5)                 # grow
replicas.on_node_failure(failed_node_id)   # respawn elsewhere

# --- ForkGroup -------------------------------------------------------
forks = ForkGroup.fork(
    rt, "counter",
    parent_origin=0xABCDEF01,
    fork_seq=42,
    fork_count=3,
    lb_strategy="round-robin",
)
assert forks.verify_lineage()
for record in forks.fork_records():
    print(record["forked_origin"], record["fork_seq"])

# --- StandbyGroup ----------------------------------------------------
hot = StandbyGroup.spawn(
    rt, "counter",
    member_count=3,                  # 1 active + 2 standbys
    group_seed=bytes([0x77] * 32),
)
rt.deliver(hot.active_origin, event)
hot.sync_standbys()                  # periodic catchup
# On active-node failure:
# new_origin = hot.on_node_failure(failed_node_id)  # auto-promotes

Typed errors

Failures raise GroupError (a subclass of DaemonError). Use group_error_kind(e) to parse the discriminator from the Rust side's daemon: group: <kind>[: detail] message prefix:

from net import GroupError, group_error_kind

try:
    ReplicaGroup.spawn(rt, "never-registered",
                       replica_count=2, group_seed=bytes(32))
except GroupError as e:
    kind = group_error_kind(e)
    if kind == "not-ready":               ...  # runtime.start() didn't run
    elif kind == "factory-not-found":     ...  # kind wasn't registered
    elif kind == "no-healthy-member":     ...  # routed on an all-down group
    elif kind == "invalid-config":        ...  # e.g. replica_count == 0
    elif kind in ("placement-failed",
                  "registry-failed",
                  "daemon"):              ...  # core failure — read e args

Full staging, wire formats, and rationale: docs/SDK_GROUPS_SURFACE_PLAN.md. Core semantics live in the main README.md#daemons.

Performance Tips

  1. Use ingest_raw() for maximum throughput - Pass pre-serialized JSON strings
  2. Use ingest_raw_batch() for bulk operations - Reduces per-call overhead
  3. Increase ring_buffer_capacity - Larger buffers handle bursts better
  4. Match num_shards to CPU cores - Default is optimal for most cases

License

Apache-2.0


File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp314-cp314-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 a951c3001a85cf125ecf18e29ed0d6aeb64a22f0408ac21ac85c1e3cfb596521
MD5 a91a239226b3fd79e1c1eb7ae2cd97f9
BLAKE2b-256 ffb39f8a8ed422f580a7c114071f09d7e60bec6add6f04a30f1027647c180ee7

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp314-cp314-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp314-cp314-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp314-cp314-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 28ffcbde33f878dfe14d40d2773c2b4be09000e91e3a4329931f1ee6b028a593
MD5 5645c7fc15b3bbb57490238609197617
BLAKE2b-256 d97cdadad3e35d469cce191adfa5f8932955de5b3a10ce150883ee668d35e0d4

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp314-cp314-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313t-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313t-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 c631a3c78197cba70960d5938c125a27f4b6a8bab4919bebee3183a7e88c572f
MD5 ec7d7c3b9512499b98100c14c43f19ff
BLAKE2b-256 29f35073d35b9d9890656b3562764d60bdab126ecddcec5f7d1ed3135f444ab0

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313t-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313-win_amd64.whl.

File metadata

  • Download URL: ai2070_net-0.9.0-cp313-cp313-win_amd64.whl
  • Upload date:
  • Size: 1.6 MB
  • Tags: CPython 3.13, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 1c67652e6f1a1243297ce4acfb7f81deb23ae9f32c16bf1781f647e364c4b581
MD5 11b38b8cfc71617ee125ab4c336d70f5
BLAKE2b-256 d4c799ada01f154aa7783fa75b98b6ba2e435af6cd30edf7f2c69b18bd283dfd

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 40e4363162a6681cb67d48528a26afcca481a764d738f7340e3aae831fcf11eb
MD5 bc5d0d89a6c184d01c710f4f1f126e73
BLAKE2b-256 5a021e30b139b44128313f1d0f7a83c20114e73c82e266d5518cce8490dc261b

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 1a0904ffede2e3bb522d7d1f6f6af436689dab39021729f80a7f5db6c8ac958c
MD5 67e54cb68c3ccb36ca8905f880ca6f13
BLAKE2b-256 8f6ca6d1ee6f61db0eda9f10d01de94a54433609de52aa176a7a7a24ad8b6e8c

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 695a49af1a36a47f16f4424d5b67c07dc91ff7f92f37554bf7e7618aed95251b
MD5 00271d007dfad066aa521a61388304c6
BLAKE2b-256 f486d311de0f15059a59bc30ab1736b52cbc6dc703c243063aecad7f9b483779

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp313-cp313-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp313-cp313-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 4e6d8e4f2ef64fbc1dbbc51346bf82ce3ae037f2e5331b4ecad9bf9d6f46bd58
MD5 e410cbbe770429363ccfcf28c571ac45
BLAKE2b-256 a1b62498498ec2b095775d6c90ae794c39cf8e5c0ea849e7229f110cfcdd277c

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp313-cp313-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: ai2070_net-0.9.0-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 1.6 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for ai2070_net-0.9.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 808e5027bd2490dc96f6dc1698ef6ca116e5b5a45e69e483b722f13cc0070323
MD5 df82796f7ca53b966dd479d2f59e796a
BLAKE2b-256 a3db8f9326c1cb51eb7cc5a1de6f2323e44c6df725b2da0f2f7964b23551ba1e

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp312-cp312-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 0b08c461d54715a1ec5bd82ce202a6bb1ad2ae2280e0730e22146703aa03de6b
MD5 04a7e9e302613ecb2e75fa19a35030bb
BLAKE2b-256 c5c6ed705b28d7e2c4378608db8f4947a2885400f7a8db3bdefddceed029790d

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 224e7dc1473180c2d412c1c27d54a35a779037df9bda4f6b0003357f39f1f9c5
MD5 cb7c99f3516c94cc24a49dfbdebafd64
BLAKE2b-256 269b7c85231aaef8cbeb359bc77ff53de12cda9d79a2e21ae8d0fd61198b688b

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp312-cp312-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp312-cp312-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp312-cp312-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 50b8433de386888a4b0c347cd163219bcfa1d515abaa244aef190a0dafd11c07
MD5 f7f740deb7cbee29cd6c3594c39b7e51
BLAKE2b-256 ee0199aeab6aa8a0ff22e44b6ce842c50029999aad5ef1474c1081ff269d78b4

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp312-cp312-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp312-cp312-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp312-cp312-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 817a79954173db4de08e325e0e028dacedb2a4e3461acf16722e2150d126a16e
MD5 45f0c5ddf13061c0390ad3a8d2a091ce
BLAKE2b-256 e92289f588176333328909e64083c6103c627f2c912d7d246f2f6389f2db2535

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp312-cp312-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: ai2070_net-0.9.0-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 1.5 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for ai2070_net-0.9.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 370c0c75d58962af2d4cd466cc33f9b03b6d0edba83aed38d9ba7362145ffee7
MD5 ed39c63af69d2809b66e79dee6669b4f
BLAKE2b-256 3c2dee589d62f3d57669bf6d9600940ce7e980933cbb1a4b3c096d85c04ef853

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp311-cp311-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 43970ead6cef14d27cca12c4bf4a715560553e18518ea6dee55d34e142357613
MD5 addeca88ad69dffca25d33b1a3bb988c
BLAKE2b-256 848af9deb214996f96652b3e9e6f7da951427a0d1454223fa81dc288c679dd70

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 b574e5438661324e3336ba6476b8f63fbf82b4bdbea1cb78cd75a45511264563
MD5 5676b064d4ecb2dacdaa5a3d0c8aa004
BLAKE2b-256 696618d14aa27d02af240d1a7d31dead8592bf639d9d3dbcb69e84492c6b1c89

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp311-cp311-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp311-cp311-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp311-cp311-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 fd06255af59c0f77f03ea2b24c824c67f3ba0d3051410fe578ef1c1b731f9e3f
MD5 7904edb5075c504acd15ba6dc3bdcade
BLAKE2b-256 bfd0c06519cd8ac4dccdfcff0122d0b8d2d100df013fb93c2b21453aa475305d

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp311-cp311-macosx_11_0_arm64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 abc863243530208a9267e0480395bed737d2708a84b7b2cae5e7df50d9f39296
MD5 26332b9ce7cf7f22a7bfae460f5ae69d
BLAKE2b-256 d4ebbaa6ca8759a749c50cb8ab3b98e34e7c782786e6c24b34a5b0688d921103

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: ai2070_net-0.9.0-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 1.5 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for ai2070_net-0.9.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 2293c8e22477093041400dc4648d080bab11c12962985c099ed0af0a0afddeaa
MD5 1f7e0bf8d526e310ace789aaf7e841b2
BLAKE2b-256 566e9c01f284ebfde417627bcee1d27006257543c53a50c2d7312410fc479703

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp310-cp310-win_amd64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 79e4303102e5d80d9dceba888aba0a740f678b98d6cfff66af246e6496d44db2
MD5 8bd200ec5c80d8259350da3b03d66b35
BLAKE2b-256 49cef3e8cdbed0ef74ff7ffb3dbf27f7bdeb36df4e83411afd967519d01d0226

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_x86_64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 f8ffe44d33f0922ff8063bb1d8358864f636462d6bfe12da6645cc18e5070a2a
MD5 16822dbe74c151983baf3cdedd590dfe
BLAKE2b-256 237d0814b7f8d39b8f88efac686f1c945b6b4ee9ea4e8aa06922a3ca18e5a811

See more details on using hashes here.

Provenance

The following attestation bundles were made for ai2070_net-0.9.0-cp310-cp310-manylinux_2_28_aarch64.whl:

Publisher: release-python.yml on ai-2070/net

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
