
Taichi Forge

A community-maintained fork of taichi focused on compile-time performance, modern toolchains (LLVM 20, VS 2026, Python 3.10-3.14), and tighter compile-time safety rails.


Install

pip install taichi-forge

The import name is unchanged — existing code continues to work as-is:

import taichi as ti
ti.init(arch=ti.cuda)

Every public API from upstream Taichi 1.7.4 that we still ship behaves the same way.


Why a fork?

Upstream Taichi 1.7.4 shipped against LLVM 15, Python ≤ 3.12, and the Visual Studio 2019/2022 toolchain. Since then the JIT ecosystem has moved on:

  • LLVM 15 no longer compiles cleanly with current CUDA / NVPTX toolchains.
  • Python 3.13 dropped distutils; 3.14 removes further deprecated stdlib APIs.
  • Modern Windows developer setups default to VS 2026 (MSVC 14.50+), which rejects some headers hard-wired in the original build scripts.

Taichi Forge is the rolling result of those maintenance upgrades, along with compile-time performance improvements that reduce cold-start and warm-start latency.


Headline feature: sparse SNode on Vulkan

Vanilla Taichi 1.7.4's Vulkan/SPIR-V backend supports only dense + root. Every other SNode type — pointer, bitmasked, dynamic, hash — fails with TI_NOT_IMPLEMENTED on Vulkan, blocking macOS-via-MoltenVK, Linux-AMD-without-ROCm, and mobile / embedded users from running sparse data structures at all.

Taichi Forge 0.3.0 ships pointer / bitmasked / dynamic on Vulkan with end-to-end three-backend (cpu / cuda / vulkan) numerical equivalence. This is the fork's headline functional differentiator and is fully validated under the regression matrix in tests/p4/ (vulkan_pointer_*.py, vulkan_bitmasked_*.py, vulkan_dynamic_basic.py, g2_pool_fraction.py, g4_probe.py, g8_cache_compat.py).

| SNode type | vanilla 1.7.4 Vulkan | Taichi Forge 0.3.0 Vulkan |
|---|---|---|
| dense | ✅ | ✅ |
| bitmasked | ❌ | ✅ |
| pointer | ❌ | ✅ |
| dynamic | ❌ | ✅ |
| quant_array / bit_struct | ❌ | ⚠️ experimental (read + write + concurrent ti.atomic_add via CAS-loop; opt-in via vulkan_quant_experimental=True) |
| hash | ❌ | ❌ (default-disabled in upstream Python frontend; see feasibility note below) |

Highlights:

  • No new public API — existing ti.root.pointer(...).dense(...).place(...) / ti.activate / ti.deactivate / ti.is_active / ti.length / ti.append Just Work on Vulkan with the same semantics as the LLVM backends.
  • Static capacity by design — Vulkan has no device-side dynamic allocator, so each pointer / dynamic SNode reserves its worst-case cell pool in the root buffer at compile time. Out-of-capacity activates degrade silently rather than crashing (a cap_v guard verified by vulkan_pointer_race.py).
  • Memory knob TI_VULKAN_POOL_FRACTION — opt-in env var (∈ (0, 1], default 1.0) shrinks the pointer pool to max(num_cells_per_container, round(total × fraction)). Combine with the G1.b deactivate-freelist (always on) to handle steady-state working sets far below the worst case. Verified at 0.25 against three-backend equivalence.
  • dynamic uses a flat-array + length-suffix protocol on Vulkan instead of LLVM's chunk-list (no shader-side malloc exists). ti.append / ti.length / ti.deactivate preserve full LLVM semantics; total capacity is the static N.
  • Offline cache cross-version safety — corrupt or version-mismatched ticache.tcb automatically triggers fallback recompile, never crashes (verified end-to-end by g8_cache_compat.py).
  • Experimental quant_array / bit_struct on Vulkan — opt-in via ti.init(arch=ti.vulkan, vulkan_quant_experimental=True) or env var TI_VULKAN_QUANT=1. With the gate ON, QuantInt / QuantFixed member reads, writes, and concurrent multi-thread ti.atomic_add (via SPIR-V OpAtomicCompareExchange spin RMW) are byte-equivalent to the LLVM backends, including multi-field BitpackedFields packing (verified by tests/p4/g9_quant_baseline.py MPM-style 11/11/10 packing, tests/p4/g9_quant_array_baseline.py, and the same-word race baseline tests/p4/g9_quant_atomic_race.py). Default OFF preserves vanilla 1.7.4 behaviour exactly. QuantFloat shared-exponent and the non-add atomic ops (atomic_min/max/and/or/xor, identical restriction to LLVM) are explicitly out of scope and raise a clear TI_NOT_IMPLEMENTED rather than miscompiling.
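
The static-capacity sizing rule above can be sketched in plain Python. This is an illustrative model of the formula max(num_cells_per_container, round(total × fraction)) quoted in the TI_VULKAN_POOL_FRACTION bullet; the function name is hypothetical and the real computation lives in the fork's Vulkan runtime.

```python
def pointer_pool_capacity(total_num_cells_from_root: int,
                          num_cells_per_container: int,
                          fraction: float = 1.0) -> int:
    """Model of the TI_VULKAN_POOL_FRACTION rule: shrink the pool to
    round(total * fraction), but never below one container's worth of
    cells."""
    if not (0.0 < fraction <= 1.0):
        raise ValueError("TI_VULKAN_POOL_FRACTION must be in (0, 1]")
    return max(num_cells_per_container,
               round(total_num_cells_from_root * fraction))

# Default fraction 1.0: the full worst-case root reservation.
print(pointer_pool_capacity(1024, 64))        # 1024
# Memory-tight steady state at the validated 0.25 setting.
print(pointer_pool_capacity(1024, 64, 0.25))  # 256
# The floor kicks in: one container even for tiny fractions.
print(pointer_pool_capacity(1024, 64, 0.01))  # 64
```

Activations beyond this capacity degrade silently (the cap_v guard), so size the fraction against your steady-state working set, not the transient peak.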

📖 Full usage guide and limitations (bilingual): docs/forge/sparse_snode_on_vulkan.en.md / docs/forge/sparse_snode_on_vulkan.zh.md — covers static-capacity semantics, the TI_VULKAN_POOL_FRACTION knob, dynamic-protocol differences, troubleshooting, and the verification matrix.

📖 All fork-only knobs (compile / runtime / architecture / modernization options): docs/forge/forge_options.en.md / docs/forge/forge_options.zh.md.

hash SNode remains permanently deferred (no real-time physics or rendering pipeline depends on it; see the user guide §6.1 for the survey). quant_array / bit_struct ship experimental scaffolding on Vulkan in 0.3.0 — frontend gate is opt-in via vulkan_quant_experimental=True; codegen is incremental. See user guide §7.

Quick example (Vulkan-on-anything)

import taichi as ti
ti.init(arch=ti.vulkan)              # works on macOS via MoltenVK, Linux-AMD without ROCm, etc.

x = ti.field(ti.f32)
ti.root.pointer(ti.ij, 32).dense(ti.ij, 8).place(x)

@ti.kernel
def fill():
    for i, j in ti.ndrange(256, 256):
        if (i + j) % 17 == 0:
            x[i, j] = i * 0.1 + j * 0.01

fill()
print(x.to_numpy().sum())            # identical to ti.cpu / ti.cuda

Supported toolchain

| Area | Requirement |
|---|---|
| Python | 3.10 – 3.14 (3.9 dropped) |
| Windows MSVC | VS 2026 (Visual Studio 17 2026, MSVC 14.50+) |
| LLVM | 20.1.7 (included in the wheel) |
| CMake | 3.20+ |
| CUDA (optional) | NVCC 12.x |

Validated backends

End-to-end tested on Linux x86_64 and Windows x86_64:

  • ✅ CPU (LLVM JIT)
  • ✅ CUDA
  • ✅ Vulkan
  • ✅ OpenGL / GLES

Not yet regression-tested since the LLVM 20 migration:

  • ⚠️ macOS (Apple Silicon / Intel) — Metal backend
  • ⚠️ AMDGPU backend
  • ⚠️ Android ARM64 (C-API)

Patches and reports welcome.


New APIs and settings (fork-only)

All additions are strictly opt-in; default values preserve bit-identical behaviour vs. upstream 1.7.4.

New functions

| Symbol | Purpose |
|---|---|
| ti.compile_kernels(kernels) | Pre-compile a list of kernels on a background thread pool before the hot loop. Accepts decorated kernels or (kernel, args_tuple) pairs. Returns the number of kernels submitted. |
| ti cache warmup script.py | CLI command — runs script.py once with the offline cache forced on, warming up kernel artifacts for subsequent cold starts. |
| ti.compile_profile() | Context manager — on exit, prints a per-pass timing report and optionally writes a CSV / Chrome trace. |
| @ti.kernel(opt_level=...) | Per-kernel LLVM optimization level override ("fast" / "balanced" / "full" or 0–3). Cache key is isolated per override. |

ti.init(...) / CompileConfig knobs

| Kwarg | Default | Purpose |
|---|---|---|
| compile_tier | "balanced" | "fast" lowers LLVM to -O0 (floor -O1 on NVPTX/AMDGCN) and the SPIR-V optimizer to level 1. "full" preserves pre-fork behaviour. |
| llvm_opt_level | -1 (use tier) | Explicit LLVM -O override (0–3). |
| spv_opt_level | -1 (use tier) | Explicit SPIR-V spirv-opt optimization level override. |
| num_compile_threads | logical-core count | Thread pool size for ti.compile_kernels. |
| unrolling_hard_limit | 0 (off) | Per-ti.static(for ...) unroll iteration cap. Aborts with TaichiCompilationError instead of silently burning seconds. |
| unrolling_kernel_hard_limit | 0 (off) | Total unroll iteration cap across a single kernel. |
| func_inline_depth_limit | upstream default | Hard cap on @ti.func inline recursion depth. |
| cache_loop_invariant_global_vars | False | Set True to opt in to SNode loop-invariant caching in hot loops. (Default matches vanilla 1.7.4.) |
| use_fused_passes | False | Enables a pipeline_dirty short-circuit for redundant full_simplify invocations. Numerically bit-identical to off. |
| tiered_full_simplify | True | Splits full_simplify into a local fixed-point pass followed by a single global round per iteration. Set False to match the legacy cadence. |
| compile_dag_scheduler | True | Anti-saturation scheduler for ti.compile_kernels batches; balances the inner LLVM thread pool against the outer kernel pool. Set False for the legacy two-tier model. |
| spirv_parallel_codegen | False | Opt-in task-level parallel SPIR-V codegen per kernel. |
| spirv_disabled_passes | [] | Per-call disable list for individual spirv-opt passes (e.g. ["loop-unroll"]). |
| auto_real_function | False | Auto-promotes expensive @ti.func instances to is_real_function=True (LLVM-only, non-autodiff). |
| auto_real_function_threshold_us | 1000 | Promotion threshold in microseconds of estimated compile cost. |
| cuda_sparse_pool_size_GB | 0.0 (use device_memory_GB) | CUDA-only. Explicit override for the sparse SNode dynamic-allocation pool, in GiB. 0 (the default) preserves vanilla 1.7.4 semantics: the pool size equals device_memory_GB (or device_memory_fraction × total VRAM). Set a positive value to size the sparse pool independently of the dense allocation. |
| cuda_sparse_pool_auto_size | False | CUDA-only, opt-in. When True and neither device_memory_fraction nor cuda_sparse_pool_size_GB is set, the sparse pool is auto-sized from the SNode tree (capped by device_memory_GB, floored at cuda_sparse_pool_size_floor_MiB). The default False preserves vanilla 1.7.4 behaviour, where device_memory_GB is the actual pool size, not just a cap. Verify the heuristic covers your workload's NodeAllocator activation peak before enabling. |
| cuda_sparse_pool_size_floor_MiB | 128 | CUDA-only. Floor (in MiB) for the auto-sized sparse pool when cuda_sparse_pool_auto_size=True. Each NodeAllocator chunk is ~16 MiB. No-op when auto-sizing is off. |
| spirv_listgen_subgroup_ballot | False | Vulkan/SPIR-V only. Aggregates the per-thread OpAtomicIAdd into one subgroup-ballot atomic per active subgroup in the listgen kernel, reducing atomic contention on dense-active sparse struct-for. Output SPIR-V differs and is keyed into the offline cache hash. |
| listgen_static_grid_dim | False | CUDA / AMDGPU only. Launches sparse-listgen kernels with a grid_dim derived from the static upper bound on parent-element count, eliminating idle blocks on shallow sparse trees. Vulkan already computes the equivalent quantity, so this flag is a no-op for SPIR-V. |
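
The precedence among the three CUDA sparse-pool knobs can be summarized as a small pure-Python sketch. The helper name is hypothetical and the real logic lives in the fork's CUDA runtime; the precedence itself is the one documented in the table above.

```python
def cuda_sparse_pool_GiB(device_memory_GB: float,
                         pool_size_GB: float = 0.0,
                         auto_size: bool = False,
                         tree_estimate_GB: float = 0.0,
                         floor_MiB: int = 128) -> float:
    """Sizing precedence, per the table above:
    1. an explicit cuda_sparse_pool_size_GB override wins outright;
    2. opt-in auto-sizing uses the SNode-tree estimate, floored at
       cuda_sparse_pool_size_floor_MiB and capped by device_memory_GB;
    3. otherwise (vanilla 1.7.4) the pool simply equals device_memory_GB."""
    if pool_size_GB > 0:
        return pool_size_GB
    if auto_size:
        return min(device_memory_GB, max(floor_MiB / 1024, tree_estimate_GB))
    return device_memory_GB

print(cuda_sparse_pool_GiB(4.0))                                      # 4.0, vanilla
print(cuda_sparse_pool_GiB(4.0, pool_size_GB=1.0))                    # 1.0, override
print(cuda_sparse_pool_GiB(4.0, auto_size=True, tree_estimate_GB=0.05))  # 0.125, floored
print(cuda_sparse_pool_GiB(4.0, auto_size=True, tree_estimate_GB=8.0))   # 4.0, capped
```

Note how with auto-sizing off (the default), device_memory_GB is the pool size itself, which is exactly the 1.7.4 semantics the 0.3.7 hotfix restores.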

Compatibility note

  • SNode.snode_tree_id — backported from upstream master (not in 1.7.4 release); available on all backends.
  • offline_cache_l_sem — internal/testing flag, default off. Not for production use.

Quick start

import numpy as np
import taichi as ti

ti.init(arch=ti.cuda, compile_tier="fast")

@ti.kernel
def add(a: ti.types.ndarray(), b: ti.types.ndarray(), c: ti.types.ndarray()):
    for i in a:
        c[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
c = np.empty_like(a)
add(a, b, c)

Pre-compiling a batch of kernels (fork-only)

import taichi as ti
ti.init(arch=ti.cuda)

@ti.kernel
def k1(x: ti.types.ndarray()): ...
@ti.kernel
def k2(x: ti.types.ndarray(), y: ti.types.ndarray()): ...

# Specialize + compile both on the thread pool before the hot loop.
ti.compile_kernels([k1, k2])

Command-line cache warmup (fork-only)

ti cache warmup train.py -- --epochs 1
# Subsequent `python train.py` runs start with a populated offline cache.

Building from source

git clone https://github.com/fancifulland2718/taichi-forge.git taichi
cd taichi
python -m pip install -r requirements_dev.txt
python -m pip install -e . --no-build-isolation -v

The build is driven entirely by pyproject.toml / scikit-build-core. On Windows, build a local LLVM 20 snapshot first:

.\scripts\build_llvm20_local.ps1   # produces dist\taichi-llvm-20\

Versioning

Taichi Forge uses its own SemVer track starting at 0.1.2. Fork release numbers do not match upstream taichi versions.

  • 0.1.x — LLVM 20 + VS 2026 + Python 3.14 + initial compile-performance improvements. Backends: Linux/Windows x86_64, CUDA, Vulkan, OpenGL, GLES, CPU.
  • 0.2.x — deeper compile-time, runtime cache, and toolchain modernization. Stabilization line, superseded by 0.3.0.
  • 0.3.x — sparse SNode (pointer / bitmasked / dynamic) on Vulkan + experimental quant_array scaffolding on Vulkan. Current line.

Release notes

0.3.7 (current) — sparse-pool sizing hotfix

Reverts the default behaviour change introduced in 0.3.5/0.3.6 that broke device_memory_GB semantics for sparse workloads.

Bug fix

  • 0.3.5 and 0.3.6 unconditionally auto-sized the CUDA sparse pool from the SNode tree, treating device_memory_GB as a cap rather than the actual pool size. On MPM-shaped sparse trees the heuristic produced pools too small to accommodate runtime NodeAllocator activation, causing silent OOM on from_numpy / first-write kernels even when the user explicitly raised device_memory_GB.
  • Auto-sizing is now opt-in via the new cuda_sparse_pool_auto_size=True kwarg (default False). Default behaviour matches vanilla taichi 1.7.4 exactly: the sparse pool size equals device_memory_GB (or device_memory_fraction × total_VRAM when set).
  • cuda_sparse_pool_size_GB > 0 continues to act as an explicit override; it bypasses every other sizing path.
  • cuda_sparse_pool_size_floor_MiB is now a no-op when cuda_sparse_pool_auto_size=False (the default).

Compatibility

  • Default device_memory_GB semantics restored to vanilla 1.7.4.
  • Users who relied on the 0.3.5/0.3.6 implicit auto-sizing should add cuda_sparse_pool_auto_size=True to their ti.init(...) call.
  • Public Python and C-API surfaces unchanged versus 0.3.0.

0.3.5 / 0.3.6

Maintenance line on top of 0.3.0; introduced new sparse-pool sizing knobs and listgen optimisation flags. Note: the implicit sparse-pool auto-sizing default introduced in this line was reverted in 0.3.7 (see above) because it silently changed device_memory_GB semantics for sparse workloads.

Sparse struct-for performance (still active in 0.3.7)

  • New spirv_listgen_subgroup_ballot knob (Vulkan/SPIR-V, opt-in) — aggregates the per-thread atomic in the listgen kernel into one subgroup-ballot atomic per subgroup, reducing contention on dense-active sparse struct-for. Output SPIR-V is offline-cache-keyed.
  • New listgen_static_grid_dim knob (CUDA / AMDGPU, opt-in) — derives sparse-listgen grid_dim from the static parent-element upper bound, eliminating idle blocks on shallow sparse trees. No-op on Vulkan (which already computes the equivalent quantity).
  • Both flags default OFF; output is bit-identical to the legacy path with the flag off, and offline-cache-keyed when on.
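
The effect of the subgroup-ballot aggregation can be illustrated with a plain-Python simulation (a model only, not the SPIR-V codegen): instead of every active thread issuing its own atomic add to the list counter, each subgroup ballots its active lanes and one elected lane adds the popcount once.

```python
def listgen_atomics(active_mask, subgroup_size=32):
    """Count list-counter atomics for a simulated listgen pass:
    legacy path = one atomic per active thread; ballot path = one
    atomic per subgroup that has any active lane."""
    per_thread = sum(active_mask)              # legacy: OpAtomicIAdd per thread
    ballot_atomics = 0
    appended = 0
    for base in range(0, len(active_mask), subgroup_size):
        lanes = active_mask[base:base + subgroup_size]
        count = sum(lanes)                     # ballot + popcount of active lanes
        if count:
            ballot_atomics += 1                # elected lane adds `count` once
            appended += count
    return per_thread, ballot_atomics, appended

mask = [i % 3 == 0 for i in range(128)]        # a dense-active pattern
print(listgen_atomics(mask))                   # (43, 4, 43)
```

Same 43 elements appended either way, but 4 atomics instead of 43, which is where the contention reduction on dense-active sparse struct-for comes from.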

0.3.0

First release with sparse SNode on Vulkan as a public feature. Inherits the full 0.2.x compile-time, runtime-cache, IR-pass, and dependency-modernization stack (every 0.2.x knob remains available and bit-identical to 0.2.4 with defaults off).

Sparse SNode on Vulkan (headline functional differentiator)

  • pointer / bitmasked / dynamic SNodes now run end-to-end on the Vulkan/SPIRV backend, with three-backend (cpu / cuda / vulkan) numerical equivalence verified across 30+ tests in tests/p4/.
  • Static-capacity model: pool size = total_num_cells_from_root (worst case), no shader-side dynamic allocator. Out-of-capacity activates degrade silently via the cap_v guard rather than crashing the device.
  • New env var TI_VULKAN_POOL_FRACTION (∈ (0, 1], default 1.0) shrinks per-pointer pool capacity for memory-tight steady-state workloads. Combined with the deactivate-freelist (always on), supports re-activation cycles without leaking root-buffer space.
  • dynamic SNode uses a flat-array + length-suffix protocol on Vulkan (length atomic-stored at cell_stride * N offset of each container) — full LLVM ti.append / ti.length / ti.deactivate semantics preserved, no chunk-list.
  • Offline cache cross-version safety: corrupt or version-mismatched ticache.tcb triggers fallback recompile, never crashes (validated by g8_cache_compat.py three-phase test).
  • Build-time guards: TI_VULKAN_POINTER / TI_VULKAN_DYNAMIC / TI_VULKAN_POINTER_POOL_FRACTION CMake flags (all default ON) allow byte-for-byte revert to vanilla 1.7.4 behaviour for regression bisecting.
  • hash SNode remains unimplemented on every backend in this fork — vanilla taichi 1.7.4 itself default-disables ti.root.hash(...) in the Python frontend, and no real-world demo currently depends on it. See the user guide §6 for substitutes (pointer + TI_VULKAN_POOL_FRACTION, or user-level hash + dense buckets).
  • Full user guide (limitations, env vars, troubleshooting, verification matrix): docs/forge/sparse_snode_on_vulkan.en.md / docs/forge/sparse_snode_on_vulkan.zh.md.
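
The flat-array + length-suffix protocol can be modelled in plain Python. This is an illustrative sketch of the container layout described above (element slots followed by a length word at offset N); on device the length slot is updated with SPIR-V atomics, and the overflow behaviour shown here is the silent degradation the cap guard provides.

```python
class VulkanDynamicModel:
    """Model of one dynamic container: N element slots followed by the
    length word at index N (cell_stride * N in the real root buffer)."""
    def __init__(self, n: int):
        self.n = n
        self.buf = [0] * (n + 1)       # slots 0..n-1, length at index n

    def append(self, value) -> int:
        idx = self.buf[self.n]         # atomic fetch on device
        if idx < self.n:               # static capacity: drop past N silently
            self.buf[idx] = value
            self.buf[self.n] = idx + 1
        return idx

    def length(self) -> int:
        return self.buf[self.n]

    def deactivate(self):
        self.buf[self.n] = 0           # ti.deactivate resets the length word

d = VulkanDynamicModel(4)
for v in (10, 20, 30, 40, 50):         # fifth append exceeds the static N
    d.append(v)
print(d.length(), d.buf[:4])           # 4 [10, 20, 30, 40]
```

ti.append / ti.length / ti.deactivate keep their LLVM semantics under this layout; only the backing storage differs (no chunk-list, since no shader-side malloc exists).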

Compile-time performance

  • Fused-pass driver: use_fused_passes adds a pipeline_dirty short-circuit around full_simplify so that no-op iterations are skipped. Measured ~48.6% of full_simplify invocations are observably no-op on representative workloads.
  • Tiered full_simplify (tiered_full_simplify, default on): splits the legacy fixed-point loop into a local fixed-point phase plus a single global round per outer iteration, while preserving final IR.
  • DAG-aware scheduler for ti.compile_kernels (compile_dag_scheduler, default on): balances the inner LLVM thread pool against the outer kernel pool to avoid thread oversubscription on batch warm-up.
  • Single-offload bypass on the LLVM CPU path: removes the prior 0.89× CPU regression introduced by earlier batch-compile work.
  • Per-kernel opt_level= override and compile_tier="fast"|"balanced"|"full" presets, with isolated cache keys so mixed-tier batches do not poison each other.
  • SPIR-V pipeline gains a per-call spirv_disabled_passes allowlist, with cache-key isolation. Disabling loop-unroll alone gives ~54% SPIR-V codegen wall-time reduction on the validated Vulkan suite; disabling the three heaviest passes gives ~61%, with byte-identical kernel results.
  • Optional task-level parallel SPIR-V codegen per kernel (spirv_parallel_codegen).
  • Auto real-function promotion (auto_real_function + auto_real_function_threshold_us) and budget-aware inlining fallback in the LLVM-only path; both default off.
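
The pipeline_dirty short-circuit behind use_fused_passes amounts to a dirty flag, OR-combined across mutating passes and checked before re-running full_simplify. A minimal sketch with hypothetical names:

```python
class PassPipeline:
    """Tracks whether any mutating pass changed the IR since the last
    full_simplify, and skips the re-run when nothing did."""
    def __init__(self):
        self.dirty = True              # fresh IR must be simplified once
        self.simplify_runs = 0

    def run_mutating_pass(self, modified_ir: bool):
        self.dirty = self.dirty or modified_ir   # OR-combined across passes

    def full_simplify(self) -> bool:
        if not self.dirty:             # short-circuit: invocation is a no-op
            return False
        self.simplify_runs += 1
        self.dirty = False
        return True

p = PassPipeline()
p.full_simplify()                      # runs: initial IR is dirty
p.run_mutating_pass(False)             # no-op pass leaves the IR clean
p.full_simplify()                      # skipped
p.run_mutating_pass(True)              # an actual mutation dirties the pipeline
p.full_simplify()                      # runs again
print(p.simplify_runs)                 # 2
```

With ~48.6% of full_simplify invocations observably no-op on representative workloads, roughly half the simplify cost disappears while the final IR is untouched, which is why the knob is numerically bit-identical to off.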

Offline cache and runtime caches

  • Parallel disk-read for offline cache: metadata-hit but ckd-miss path now reads outside the cache mutex and serializes duplicate requests via an in-progress key set. Validated 12-kernel × Vulkan double-process cold start: 290.1 ms (prime) → 83.1 ms (hit), 3.49× faster with byte-identical per-kernel artifacts.
  • CompileConfig key audit + offline-cache schema versioning: unrecognized cache versions now fall back to recompile cleanly instead of crashing.
  • rhi_cache.bin now uses atomic write-then-rename to eliminate half-written cache files after abrupt termination.

IR / passes

  • pipeline_dirty is now explicit and OR-combined across the five mutating passes that can dirty the pipeline, removing spurious dirty marks at no-op call sites. Validated across CPU / CUDA / Vulkan smoke matrices with no regression.
  • Defensive assert + "type-query forbidden zone" notes on linking_context_data->llvm_context to catch accidental cross-context type queries early.

Toolchain and third-party libraries

  • spdlog 1.14.1 → 1.15.3.
  • Vulkan-Headers / volk / SPIRV-Headers / SPIRV-Tools aligned to Vulkan SDK 1.4.341 as a single coordinated bump.
  • googletest 1.10.0 → 1.17.0 (test-only, no runtime impact).
  • glm 0.9.9.8+187 → 1.0.3.
  • imgui v1.84 (WIP) → v1.91.9b (non-docking branch). The Vulkan backend was migrated to the new ImGui_ImplVulkan_InitInfo layout (RenderPass + ApiVersion fields, self-managed font texture, LoadFunctions(api_version, loader) signature). GGUI visual-regression suite: 90 / 90 passing on Vulkan + CUDA backends.

Compatibility

  • All public Python and C-API surfaces from upstream Taichi 1.7.4 remain unchanged. New configuration knobs are additive; their defaults preserve pre-fork behaviour.
  • Build toolchain: LLVM 20.1.7, MSVC 14.50+ (VS 2026), Python 3.10–3.14 — unchanged from 0.1.x.

License

Apache 2.0, same as upstream. See LICENSE. All upstream copyright notices are preserved.


Acknowledgements

Taichi Forge is built on top of the work of the upstream Taichi developers at taichi-dev/taichi. The core compiler, runtime, and the vast majority of the Python frontend are theirs. This fork carries only the delta described above.
