# cgpu-attest — Confidential GPU Attestation Toolkit

A modular Python toolkit for attesting confidential GPUs. Verifies that GPU firmware, drivers, and configuration are authentic and untampered by validating hardware measurements against vendor-signed reference manifests.

**Currently supported:** NVIDIA H100, H200, and Blackwell (B100/B200/GB200). **Planned:** AMD confidential GPU support.
## How attestation works
Confidential Computing GPUs produce cryptographically signed measurement reports during secure boot. Each report contains hashes of every firmware component loaded into the GPU. Attestation compares these runtime measurements against golden values published by the GPU vendor in signed Reference Integrity Manifests (RIMs).
The toolkit supports two attestation modes:
- Remote (default) — Evidence is collected from the GPU and submitted to the vendor's remote attestation service (e.g. NVIDIA NRAS), which validates everything server-side and returns a signed JWT.
- Local — The vendor SDK validates the certificate chain and measurements locally using OCSP and RIM files fetched from the vendor's RIM service. No evidence leaves your machine.
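The comparison step at the core of both modes can be illustrated library-free. A minimal sketch, in which the indices, digests, and function name are invented for the example; real RIM parsing and signature checking are far more involved:

```python
# Illustrative only: compare hypothetical runtime measurements against
# golden reference values, the core idea behind RIM-based attestation.
golden = {7: "a1b2", 25: "c3d4"}   # e.g. parsed from a (hypothetical) RIM
runtime = {7: "a1b2", 25: "ffff"}  # e.g. taken from a (hypothetical) GPU report

def compare_measurements(golden, runtime):
    """Return the indices whose runtime digest differs from the golden value."""
    return [i for i, digest in golden.items() if runtime.get(i) != digest]

mismatches = compare_measurements(golden, runtime)
status = "PASS" if not mismatches else "FAIL"
```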
## Installation

```shell
# Full install with NVIDIA SDK + JWT support (recommended for NVIDIA GPUs):
pip install "cgpu-attest[nvidia]"

# Minimal NVIDIA install (NVML fallback only, no SDK):
pip install "cgpu-attest[nvidia-minimal]"

# Base install (if you manage GPU libraries separately):
pip install cgpu-attest
```
### Requirements
- Python 3.10+
- Linux with GPU driver supporting Confidential Computing
- For NVIDIA: driver ≥ 525 with CC mode enabled
## Quick start

### Command line

```shell
# Attest all detected GPUs (default: remote mode):
cgpu-attest

# Attest only H200 GPUs:
cgpu-attest --gpu-family H200

# Local mode (no evidence sent to a remote service):
cgpu-attest --mode local

# Save results as JSON:
cgpu-attest --output results.json

# Behind a corporate proxy:
cgpu-attest --http-proxy http://proxy.corp:3128
```
### Python API

```python
from cgpu_attest import run_attestation

results = run_attestation(mode="remote")
for r in results:
    print(f"{r.gpu_name}: {r.overall_status}")
```

For more control, drive discovery and attestation yourself:

```python
from cgpu_attest import attest_gpu
from cgpu_attest.gpu_discovery import enumerate_gpus, init_nvml, shutdown_nvml
from cgpu_attest.orchestrator import generate_nonce

init_nvml()
try:
    for gpu in enumerate_gpus():
        result = attest_gpu(gpu, nonce=generate_nonce(), mode="local")
        print(result.overall_status, result.claims)
finally:
    shutdown_nvml()
```
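The nonce passed to `attest_gpu` binds the attestation report to this particular request, so a stale report cannot be replayed. A library-free sketch of the idea, using a hypothetical stand-in rather than the toolkit's actual `generate_nonce` implementation:

```python
import secrets

def generate_nonce(nbytes=32):
    """Hypothetical stand-in: fresh random hex so each attestation report
    is bound to this request and an old report cannot be replayed."""
    return secrets.token_hex(nbytes)

nonce = generate_nonce()
```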
## CLI reference

```
cgpu-attest [OPTIONS]
```

| Option | Description |
|---|---|
| `--mode {remote,local}` | `remote` sends evidence to the attestation service (default); `local` validates via SDK + OCSP |
| `--gpu-family FAMILY` | Only attest GPUs of this family (e.g. `H200`, `H100`). Omit to attest all |
| `--nras-url URL` | Custom NVIDIA Remote Attestation Service endpoint |
| `--ocsp-url URL` | Custom OCSP endpoint |
| `--output FILE` | Write JSON results to `FILE` |
| `--http-proxy URL` | HTTP/HTTPS proxy for outbound requests |
| `--verbose`, `-v` | Enable DEBUG logging |
| `--test-rim-dir DIR` | Testing only. Use local RIM files instead of the vendor service |
Package structure
cgpu_attest/
├── __init__.py # Public API
├── __main__.py # python -m cgpu_attest
├── cli.py # Argument parsing, summary table
├── constants.py # Service URLs, claim keys, GPU profile registry
├── deps.py # Lazy-import guards (pynvml, SDK, PyJWT)
├── models.py # GpuInfo, AttestationEvidence, AttestationResult
├── gpu_discovery.py # NVML init/shutdown, enumerate_gpus()
├── evidence.py # Evidence collection (SDK + NVML fallback)
├── jwt_helpers.py # JWT decoding, SDK token list parsing
├── attest_remote.py # Remote attestation (SDK + REST fallback)
├── attest_local.py # Local attestation (SDK + OCSP)
├── orchestrator.py # attest_gpu(), run_attestation()
├── testing.py # Dev/test only: local RIM directory patching
└── gpu_profiles/ # One file per GPU family
├── h200.py # NVIDIA H200
├── h100.py # NVIDIA H100
└── blackwell.py # NVIDIA B100, B200, GB200
## Adding a new GPU family

Create a new file in `gpu_profiles/` and register the profile:

```python
# gpu_profiles/mi300x.py
"""AMD MI300X GPU profile."""
from cgpu_attest.constants import register_gpu_profile

register_gpu_profile(
    "MI300X",
    name_patterns=["MI300X"],
    architecture="CDNA3",
)
```

Then import it in `gpu_profiles/__init__.py`:

```python
from cgpu_attest.gpu_profiles import h100, h200, blackwell, mi300x
```

No other code changes are needed; the new GPU family will be auto-detected and attested.
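The registry pattern behind this can be sketched without the library. This is a hypothetical re-implementation of what `register_gpu_profile` and name matching might look like, not the toolkit's actual code in `constants.py`:

```python
# Hypothetical sketch of a GPU profile registry. Function and field names
# mirror the README's example, but the implementation is an assumption.
_PROFILES = {}

def register_gpu_profile(family, name_patterns, architecture):
    """Record a GPU family so discovery can match device names against it."""
    _PROFILES[family] = {
        "name_patterns": name_patterns,
        "architecture": architecture,
    }

def match_profile(device_name):
    """Return the first registered family whose pattern appears in the name."""
    for family, profile in _PROFILES.items():
        if any(p in device_name for p in profile["name_patterns"]):
            return family
    return None

register_gpu_profile("H200", name_patterns=["H200"], architecture="Hopper")
register_gpu_profile("MI300X", name_patterns=["MI300X"], architecture="CDNA3")
```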
## JSON output format

When using `--output`, results are written as:

```json
{
  "tool_version": "2.0",
  "mode": "remote",
  "timestamp": "2026-04-01T08:00:00Z",
  "results": [
    {
      "gpu_uuid": "GPU-9ef0b912-...",
      "gpu_name": "NVIDIA H200 NVL",
      "overall_status": "PASS",
      "claims": { ... },
      "token": "<JWT string>",
      "errors": [],
      "verified_at": "2026-04-01T08:00:00Z"
    }
  ]
}
```
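The saved file can be checked with the standard library alone. A sketch that treats a run as successful only when every GPU reports `PASS`, using the field names documented above (the sample document is fabricated for the example):

```python
import json

# A sample document matching the format above, fabricated for the example.
doc = {
    "tool_version": "2.0",
    "mode": "remote",
    "results": [
        {"gpu_name": "NVIDIA H200 NVL", "overall_status": "PASS", "errors": []},
    ],
}

def all_passed(doc):
    """True only if every GPU in the results list reports PASS."""
    return all(r["overall_status"] == "PASS" for r in doc["results"])

# Round-trip through JSON to mimic reading a results file from disk.
ok = all_passed(json.loads(json.dumps(doc)))
```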
## Testing with local RIM files

**Development/testing feature only.** Disables RIM signature verification. Never use in production.

```shell
cgpu-attest --test-rim-dir ./local_rims
```
See PUBLISHING.md for instructions on publishing to PyPI.
## Measurement indices (NVIDIA Hopper)

When running with `--verbose`, the SDK logs 64 measurement indices (0–63). Key indices for H100/H200:
| Index | Component | Description |
|---|---|---|
| 7 | FSP firmware | Hardware root of trust |
| 21–22 | VBIOS | VBIOS image and configuration |
| 25–27 | PMU / GSP-RM / ACR | Core GPU trusted execution firmware |
| 29–31 | Driver | Kernel driver, config, GSP firmware |
| 37–41 | Additional firmware | Secondary microcontrollers |
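When scanning verbose output, the table above can be kept as a lookup. The mapping below simply restates the table; the exact log format and any helper names are not specified by this README:

```python
# Lookup restating the Hopper measurement-index table above.
HOPPER_INDEX_COMPONENTS = {
    7: "FSP firmware",
    **{i: "VBIOS" for i in (21, 22)},
    **{i: "PMU / GSP-RM / ACR" for i in (25, 26, 27)},
    **{i: "Driver" for i in (29, 30, 31)},
    **{i: "Additional firmware" for i in range(37, 42)},
}

def describe_index(index):
    """Map a measurement index (0-63) to a component name, if known."""
    return HOPPER_INDEX_COMPONENTS.get(index, "unknown/reserved")
```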
## License

MIT — see LICENSE.