
SecureAI

SecureAI is a Python library that adds RATLS (Remote Attestation TLS) support to popular HTTP clients, including the OpenAI SDK and httpx. It enables applications to cryptographically verify that AI inference and API services are running inside Trusted Execution Environments (TEEs) such as Intel TDX before sending sensitive data.

The library transparently extends existing clients: specify which hostnames require TEE attestation, and SecureAI handles the verification automatically during the TLS handshake.

Installation

SecureAI uses uv for dependency management and building.

You can install SecureAI from PyPI or build it from source.

# From PyPI
uv pip install secureai

# From source
git clone https://github.com/concrete-security/secureai.git
cd secureai
uv build # to build the wheel
uv pip install dist/secureai-*.whl

What is RATLS?

Remote Attestation TLS (RATLS) extends standard TLS with hardware-based attestation to verify that a server is running inside a Trusted Execution Environment (TEE) like Intel TDX. This ensures your data is processed in a secure, isolated environment.

RATLS provides cryptographic proof that the client is communicating with the correct server identity (as defined in the TLS certificate) and that the server is running inside a TEE.

Context

The TEE server maintains an event log that records all significant operations, including TLS certificate renewals. When the server generates a new certificate (using keys created inside the TEE that never leave it), it appends an event to this log containing the certificate hash.

The TEE hardware uses these event logs to compute Runtime Measurements (RTMRs) - cryptographic hashes that reflect the entire state and history of the TEE. These RTMRs are included in the attestation quote and can be verified by clients to ensure the TEE is running expected software with the expected certificate.
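The replay step can be sketched as follows, assuming TDX's SHA-384 measurement registers and a simplified event log of raw digests (real Dstack event logs carry more structure than this):

```python
import hashlib

def replay_rtmr(event_digests: list[bytes]) -> bytes:
    """Recompute an RTMR by replaying an event log.

    A TDX RTMR is a 48-byte SHA-384 register: extending it with an
    event digest replaces the register with SHA384(old || digest).
    Registers start out as all zeros.
    """
    rtmr = b"\x00" * 48
    for digest in event_digests:
        rtmr = hashlib.sha384(rtmr + digest).digest()
    return rtmr

# A client replays the events from the quote metadata and checks the
# result against the RTMR3 value reported inside the signed quote.
cert_event = hashlib.sha384(b"<DER-encoded TLS certificate>").digest()
recomputed_rtmr3 = replay_rtmr([cert_event])
```

If the recomputed value matches the RTMR3 in the signed quote, the event log (and therefore the certificate hash it records) is authentic.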

How it works

  • Pre-RATLS Setup (happens before the client connects): Server adds a certificate event to its event log whenever it renews its TLS certificate, extending the RTMR3 register with the new certificate hash.
  • TLS Connection: Client establishes a standard TLS connection with the server and retrieves the TLS certificate.
  • Quote Request: Client sends random challenge data (64 bytes) and requests a cryptographic quote from the TEE.
  • Quote Response: Server generates and returns a quote signed by the TEE hardware, along with metadata:
    • Quote contains: random challenge data, runtime measurements (RTMRs)
    • Metadata contains: event log with TLS certificate hash
  • Verification: Client verifies:
    • Quote signature using the DCAP library
    • TLS certificate (current session) matches the one in the event log
    • Event log correctly produces the RTMRs by replaying all events
    • TEE measurements match expected values
    • TCB status is UpToDate
Client                                    Server (TEE)
  |----- Pre-RATLS ---------------------------|
  |                                           |
  |                                           |
  |                                     0. Append new event to the
  |                                        event log with cert hash
  |                                        when doing cert renewal
  |                                           |
  |                                           |
  |----- RATLS -------------------------------|
  |                                           |
  | 1. TLS Handshake                          |
  |<=========================================>|
  |   (Get TLS certificate)                   |
  |                                           |
  | 2. POST /tdx_quote                        |
  |    { report_data: <random_64_bytes> }     |
  |------------------------------------------>|
  |                                           |
  |                                     3. Generate Quote + Metadata
  |                                      - Quote includes report_data, RTMRs, ...
  |                                      - Metadata includes event_log containing cert hash
  |                                      - Sign with TEE hardware key
  |                                      - Other measurements
  | 4. Quote Response                         |
  |<------------------------------------------|
  |                                           |
  | 5. Client Verification                    |
  |  - Verify quote signature (DCAP)          |
  |  - Check report_data matches challenge    |
  |  - Check cert hash in event_log matches   |
  |  - Verify event_log by replaying RTMRs    |
  |  - Verify TCB status is UpToDate          |
  |  - Verify runtime measurements            |
  |                                           |
  | 6. Regular HTTPS requests                 |
  |    (if verification passed)               |
  |<=========================================>|
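Steps 2–4 can be sketched with the standard library alone, assuming the default /tdx_quote endpoint and the report_data_hex request field described under Server Requirements (SecureAI performs all of this automatically during the handshake):

```python
import json
import os
import urllib.request

def request_quote(base_url: str) -> dict:
    """Send a fresh 64-byte random challenge and fetch the TEE quote."""
    challenge = os.urandom(64)  # fresh randomness defeats quote replay
    body = json.dumps({"report_data_hex": challenge.hex()}).encode()
    req = urllib.request.Request(
        f"{base_url}/tdx_quote",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Expected shape: {"quote": "<hex>", "event_log": [...]}
        return json.loads(resp.read())
```

The client must then confirm that report_data inside the returned quote echoes the challenge it sent; otherwise the quote could be a stale one captured from an earlier session.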

Provenance Verification

For complete security, consider verifying the software supply chain before verifying the runtime environment. We recommend using docker-slsa to verify SLSA provenance of container images before deployment:

  • Provenance Verification (docker-slsa): Ensures images came from trusted sources and build pipelines
  • RATLS Verification (secureai): Ensures the runtime environment is a genuine TEE running expected code

See the docker-slsa documentation for usage details.

Server Requirements

For a server to support RATLS verification with SecureAI, it must:

  1. Run inside a TEE: Currently only Intel TDX is supported
  2. Maintain an event log: Record all significant operations including TLS certificate renewals with certificate hashes
  3. Provide a quote endpoint: Expose an HTTP POST endpoint (default: /tdx_quote) that:
    • Accepts JSON with report_data_hex field (64 bytes hex-encoded)
    • Returns a JSON response containing:
      • quote: TDX quote (hex-encoded) signed by TEE hardware
      • event_log: JSON array of events used to compute RTMRs
  4. Generate TLS certificates inside the TEE: Private keys must never leave the TEE
  5. Update RTMRs on certificate renewal: Append certificate hash events to the log, updating RTMR3
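A minimal sketch of such a quote endpoint using Python's standard http.server. Here generate_tdx_quote is a hypothetical placeholder for the hardware attestation call, and the JSON fields follow the shape listed above:

```python
import json
from http.server import BaseHTTPRequestHandler

def generate_tdx_quote(report_data: bytes) -> bytes:
    """Hypothetical placeholder: on real hardware this would invoke the
    TDX attestation driver to produce a hardware-signed quote."""
    raise NotImplementedError("requires TDX hardware")

EVENT_LOG: list[dict] = []  # appended to on each certificate renewal

class QuoteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/tdx_quote":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        report_data = bytes.fromhex(payload["report_data_hex"])
        if len(report_data) != 64:
            self.send_error(400, "report_data must be 64 bytes")
            return
        quote = generate_tdx_quote(report_data)
        body = json.dumps(
            {"quote": quote.hex(), "event_log": EVENT_LOG}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

This is only a shape sketch; a production server would also validate content types and rate-limit quote generation.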

See the server implementation reference for a complete example.

Examples

Set the DEBUG_RATLS=true environment variable to enable debug logging.

DstackTDXVerifier

DstackTDXVerifier is used to verify that a server is running inside a TDX TEE managed by Dstack. It verifies the full bootchain (MRTD, RTMR0-2), event log integrity, and application configuration.

from secureai import httpx
from secureai.verifiers import DstackTDXVerifier

# Option 1: Verify TEE with runtime verification disabled (NOT RECOMMENDED)
# Verifies only that the server is running in a TEE, not the bootchain or which application it runs
verifier = DstackTDXVerifier(disable_runtime_verification=True)

# Option 2: Full verification with bootchain measurements and custom app_compose (RECOMMENDED)
# This verifies the full bootchain (firmware, kernel, initramfs), OS image, and application
with open("docker-compose.yml", "r") as f:
    docker_compose_content = f.read()

# Define your app_compose configuration
app_compose = {
    "docker_compose_file": docker_compose_content,
    "allowed_envs": ["MY_API_KEY", "MY_SECRET"],
    "features": ["kms", "tproxy-net"],
    # ... other app_compose settings
}

# Bootchain measurements depend on hardware configuration (CPU count, memory size, etc.)
# You must compute these values for your specific deployment
# See docs/dstack-bootchain-verification.md for instructions
verifier = DstackTDXVerifier(
    app_compose=app_compose,
    expected_bootchain={
        "mrtd": "f06dfda6...",   # Initial TD measurement (firmware)
        "rtmr0": "68102e7b...",  # Virtual hardware environment
        "rtmr1": "6e1afb74...",  # Linux kernel
        "rtmr2": "89e73ced...",  # Kernel cmdline + initramfs
    },
    os_image_hash="86b18137..."  # SHA256 of sha256sum.txt
)

# Option 3: Use default app_compose with overrides
# If you only need to customize docker_compose_file and/or allowed_envs,
# you can use the override parameters with the default app_compose
verifier = DstackTDXVerifier(
    app_compose_docker_compose_file=docker_compose_content,  # Override docker_compose_file
    app_compose_allowed_envs=["MY_API_KEY", "MY_SECRET"],    # Override allowed_envs
    expected_bootchain={
        "mrtd": "f06dfda6...",
        "rtmr0": "68102e7b...",
        "rtmr1": "6e1afb74...",
        "rtmr2": "89e73ced...",
    },
    os_image_hash="86b18137..."
)

# Use with httpx client
with httpx.Client(
    ratls_verifier_per_hostname={
        "your-tee-server.com": verifier
    }
) as client:
    response = client.get("https://your-tee-server.com/api")

See docs/dstack-bootchain-verification.md for detailed instructions on computing measurements for your CVM deployment.

Collateral Fetching

The verifier needs Intel collateral data to verify TDX quotes. By default, collateral is fetched automatically from Intel servers on the first verification and cached for subsequent calls within the same verifier instance. You can disable caching or supply collateral that you fetched and verified yourself.

# Default behavior: fetch collateral from Intel and cache it (recommended)
verifier = DstackTDXVerifier(
    # ... other options
)

# Disable caching: fetch fresh collateral on every verification
verifier = DstackTDXVerifier(
    cache_collateral=False,
    # ... other options
)

# Provide custom collateral
verifier = DstackTDXVerifier(
    collateral={
        "tcb_info": "...",
        "tcb_info_issuer_chain": "...",
        "qe_identity": "...",
        # ... other collateral fields
    },
    # ... other options
)

OpenAI Client with RATLS

from secureai import OpenAI
from secureai.verifiers import DstackTDXVerifier

with open("your-docker-compose.yml", "r") as f:
    docker_compose_content = f.read()

verifier = DstackTDXVerifier(
    app_compose_docker_compose_file=docker_compose_content,
    expected_bootchain={
        "mrtd": "...",   # Your computed MRTD
        "rtmr0": "...",  # Your computed RTMR0
        "rtmr1": "...",  # Your computed RTMR1
        "rtmr2": "...",  # Your computed RTMR2
    },
    os_image_hash="..."  # Your computed OS image hash
)

client = OpenAI(ratls_verifier_per_hostname={"vllm.concrete-security.com": verifier})

HTTP Client with RATLS

from secureai import httpx
from secureai.verifiers import DstackTDXVerifier

with open("your-docker-compose.yml", "r") as f:
    docker_compose_content = f.read()

verifier = DstackTDXVerifier(
    app_compose_docker_compose_file=docker_compose_content,
    expected_bootchain={
        "mrtd": "...",   # Your computed MRTD
        "rtmr0": "...",  # Your computed RTMR0
        "rtmr1": "...",  # Your computed RTMR1
        "rtmr2": "...",  # Your computed RTMR2
    },
    os_image_hash="..."  # Your computed OS image hash
)

with httpx.Client(ratls_verifier_per_hostname={"vllm.concrete-security.com": verifier}) as client:
    # No RATLS: this hostname is not in the verifier map
    response = client.get("https://httpbin.org/get")
    print(f"Response status: {response.status_code}")

    # Uses RATLS
    response = client.get("https://vllm.concrete-security.com/health")
    print(f"Response status: {response.status_code}")

    # This shouldn't trigger another verification as the connection is still open
    response = client.get("https://vllm.concrete-security.com/v1/models")
    print(f"Response status: {response.status_code}")

Development

SecureAI uses uv for dependency management and building. There is also a Makefile with basic recipes.

Running Tests

# Run all tests
uv run pytest

or

make test # or test-coverage

Code Quality

# Format code
uv run ruff format

# Lint code
uv run ruff check

# For import order specifically
uv run ruff check --select I

or

make qa-all # or qa-all-fix

Build

# Build a wheel from source
uv build

Hardware Support

Only TDX is supported at the moment.
