
FHEnom AI Python Client Library

Official Python SDK for FHEnom for AI™ - Confidential AI with fully encrypted models and data.

Requires Python 3.8+. License: MIT.

🚀 Quick Start

Installation

pip install fhenomai

Or install from source:

CLI Configuration

First, configure the CLI with your TEE server details:

# Initialize configuration (interactive if flags are omitted)
fhenomai config init \
  --admin-host YOUR_TEE_IP \
  --admin-port 9099 \
  --user-host YOUR_TEE_IP \
  --user-port 9999 \
  --sftp-host YOUR_TEE_IP \
  --sftp-username admin \
  --sftp-password YOUR_PASSWORD

# Verify configuration
fhenomai config show

# Test connectivity
fhenomai test connection

Basic CLI Usage

# List models
fhenomai model list --show-status

# Upload model via SFTP (upload/ prefix added automatically)
fhenomai sftp upload ./my-model my-model --recursive

# Encrypt model (paths normalized automatically)
fhenomai model encrypt my-model my-model-encrypted \
  --encrypted-model-id my-model-encrypted \
  --wait --show-progress

# Download encrypted model (download/ prefix added automatically)
fhenomai sftp download my-model-encrypted ./encrypted/my-model --recursive

# Start serving
fhenomai serve start my-model-encrypted \
  --server-url http://YOUR_VLLM_SERVER_IP:8000 \
  --display-model-name my-model

# Stop serving
fhenomai serve stop my-model-encrypted

Basic Python SDK Usage

from fhenomai import FHEnomClient, FHEnomConfig

# Load configuration from file
config = FHEnomConfig.from_file()  # Reads from ~/.fhenomai/config.yaml

# Initialize client
client = FHEnomClient(config)

# List available models
models = client.admin.list_models()
print(f"Available models: {models}")

# Encrypt a model (paths auto-prefixed with /models/upload/ and /models/download/)
job_id = client.admin.encrypt_model(
    model_name_or_path="llama-3-8b",  # Becomes /models/upload/llama-3-8b
    out_encrypted_model_path="llama-3-8b-encrypted",  # Becomes /models/download/llama-3-8b-encrypted
    encrypted_model_id="llama-3-8b-encrypted"
)

# Wait for completion
result = client.admin.wait_for_job(job_id, timeout=3600)

# Start serving
client.admin.start_serving(
    encrypted_model_id="llama-3-8b-encrypted",
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    display_model_name="llama-3-8b-instruct"  # Optional: for vLLM --served-model-name
)

📚 Features

Core Capabilities

  • CLI Tool: Full-featured command-line interface for all operations
  • Python SDK: Programmatic access via FHEnomClient and AdminAPI
  • Model Encryption: Encrypt models on TEE server with progress tracking
  • Dataset Encryption: Encrypt datasets using encrypted models
  • SFTP Integration: Upload/download with automatic path normalization
  • Job Monitoring: Real-time progress updates and status checking
  • Serving Control: Start/stop model serving with vLLM integration

CLI Commands

  • config: init, show, validate, test
  • model: list, encrypt, encrypt-dataset, info, upload, download, delete
  • serve: start, stop, list
  • sftp: upload, download, list, clear
  • job: status, wait
  • health: check, admin, sftp
  • test: connection, admin, sftp

Advanced Features

  • Progress Bars: Rich terminal UI with real-time progress
  • Auto Path Normalization: Automatic upload/ and download/ prefix handling
  • Duplicate Detection: Warns about existing model names
  • Directory Management: Bulk operations on TEE directories
  • Health Monitoring: Test connectivity to all services
  • Context Manager: Automatic resource cleanup
  • TEE Attestation: Generate and verify TEE attestation reports with built-in verification
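The auto path normalization mentioned above (automatic upload/ and download/ prefixing) can be pictured as follows. This is an illustrative sketch, not the library's actual code; `normalize_remote_path` is a hypothetical name, and the assumed rule is that bare names get the prefix while already-prefixed paths pass through unchanged.

```python
from pathlib import PurePosixPath

def normalize_remote_path(path: str, prefix: str = "upload") -> str:
    """Hypothetical sketch: prepend upload/ or download/ unless already present."""
    p = PurePosixPath(path)
    if p.parts and p.parts[0] in ("upload", "download"):
        return str(p)  # already prefixed; leave untouched
    return str(PurePosixPath(prefix) / p)

print(normalize_remote_path("llama-3-8b"))             # upload/llama-3-8b
print(normalize_remote_path("upload/llama-3-8b"))      # upload/llama-3-8b
print(normalize_remote_path("model-enc", "download"))  # download/model-enc
```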

TEE Attestation Support (v1.0.7)

New in v1.0.7: Enhanced attestation with automatic file management, format inference, and built-in verification. Report formatting is now integrated into fhenomai for stability.

FHEnom AI includes integrated TEE attestation with AMD SEV-SNP and Intel TDX support:

# Install fhenomai (includes dk-tee-attestation for verification)
pip install fhenomai

# Generate attestation report (creates 3 files)
fhenomai admin attestation --output report.html
# Creates: report.html, report.bin, report.nonce

# Verify attestation (nonce auto-loads from report.nonce)
fhenomai admin verify-attestation --report report.bin

# Generate detailed PDF with hex dump
fhenomai admin attestation --format detailed --output analysis.pdf

# Verify with detailed output
fhenomai admin verify-attestation --report report.bin --format detailed

What's New in v1.0.7:

  • Triple file output: All attestation commands create .html/.pdf/.txt + .bin + .nonce
  • Format inference: File extension determines output type (.html, .pdf, .txt)
  • Changed --format behavior: Now controls display style (standard/detailed) not output type
  • Auto-load nonce: Verification automatically loads .nonce file if not provided
  • Built-in verification: New verify-attestation command with color-coded output
  • Parsed reports: CPU info, TCB details, and signatures cleanly displayed
  • Integrated formatter: Report formatting moved from dk-tee-attestation to fhenomai for API stability
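The format-inference behavior described above (file extension determines output type) can be sketched like this. The mapping and function name are illustrative assumptions, not the library's actual implementation.

```python
from pathlib import Path

def infer_format(output: str) -> str:
    """Hypothetical sketch: map an output filename's extension to a report type."""
    mapping = {".html": "html", ".pdf": "pdf", ".txt": "text"}
    ext = Path(output).suffix.lower()
    if ext not in mapping:
        raise ValueError(f"unsupported extension: {ext}")
    return mapping[ext]

print(infer_format("report.html"))   # html
print(infer_format("analysis.pdf"))  # pdf
```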

Python SDK usage:

from fhenomai import FHEnomClient, AttestationReportFormatter

client = FHEnomClient.from_config()

# Generate attestation (nonce auto-generated)
report = client.admin.attestation()

# Save report
with open("report.bin", "wb") as f:
    f.write(report)

# Verify attestation
result = client.admin.verify_attestation(
    report=report,
    engine_type="amd_sev_snp"
)

if result['verified']:
    print(f"✓ Verified - Platform: {result['platform']}")
    print(f"  CPU: {result['cpu_info']}")

# Use the formatter directly for custom output
formatter = AttestationReportFormatter()
html_report = formatter.format_html(report)
with open("custom_report.html", "w") as f:
    f.write(html_report)

Verification Features:

  • ✅ ECDSA P-384 signature validation
  • ✅ Nonce binding verification
  • ✅ TCB (Trusted Computing Base) parsing
  • ✅ CPU identification
  • ✅ Color-coded hex dumps
  • ✅ HTML/PDF report generation
  • ✅ Platform detection (AMD SEV-SNP, Intel TDX)
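To illustrate what nonce binding verification means conceptually: the verifier checks that its challenge nonce is cryptographically bound into the signed report, so a captured report cannot be replayed. The toy sketch below uses a fake report layout and is not the real SEV-SNP structure (real reports carry caller-supplied data in a dedicated field of the signed report).

```python
import hashlib
import secrets

def make_report(nonce: bytes) -> bytes:
    # Toy "report": embeds the SHA-512 of the nonce; real report layouts differ.
    return b"REPORT|" + hashlib.sha512(nonce).digest()

def nonce_bound(report: bytes, nonce: bytes) -> bool:
    """Check that the report embeds a digest of the expected nonce."""
    return report.endswith(hashlib.sha512(nonce).digest())

nonce = secrets.token_bytes(32)
report = make_report(nonce)
print(nonce_bound(report, nonce))           # True
print(nonce_bound(report, b"wrong-nonce"))  # False
```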

📖 Documentation

Admin API Operations

# Model discovery
models = client.admin.list_models()
online_models = client.admin.list_online_models()
model_info = client.admin.get_model_info(model_id)

# Model encryption (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="model-name",  # Auto-prefixed with /models/upload/
    out_encrypted_model_path="model-name-encrypted",  # Auto-prefixed with /models/download/
    encrypted_model_id="model-name-encrypted",  # Custom model ID
    encryption_impl="decoder-only-llm",
    dtype="bfloat16",
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Dataset encryption (paths auto-normalized)
dataset_job = client.admin.encrypt_dataset(
    encrypted_model_id="my-encrypted-model",
    dataset_name_or_path="my-dataset",  # Auto-prefixed with /models/upload/
    out_encrypted_dataset_path="my-dataset-encrypted",  # Auto-prefixed with /models/download/
    dataset_encryption_impl="numeric",
    text_fields=["text"],
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Serving control
client.admin.start_serving(
    encrypted_model_id=model_id,
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    api_key=None,  # Optional
    display_model_name="my-model"  # Optional: custom name for vLLM
)
client.admin.stop_serving(model_id)

# Job management
status = client.admin.get_job_status(job_id)
result = client.admin.wait_for_job(
    job_id, 
    poll_interval=5, 
    timeout=3600,
    callback=lambda s: print(f"Progress: {s.get('progress', 0)*100:.1f}%")
)

SFTP Operations

# Get SFTP manager
sftp = client.get_sftp_manager()

# Upload model (upload/ prefix added automatically)
sftp.upload_directory(
    local_path="./llama-3-8b",
    remote_path="llama-3-8b"  # Becomes upload/llama-3-8b
)

# Download encrypted model (download/ prefix added automatically)
sftp.download_directory(
    remote_path="llama-3-8b-encrypted",  # Becomes download/llama-3-8b-encrypted
    local_path="./encrypted/llama-3-8b"
)

# List files in upload directory
files = sftp.list_upload_directory()
for file in files:
    print(f"{file.name}: {file.size_mb:.2f} MB")

# Clear download directory
sftp.clear_download_directory()

# Get directory size
size_gb = sftp.get_directory_size("upload")
print(f"Upload directory: {size_gb:.2f} GB")

# Check if file exists (via Admin API's SFTP manager)
exists = client.admin.sftp.file_exists("upload/my-model/config.json")

Health & Testing

# Test connectivity (via CLI)
# fhenomai health check
# fhenomai test connection

# In Python - test admin API
try:
    models = client.admin.list_models()
    print(f"✓ Admin API connected ({len(models)} models)")
except Exception as e:
    print(f"✗ Admin API failed: {e}")

# Test SFTP connection
try:
    sftp = client.get_sftp_manager()
    files = sftp.list_upload_directory()
    print(f"✓ SFTP connected ({len(files)} files in upload/)")
except Exception as e:
    print(f"✗ SFTP failed: {e}")

User Inference (via OpenAI SDK)

For inference, use the standard OpenAI Python SDK:

from openai import OpenAI

# Connect to FHEnom User API (port 9999)
client = OpenAI(
    base_url="http://your-tee-ip:9999/v1",
    api_key="not-needed"  # TEE doesn't require API key
)

# Standard OpenAI-compatible inference
response = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    max_tokens=200
)

print(response.choices[0].message.content)

🛠️ Advanced Usage

Context Manager Usage

from fhenomai import FHEnomClient, FHEnomConfig

# Load config
config = FHEnomConfig.from_file()

# Context manager handles connection lifecycle
with FHEnomClient(config) as client:
    # SFTP connection auto-managed
    sftp = client.get_sftp_manager()
    
    # Upload model (upload/ prefix added automatically)
    sftp.upload_directory("./model", "model")
    
    # Encrypt (paths auto-normalized)
    job_id = client.admin.encrypt_model(
        model_name_or_path="model",
        out_encrypted_model_path="model-enc",
        encrypted_model_id="model-enc"
    )
    
    # Wait for completion
    result = client.admin.wait_for_job(job_id)
    
    if result.get('status') == 'done':
        # Download encrypted model (download/ prefix added automatically)
        sftp.download_directory(
            "model-enc",
            "./encrypted/model"
        )
# Connection automatically closed

Job Monitoring with Callbacks

import time

# Encrypt with progress callback (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="large-model",
    out_encrypted_model_path="large-model-enc",
    encrypted_model_id="large-model-enc"
)

# Define callback for progress updates
def progress_callback(status):
    progress = status.get('progress', 0) * 100
    message = status.get('message', 'Processing')
    print(f"\r{message}: {progress:.1f}%", end='', flush=True)

# Wait with callback
result = client.admin.wait_for_job(
    job_id,
    timeout=3600,
    poll_interval=5,
    callback=progress_callback
)

print(f"\nCompleted: {result.get('status')}")

📋 Configuration

Configuration File

Create ~/.fhenomai/config.yaml:

# Admin API Configuration
admin:
  host: "your-tee-ip"
  port: 9099
  url: "http://your-tee-ip:9099"  # Alternative to host+port

# User API Configuration (for inference)
user:
  host: "your-tee-ip"
  port: 9999
  url: "http://your-tee-ip:9999/v1"  # Alternative to host+port

# SFTP Configuration
sftp:
  host: "your-tee-ip"
  port: 22
  username: "admin"
  password: "your-password"  # Or use key_path
  # key_path: "~/.ssh/id_rsa"  # Alternative to password
  base_path: "/var/lib/fhenomai/FHEnomAI-server/admin"  # Optional

# Optional settings
timeout: 30
max_retries: 3
verify_ssl: true
auth_token: "default-auth-token-2026"  # X-Auth-Token header
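The `max_retries` setting above implies that transient failures are retried. A generic retry-with-backoff pattern of the kind such a setting usually controls is sketched below; this is illustrative, not the library's implementation, and `with_retries` is a hypothetical helper.

```python
import time

def with_retries(fn, max_retries=3, base_delay=0.01):
    """Call fn, retrying on exception up to max_retries times with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt)

# Demo: a call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # ok
```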

Environment Variables

export FHENOM_ADMIN_HOST="your-tee-ip"
export FHENOM_ADMIN_PORT="9099"
export FHENOM_SFTP_HOST="your-tee-ip"
export FHENOM_SFTP_USERNAME="admin"
export FHENOM_SFTP_PASSWORD="your-password"

Then use without parameters:

from fhenomai import FHEnomClient, FHEnomConfig

# Load from environment
config = FHEnomConfig.from_env()
client = FHEnomClient(config)

# Or load from file
config = FHEnomConfig.from_file()  # Reads ~/.fhenomai/config.yaml
client = FHEnomClient(config)

🔧 API Reference

FHEnomClient

Main client class for FHEnom AI operations.

Key Methods:

  • admin - Access AdminAPI instance for model/serving operations
  • get_sftp_manager() - Get SFTPManager for file operations
  • Context manager support with __enter__ and __exit__

AdminAPI

Admin operations (accessible via client.admin):

Model Operations:

  • list_models() - List all encrypted models
  • list_online_models() - List currently served models
  • get_model_info(model_id) - Get model details
  • encrypt_model(...) - Encrypt a plaintext model
  • encrypt_dataset(...) - Encrypt a dataset

Serving Operations:

  • start_serving(encrypted_model_id, server_url, ...) - Start serving
  • stop_serving(encrypted_model_id) - Stop serving

Job Operations:

  • get_job_status(job_id) - Check job status
  • wait_for_job(job_id, timeout, callback) - Wait for completion
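`wait_for_job` follows the standard poll-until-terminal pattern. The sketch below shows that pattern with a fake status source; the assumed terminal statuses are 'done' and 'failed' (the context-manager example earlier checks for 'done'), and the exact statuses the server reports may differ.

```python
import time

def wait_for_job(get_status, poll_interval=0.01, timeout=5, callback=None):
    """Poll get_status() until it reports a terminal state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if callback:
            callback(status)
        if status.get("status") in ("done", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

# Fake status source: reports 'running' twice, then 'done'.
states = iter([{"status": "running", "progress": 0.3},
               {"status": "running", "progress": 0.7},
               {"status": "done", "progress": 1.0}])
result = wait_for_job(lambda: next(states))
print(result["status"])  # done
```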

SFTP Operations (via admin.sftp):

  • Access to SFTPManager for TEE directory operations

SFTPManager

High-level SFTP operations (accessible via client.get_sftp_manager() or client.admin.sftp):

Directory Operations:

  • upload_directory(local_path, remote_path) - Upload directory
  • download_directory(remote_path, local_path) - Download directory
  • list_upload_directory() - List files in upload/
  • list_download_directory() - List files in download/
  • clear_upload_directory() - Clear upload directory
  • clear_download_directory() - Clear download directory

File Operations:

  • upload_file(local_file, remote_file) - Upload single file
  • download_file(remote_file, local_file) - Download single file
  • file_exists(remote_path) - Check if file exists
  • get_directory_size(directory) - Get size in GB
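`get_directory_size` presumably walks the remote tree and sums file sizes. The same idea applied to a local directory with pathlib, for illustration only (the SDK's remote implementation may differ):

```python
import tempfile
from pathlib import Path

def directory_size_gb(root: Path) -> float:
    """Sum the sizes of all regular files beneath root, in GB (GiB)."""
    total = sum(p.stat().st_size for p in root.rglob("*") if p.is_file())
    return total / (1024 ** 3)

# Demo on a temporary directory containing one 1 MiB file.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "blob.bin").write_bytes(b"\0" * (1024 * 1024))
    print(f"{directory_size_gb(Path(d)):.6f} GB")  # 0.000977 GB
```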

🤝 Contributing

Contributions are welcome! Please contact DataKrypto for contribution guidelines.

📄 License

This project is licensed under the MIT License - see LICENSE file.


📞 Contact

DataKrypto

United States
533 Airport Blvd. Ste 400
Burlingame, CA 94010
+1 (650) 373-2083

Italy
Via Marche, 54
00187 Rome - Italy
+39 (06) 88923849


© 2026 DataKrypto. All rights reserved.
