
FHEnom AI Python Client Library

Official Python SDK for FHEnom for AI™ - Confidential AI with fully encrypted models and data.

Python 3.8+ · License: MIT

🚀 Quick Start

Installation

pip install fhenomai

Or install from source:

CLI Configuration

First, configure the CLI with your TEE server details:

# Initialize configuration with your TEE server details
fhenomai config init \
  --admin-host YOUR_TEE_IP \
  --admin-port 9099 \
  --user-host YOUR_TEE_IP \
  --user-port 9999 \
  --sftp-host YOUR_TEE_IP \
  --sftp-username admin \
  --sftp-password YOUR_PASSWORD

# Verify configuration
fhenomai config show

# Test connectivity
fhenomai test connection

Basic CLI Usage

# List models
fhenomai model list --show-status

# Upload model via SFTP (upload/ prefix added automatically)
fhenomai sftp upload ./my-model my-model --recursive

# Encrypt model (paths normalized automatically)
fhenomai model encrypt my-model my-model-encrypted \
  --encrypted-model-id my-model-encrypted \
  --wait --show-progress

# Download encrypted model (download/ prefix added automatically)
fhenomai sftp download my-model-encrypted ./encrypted/my-model --recursive

# Start serving
fhenomai serve start my-model-encrypted \
  --server-url http://YOUR_VLLM_SERVER_IP:8000 \
  --display-model-name my-model

# Stop serving
fhenomai serve stop my-model-encrypted

Basic Python SDK Usage

from fhenomai import FHEnomClient, FHEnomConfig

# Load configuration from file
config = FHEnomConfig.from_file()  # Reads from ~/.fhenomai/config.yaml

# Initialize client
client = FHEnomClient(config)

# List available models
models = client.admin.list_models()
print(f"Available models: {models}")

# Encrypt a model (paths auto-prefixed with /models/upload/ and /models/download/)
job_id = client.admin.encrypt_model(
    model_name_or_path="llama-3-8b",  # Becomes /models/upload/llama-3-8b
    out_encrypted_model_path="llama-3-8b-encrypted",  # Becomes /models/download/llama-3-8b-encrypted
    encrypted_model_id="llama-3-8b-encrypted"
)

# Wait for completion
result = client.admin.wait_for_job(job_id, timeout=3600)

# Start serving
client.admin.start_serving(
    encrypted_model_id="llama-3-8b-encrypted",
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    display_model_name="llama-3-8b-instruct"  # Optional: for vLLM --served-model-name
)

📚 Features

Core Capabilities

  • CLI Tool: Full-featured command-line interface for all operations
  • Python SDK: Programmatic access via FHEnomClient and AdminAPI
  • Model Encryption: Encrypt models on TEE server with progress tracking
  • Dataset Encryption: Encrypt datasets using encrypted models
  • SFTP Integration: Upload/download with automatic path normalization
  • Job Monitoring: Real-time progress updates and status checking
  • Serving Control: Start/stop model serving with vLLM integration

CLI Commands

  • config: init, show, validate, test
  • model: list, encrypt, encrypt-dataset, info, upload, download, delete
  • serve: start, stop, list
  • sftp: upload, download, list, clear
  • job: status, wait
  • health: check, admin, sftp
  • test: connection, admin, sftp

Advanced Features

  • Progress Bars: Rich terminal UI with real-time progress
  • Auto Path Normalization: Automatic upload/ and download/ prefix handling
  • Duplicate Detection: Warns about existing model names
  • Directory Management: Bulk operations on TEE directories
  • Health Monitoring: Test connectivity to all services
  • Context Manager: Automatic resource cleanup
  • TEE Attestation: Generate and verify TEE attestation reports (optional)
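The auto path normalization listed above can be sketched in a few lines; normalize_remote_path is a hypothetical helper written for illustration, not part of the SDK, showing how a bare remote name might gain the upload/ or download/ prefix while already-prefixed paths pass through unchanged:

```python
from pathlib import PurePosixPath

def normalize_remote_path(name: str, direction: str) -> str:
    """Prefix a bare remote name with upload/ or download/ unless it is
    already prefixed. Hypothetical sketch of the SDK's auto path
    normalization; the real implementation may differ."""
    prefix = {"upload": "upload", "download": "download"}[direction]
    path = PurePosixPath(name)
    if path.parts and path.parts[0] == prefix:
        return str(path)  # already normalized, leave as-is
    return str(PurePosixPath(prefix) / path)

print(normalize_remote_path("llama-3-8b", "upload"))         # upload/llama-3-8b
print(normalize_remote_path("upload/llama-3-8b", "upload"))  # upload/llama-3-8b
```

The same idempotence (prefixing only when needed) is what lets the CLI accept either `my-model` or `upload/my-model` interchangeably.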

TEE Attestation Support

FHEnom AI includes integrated TEE attestation verification for AMD SEV-SNP and Intel TDX platforms:

# Install fhenomai (includes attestation)
pip install fhenomai

# Or install only the attestation library
pip install dk-tee-attestation

# Generate attestation report
fhenomai admin attestation --output report.bin --show-nonce

# Verify attestation report
fhenomai admin verify-attestation --report report.bin --nonce $NONCE

Python SDK usage:

from fhenomai import FHEnomClient
from fhenomai.utils import generate_nonce, format_nonce_hex

client = FHEnomClient.from_config()

# Generate attestation
nonce = generate_nonce(64)
nonce_hex = format_nonce_hex(nonce)
report = client.admin.attestation(nonce_hex)

# Verify attestation
result = client.admin.verify_attestation(report, nonce_hex, engine_type="amd_sev_snp")
if result['verified']:
    print("✓ Attestation verified!")

The attestation verification:

  • Validates report signatures using platform certificates
  • Verifies AMD KDS certificate chain (for SEV-SNP)
  • Confirms nonce binding to the report
  • Ensures metadata consistency
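The nonce step above can be approximated with the standard library; these are hypothetical stdlib reimplementations of generate_nonce and format_nonce_hex for illustration, not the SDK's own code:

```python
import secrets

def generate_nonce(length: int = 64) -> bytes:
    """Return `length` cryptographically random bytes (sketch of the SDK helper)."""
    return secrets.token_bytes(length)

def format_nonce_hex(nonce: bytes) -> str:
    """Hex-encode a nonce for transport (sketch of the SDK helper)."""
    return nonce.hex()

nonce = generate_nonce(64)
nonce_hex = format_nonce_hex(nonce)
assert len(nonce) == 64 and len(nonce_hex) == 128
```

Binding a fresh random nonce into each attestation request is what prevents a stale report from being replayed: the verifier checks that the report embeds the exact nonce it issued.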

📖 Documentation

Admin API Operations

# Model discovery
models = client.admin.list_models()
online_models = client.admin.list_online_models()
model_info = client.admin.get_model_info(model_id)

# Model encryption (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="model-name",  # Auto-prefixed with /models/upload/
    out_encrypted_model_path="model-name-encrypted",  # Auto-prefixed with /models/download/
    encrypted_model_id="model-name-encrypted",  # Custom model ID
    encryption_impl="decoder-only-llm",
    dtype="bfloat16",
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Dataset encryption (paths auto-normalized)
dataset_job = client.admin.encrypt_dataset(
    encrypted_model_id="my-encrypted-model",
    dataset_name_or_path="my-dataset",  # Auto-prefixed with /models/upload/
    out_encrypted_dataset_path="my-dataset-encrypted",  # Auto-prefixed with /models/download/
    dataset_encryption_impl="numeric",
    text_fields=["text"],
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Serving control
client.admin.start_serving(
    encrypted_model_id=model_id,
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    api_key=None,  # Optional
    display_model_name="my-model"  # Optional: custom name for vLLM
)
client.admin.stop_serving(model_id)

# Job management
status = client.admin.get_job_status(job_id)
result = client.admin.wait_for_job(
    job_id, 
    poll_interval=5, 
    timeout=3600,
    callback=lambda s: print(f"Progress: {s.get('progress', 0)*100:.1f}%")
)
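The wait_for_job call above follows a standard poll-until-terminal pattern. A stdlib sketch of that loop (illustrative only, not the SDK's internals; get_status stands in for get_job_status):

```python
import time

def wait_for_job(get_status, poll_interval=5, timeout=3600, callback=None):
    """Poll get_status() until the job reaches a terminal state or the
    timeout expires. Sketch of the polling pattern; the SDK's wait_for_job
    may differ in status names and error handling."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if callback:
            callback(status)  # progress hook, same shape as the SDK callback
        if status.get("status") in ("done", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

# Simulated job that completes on the third poll
states = iter([{"status": "running", "progress": 0.3},
               {"status": "running", "progress": 0.7},
               {"status": "done", "progress": 1.0}])
result = wait_for_job(lambda: next(states), poll_interval=0, timeout=5)
print(result["status"])  # done
```

Using a monotonic clock for the deadline keeps the timeout correct even if the system clock is adjusted mid-job.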

SFTP Operations

# Get SFTP manager
sftp = client.get_sftp_manager()

# Upload model (upload/ prefix added automatically)
sftp.upload_directory(
    local_path="./llama-3-8b",
    remote_path="llama-3-8b"  # Becomes upload/llama-3-8b
)

# Download encrypted model (download/ prefix added automatically)
sftp.download_directory(
    remote_path="llama-3-8b-encrypted",  # Becomes download/llama-3-8b-encrypted
    local_path="./encrypted/llama-3-8b"
)

# List files in upload directory
files = sftp.list_upload_directory()
for file in files:
    print(f"{file.name}: {file.size_mb:.2f} MB")

# Clear download directory
sftp.clear_download_directory()

# Get directory size
size_gb = sftp.get_directory_size("upload")
print(f"Upload directory: {size_gb:.2f} GB")

# Check if file exists (via Admin API's SFTP manager)
exists = client.admin.sftp.file_exists("upload/my-model/config.json")
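get_directory_size reports gigabytes; a local analogue built on os.walk shows the arithmetic involved (a sketch only, since the SDK computes this over SFTP rather than the local filesystem):

```python
import os

def directory_size_gb(path: str) -> float:
    """Sum file sizes under `path` recursively and convert bytes to GB.
    Local sketch of the remote get_directory_size; the SDK walks the
    TEE directory over SFTP instead."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 ** 3)  # binary GB (GiB)
```

Note the binary conversion (1024³); a multi-gigabyte model directory will read slightly smaller in GiB than in decimal GB.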

Health & Testing

# Test connectivity (via CLI)
# fhenomai health check
# fhenomai test connection

# In Python - test admin API
try:
    models = client.admin.list_models()
    print(f"✓ Admin API connected ({len(models)} models)")
except Exception as e:
    print(f"✗ Admin API failed: {e}")

# Test SFTP connection
try:
    sftp = client.get_sftp_manager()
    files = sftp.list_upload_directory()
    print(f"✓ SFTP connected ({len(files)} files in upload/)")
except Exception as e:
    print(f"✗ SFTP failed: {e}")
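Before the API-level checks above, a raw TCP probe can rule out basic network problems (firewall, wrong port). check_port is a hypothetical stdlib helper, not part of the SDK, complementing the fhenomai health commands:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.
    Hypothetical pre-flight probe; does not validate the service behind
    the port, only reachability."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Probing 9099 (Admin API), 9999 (User API), and 22 (SFTP) first makes it easy to tell a network failure from an application error.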

User Inference (via OpenAI SDK)

For inference, use the standard OpenAI Python SDK:

from openai import OpenAI

# Connect to FHEnom User API (port 9999)
client = OpenAI(
    base_url="http://your-tee-ip:9999/v1",
    api_key="not-needed"  # TEE doesn't require API key
)

# Standard OpenAI-compatible inference
response = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    max_tokens=200
)

print(response.choices[0].message.content)

🛠️ Advanced Usage

Context Manager Usage

from fhenomai import FHEnomClient, FHEnomConfig

# Load config
config = FHEnomConfig.from_file()

# Context manager handles connection lifecycle
with FHEnomClient(config) as client:
    # SFTP connection auto-managed
    sftp = client.get_sftp_manager()
    
    # Upload model (upload/ prefix added automatically)
    sftp.upload_directory("./model", "model")
    
    # Encrypt (paths auto-normalized)
    job_id = client.admin.encrypt_model(
        model_name_or_path="model",
        out_encrypted_model_path="model-enc",
        encrypted_model_id="model-enc"
    )
    
    # Wait for completion
    result = client.admin.wait_for_job(job_id)
    
    if result.get('status') == 'done':
        # Download encrypted model (download/ prefix added automatically)
        sftp.download_directory(
            "model-enc",
            "./encrypted/model"
        )
# Connection automatically closed
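The lifecycle handling above is the standard Python context-manager protocol. A minimal sketch of a client that opens its connection on entry and closes it on exit (illustrative only, not the SDK's class):

```python
class SketchClient:
    """Minimal context-managed client: connect on enter, close on exit."""

    def __init__(self):
        self.connected = False

    def __enter__(self):
        self.connected = True  # a real client would open SFTP/HTTP sessions here
        return self

    def __exit__(self, exc_type, exc, tb):
        self.connected = False  # runs even if the with-body raised
        return False  # do not suppress exceptions

with SketchClient() as c:
    assert c.connected
assert not c.connected
```

Because __exit__ runs even on exceptions, the SFTP session is released whether the encryption workflow succeeds or fails partway through.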

Job Monitoring with Callbacks

import time

# Encrypt with progress callback (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="large-model",
    out_encrypted_model_path="large-model-enc",
    encrypted_model_id="large-model-enc"
)

# Define callback for progress updates
def progress_callback(status):
    progress = status.get('progress', 0) * 100
    message = status.get('message', 'Processing')
    print(f"\r{message}: {progress:.1f}%", end='', flush=True)

# Wait with callback
result = client.admin.wait_for_job(
    job_id,
    timeout=3600,
    poll_interval=5,
    callback=progress_callback
)

print(f"\nCompleted: {result.get('status')}")

📋 Configuration

Configuration File

Create ~/.fhenomai/config.yaml:
# Admin API Configuration
admin:
  host: "your-tee-ip"
  port: 9099
  url: "http://your-tee-ip:9099"  # Alternative to host+port

# User API Configuration (for inference)
user:
  host: "your-tee-ip"
  port: 9999
  url: "http://your-tee-ip:9999/v1"  # Alternative to host+port

# SFTP Configuration
sftp:
  host: "your-tee-ip"
  port: 22
  username: "admin"
  password: "your-password"  # Or use key_path
  # key_path: "~/.ssh/id_rsa"  # Alternative to password
  base_path: "/var/lib/fhenomai/FHEnomAI-server/admin"  # Optional

# Optional settings
timeout: 30
max_retries: 3
verify_ssl: true
auth_token: "default-auth-token-2026"  # X-Auth-Token header

Environment Variables

export FHENOM_ADMIN_HOST="your-tee-ip"
export FHENOM_ADMIN_PORT="9099"
export FHENOM_SFTP_HOST="your-tee-ip"
export FHENOM_SFTP_USERNAME="admin"
export FHENOM_SFTP_PASSWORD="your-password"

Then use without parameters:

from fhenomai import FHEnomClient, FHEnomConfig

# Load from environment
config = FHEnomConfig.from_env()
client = FHEnomClient(config)

# Or load from file
config = FHEnomConfig.from_file()  # Reads ~/.fhenomai/config.yaml
client = FHEnomClient(config)
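from_env presumably maps the FHENOM_* variables onto config fields. A stdlib sketch of that mapping, with SketchConfig as a hypothetical stand-in whose field names are assumed from the exports above (the real FHEnomConfig has more fields):

```python
import os
from dataclasses import dataclass

@dataclass
class SketchConfig:
    """Illustrative subset of the configuration; not the SDK's FHEnomConfig."""
    admin_host: str
    admin_port: int
    sftp_host: str
    sftp_username: str
    sftp_password: str

    @classmethod
    def from_env(cls):
        # Required values raise KeyError if missing; the port falls back
        # to the documented default 9099.
        env = os.environ
        return cls(
            admin_host=env["FHENOM_ADMIN_HOST"],
            admin_port=int(env.get("FHENOM_ADMIN_PORT", "9099")),
            sftp_host=env["FHENOM_SFTP_HOST"],
            sftp_username=env["FHENOM_SFTP_USERNAME"],
            sftp_password=env["FHENOM_SFTP_PASSWORD"],
        )
```

Environment-based loading is convenient in CI and containers, where writing ~/.fhenomai/config.yaml is awkward.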

🔧 API Reference

FHEnomClient

Main client class for FHEnom AI operations.

Key Methods:

  • admin - Access AdminAPI instance for model/serving operations
  • get_sftp_manager() - Get SFTPManager for file operations
  • Context manager support with __enter__ and __exit__

AdminAPI

Admin operations (accessible via client.admin):

Model Operations:

  • list_models() - List all encrypted models
  • list_online_models() - List currently served models
  • get_model_info(model_id) - Get model details
  • encrypt_model(...) - Encrypt a plaintext model
  • encrypt_dataset(...) - Encrypt a dataset

Serving Operations:

  • start_serving(encrypted_model_id, server_url, ...) - Start serving
  • stop_serving(encrypted_model_id) - Stop serving

Job Operations:

  • get_job_status(job_id) - Check job status
  • wait_for_job(job_id, timeout, callback) - Wait for completion

SFTP Operations (via admin.sftp):

  • Access to SFTPManager for TEE directory operations

SFTPManager

High-level SFTP operations (accessible via client.get_sftp_manager() or client.admin.sftp):

Directory Operations:

  • upload_directory(local_path, remote_path) - Upload directory
  • download_directory(remote_path, local_path) - Download directory
  • list_upload_directory() - List files in upload/
  • list_download_directory() - List files in download/
  • clear_upload_directory() - Clear upload directory
  • clear_download_directory() - Clear download directory

File Operations:

  • upload_file(local_file, remote_file) - Upload single file
  • download_file(remote_file, local_file) - Download single file
  • file_exists(remote_path) - Check if file exists
  • get_directory_size(directory) - Get size in GB

🤝 Contributing

Contributions are welcome! Please contact DataKrypto for contribution guidelines.

📄 License

This project is licensed under the MIT License - see LICENSE file.

📞 Contact

DataKrypto

United States
533 Airport Blvd. Ste 400
Burlingame, CA 94010
+1 (650) 373-2083

Italy
Via Marche, 54
00187 Rome - Italy
+39 (06) 88923849


© 2026 DataKrypto. All rights reserved.
