# FHEnom AI Python Client Library

Official Python SDK for FHEnom for AI™ - Confidential AI with fully encrypted models and data.
## 🚀 Quick Start

### Installation

```bash
pip install fhenomai
```

Or install from source:
### CLI Configuration

First, configure the CLI with your TEE server details:

```bash
# Initialize configuration (interactive)
fhenomai config init \
  --admin-host YOUR_TEE_IP \
  --admin-port 9099 \
  --user-host YOUR_TEE_IP \
  --user-port 9999 \
  --sftp-host YOUR_TEE_IP \
  --sftp-username admin \
  --sftp-password YOUR_PASSWORD

# Verify configuration
fhenomai config show

# Test connectivity
fhenomai test connection
```
### Basic CLI Usage

```bash
# List models
fhenomai model list --show-status

# Upload model via SFTP (upload/ prefix added automatically)
fhenomai sftp upload ./my-model my-model --recursive

# Encrypt model (paths normalized automatically)
fhenomai model encrypt my-model my-model-encrypted \
  --encrypted-model-id my-model-encrypted \
  --wait --show-progress

# Download encrypted model (download/ prefix added automatically)
fhenomai sftp download my-model-encrypted ./encrypted/my-model --recursive

# Start serving
fhenomai serve start my-model-encrypted \
  --server-url http://YOUR_VLLM_SERVER_IP:8000 \
  --display-model-name my-model

# Stop serving
fhenomai serve stop my-model-encrypted
```
### Basic Python SDK Usage

```python
from fhenomai import FHEnomClient, FHEnomConfig

# Load configuration from file
config = FHEnomConfig.from_file()  # Reads from ~/.fhenomai/config.yaml

# Initialize client
client = FHEnomClient(config)

# List available models
models = client.admin.list_models()
print(f"Available models: {models}")

# Encrypt a model (paths auto-prefixed with /models/upload/ and /models/download/)
job_id = client.admin.encrypt_model(
    model_name_or_path="llama-3-8b",  # Becomes /models/upload/llama-3-8b
    out_encrypted_model_path="llama-3-8b-encrypted",  # Becomes /models/download/llama-3-8b-encrypted
    encrypted_model_id="llama-3-8b-encrypted"
)

# Wait for completion
result = client.admin.wait_for_job(job_id, timeout=3600)

# Start serving
client.admin.start_serving(
    encrypted_model_id="llama-3-8b-encrypted",
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    display_model_name="llama-3-8b-instruct"  # Optional: for vLLM --served-model-name
)
```
## 📚 Features

### Core Capabilities

- CLI Tool: Full-featured command-line interface for all operations
- Python SDK: Programmatic access via `FHEnomClient` and `AdminAPI`
- Model Encryption: Encrypt models on TEE server with progress tracking
- Dataset Encryption: Encrypt datasets using encrypted models
- SFTP Integration: Upload/download with automatic path normalization
- Job Monitoring: Real-time progress updates and status checking
- Serving Control: Start/stop model serving with vLLM integration
### CLI Commands

- `config`: `init`, `show`, `validate`, `test`
- `model`: `list`, `encrypt`, `encrypt-dataset`, `info`, `upload`, `download`, `delete`
- `serve`: `start`, `stop`, `list`
- `sftp`: `upload`, `download`, `list`, `clear`
- `job`: `status`, `wait`
- `health`: `check`, `admin`, `sftp`
- `test`: `connection`, `admin`, `sftp`
### Advanced Features

- Progress Bars: Rich terminal UI with real-time progress
- Auto Path Normalization: Automatic `upload/` and `download/` prefix handling
- Duplicate Detection: Warns about existing model names
- Directory Management: Bulk operations on TEE directories
- Health Monitoring: Test connectivity to all services
- Context Manager: Automatic resource cleanup
- TEE Attestation: Generate and verify TEE attestation reports with built-in verification
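Several of the features above hinge on automatic path normalization. As an illustration only, the rule amounts to prefixing bare remote names with the transfer direction; the helper below is hypothetical and not the SDK's actual implementation:

```python
def normalize_remote_path(path: str, direction: str) -> str:
    """Prefix a bare remote name with upload/ or download/.

    Hypothetical sketch of the path-normalization rule described
    above; the real CLI/SDK logic may differ.
    """
    prefix = f"{direction}/"
    if path.startswith(prefix):
        return path  # already normalized, pass through unchanged
    return prefix + path.lstrip("/")

print(normalize_remote_path("my-model", "upload"))             # upload/my-model
print(normalize_remote_path("upload/my-model", "upload"))      # upload/my-model
print(normalize_remote_path("my-model-encrypted", "download")) # download/my-model-encrypted
```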
### TEE Attestation Support (v1.0.7)

!!! info "New in v1.0.7"
    Enhanced attestation with automatic file management, format inference, and built-in verification. Report formatting is now integrated into fhenomai for stability.

FHEnom AI includes integrated TEE attestation with AMD SEV-SNP and Intel TDX support:

```bash
# Install fhenomai (includes dk-tee-attestation for verification)
pip install fhenomai

# Generate attestation report (creates 3 files)
fhenomai admin attestation --output report.html
# Creates: report.html, report.bin, report.nonce

# Verify attestation (nonce auto-loads from report.nonce)
fhenomai admin verify-attestation --report report.bin

# Generate detailed PDF with hex dump
fhenomai admin attestation --format detailed --output analysis.pdf

# Verify with detailed output
fhenomai admin verify-attestation --report report.bin --format detailed
```
What's New in v1.0.7:

- ✨ Triple file output: All attestation commands create .html/.pdf/.txt + .bin + .nonce
- ✨ Format inference: File extension determines output type (.html, .pdf, .txt)
- ✨ Changed `--format` behavior: Now controls display style (standard/detailed), not output type
- ✨ Auto-load nonce: Verification automatically loads the .nonce file if not provided
- ✨ Built-in verification: New `verify-attestation` command with color-coded output
- ✨ Parsed reports: CPU info, TCB details, and signatures cleanly displayed
- ✨ Integrated formatter: Report formatting moved from dk-tee-attestation to fhenomai for API stability
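The format-inference behavior can be pictured as a small lookup from file extension to report type. The sketch below assumes exactly .html, .pdf, and .txt are supported, as listed above; the helper itself is hypothetical, not the shipped code:

```python
from pathlib import Path

# Hypothetical sketch of the v1.0.7 format-inference rule:
# the output file's extension selects the report type.
SUPPORTED = {".html": "html", ".pdf": "pdf", ".txt": "text"}

def infer_report_format(output_path: str) -> str:
    """Map an output filename to a report type via its extension."""
    ext = Path(output_path).suffix.lower()
    if ext not in SUPPORTED:
        raise ValueError(f"unsupported report extension: {ext!r}")
    return SUPPORTED[ext]

print(infer_report_format("report.html"))   # html
print(infer_report_format("analysis.pdf"))  # pdf
```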
Python SDK usage:

```python
from fhenomai import FHEnomClient, AttestationReportFormatter

client = FHEnomClient.from_config()

# Generate attestation (nonce auto-generated)
report = client.admin.attestation()

# Save report
with open("report.bin", "wb") as f:
    f.write(report)

# Verify attestation
result = client.admin.verify_attestation(
    report=report,
    engine_type="amd_sev_snp"
)

if result['verified']:
    print(f"✓ Verified - Platform: {result['platform']}")
    print(f"  CPU: {result['cpu_info']}")

# Use the formatter directly for custom output
formatter = AttestationReportFormatter()
html_report = formatter.format_html(report)
with open("custom_report.html", "w") as f:
    f.write(html_report)
```
Verification Features:
- ✅ ECDSA P-384 signature validation
- ✅ Nonce binding verification
- ✅ TCB (Trusted Computing Base) parsing
- ✅ CPU identification
- ✅ Color-coded hex dumps
- ✅ HTML/PDF report generation
- ✅ Platform detection (AMD SEV-SNP, Intel TDX)
## 📖 Documentation

### Admin API Operations

```python
# Model discovery
models = client.admin.list_models()
online_models = client.admin.list_online_models()
model_info = client.admin.get_model_info(model_id)

# Model encryption (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="model-name",  # Auto-prefixed with /models/upload/
    out_encrypted_model_path="model-name-encrypted",  # Auto-prefixed with /models/download/
    encrypted_model_id="model-name-encrypted",  # Custom model ID
    encryption_impl="decoder-only-llm",
    dtype="bfloat16",
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Dataset encryption (paths auto-normalized)
dataset_job = client.admin.encrypt_dataset(
    encrypted_model_id="my-encrypted-model",
    dataset_name_or_path="my-dataset",  # Auto-prefixed with /models/upload/
    out_encrypted_dataset_path="my-dataset-encrypted",  # Auto-prefixed with /models/download/
    dataset_encryption_impl="numeric",
    text_fields=["text"],
    server_ip="fhenom_ai_server",
    server_port=9100
)

# Serving control
client.admin.start_serving(
    encrypted_model_id=model_id,
    server_url="http://YOUR_VLLM_SERVER_IP:8000",  # vLLM server IP/hostname
    api_key=None,  # Optional
    display_model_name="my-model"  # Optional: custom name for vLLM
)
client.admin.stop_serving(model_id)

# Job management
status = client.admin.get_job_status(job_id)
result = client.admin.wait_for_job(
    job_id,
    poll_interval=5,
    timeout=3600,
    callback=lambda s: print(f"Progress: {s.get('progress', 0)*100:.1f}%")
)
```
### SFTP Operations

```python
# Get SFTP manager
sftp = client.get_sftp_manager()

# Upload model (upload/ prefix added automatically)
sftp.upload_directory(
    local_path="./llama-3-8b",
    remote_path="llama-3-8b"  # Becomes upload/llama-3-8b
)

# Download encrypted model (download/ prefix added automatically)
sftp.download_directory(
    remote_path="llama-3-8b-encrypted",  # Becomes download/llama-3-8b-encrypted
    local_path="./encrypted/llama-3-8b"
)

# List files in upload directory
files = sftp.list_upload_directory()
for file in files:
    print(f"{file.name}: {file.size_mb:.2f} MB")

# Clear download directory
sftp.clear_download_directory()

# Get directory size
size_gb = sftp.get_directory_size("upload")
print(f"Upload directory: {size_gb:.2f} GB")

# Check if file exists (via Admin API's SFTP manager)
exists = client.admin.sftp.file_exists("upload/my-model/config.json")
```
### Health & Testing

```python
# Test connectivity (via CLI)
# fhenomai health check
# fhenomai test connection

# In Python - test admin API
try:
    models = client.admin.list_models()
    print(f"✓ Admin API connected ({len(models)} models)")
except Exception as e:
    print(f"✗ Admin API failed: {e}")

# Test SFTP connection
try:
    sftp = client.get_sftp_manager()
    files = sftp.list_upload_directory()
    print(f"✓ SFTP connected ({len(files)} files in upload/)")
except Exception as e:
    print(f"✗ SFTP failed: {e}")
```
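The two try/except blocks above can be generalized into a single helper that runs any set of named checks. The wrapper is a hypothetical convenience, not part of the SDK; the stub callables below stand in for real calls such as `client.admin.list_models()` and `sftp.list_upload_directory()`:

```python
def run_health_checks(checks):
    """Run named health checks and collect pass/fail results.

    `checks` maps a service name to a zero-argument callable that
    raises on failure. Returns {name: (ok, error_message_or_None)}.
    """
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = (True, None)
        except Exception as exc:
            results[name] = (False, str(exc))
    return results

def failing_check():
    # Stub standing in for an unreachable service
    raise ConnectionError("refused")

results = run_health_checks({
    "admin": lambda: None,   # stub for a passing check
    "sftp": failing_check,
})
for name, (ok, err) in results.items():
    print(f"{'✓' if ok else '✗'} {name}" + (f": {err}" if err else ""))
```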
### User Inference (via OpenAI SDK)

For inference, use the standard OpenAI Python SDK:

```python
from openai import OpenAI

# Connect to FHEnom User API (port 9999)
client = OpenAI(
    base_url="http://your-tee-ip:9999/v1",
    api_key="not-needed"  # TEE doesn't require an API key
)

# Standard OpenAI-compatible inference
response = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    max_tokens=200
)
print(response.choices[0].message.content)
```
## 🛠️ Advanced Usage

### Context Manager Usage

```python
from fhenomai import FHEnomClient, FHEnomConfig

# Load config
config = FHEnomConfig.from_file()

# Context manager handles connection lifecycle
with FHEnomClient(config) as client:
    # SFTP connection auto-managed
    sftp = client.get_sftp_manager()

    # Upload model (upload/ prefix added automatically)
    sftp.upload_directory("./model", "model")

    # Encrypt (paths auto-normalized)
    job_id = client.admin.encrypt_model(
        model_name_or_path="model",
        out_encrypted_model_path="model-enc",
        encrypted_model_id="model-enc"
    )

    # Wait for completion
    result = client.admin.wait_for_job(job_id)

    if result.get('status') == 'done':
        # Download encrypted model (download/ prefix added automatically)
        sftp.download_directory(
            "model-enc",
            "./encrypted/model"
        )

# Connection automatically closed
```
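The with-block works because the client implements the standard context-manager protocol (`__enter__`/`__exit__`). The toy class below illustrates that pattern in isolation; it is a stand-in for demonstration, not the real FHEnomClient:

```python
class ManagedClient:
    """Toy illustration of the context-manager pattern the client uses:
    __enter__ hands back the client, __exit__ releases resources even
    when the body raises."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # cleanup always runs
        return False         # don't swallow exceptions

with ManagedClient() as c:
    pass  # do work with the client
print(c.closed)  # True
```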
### Job Monitoring with Callbacks

```python
# Encrypt with progress callback (paths auto-normalized)
job_id = client.admin.encrypt_model(
    model_name_or_path="large-model",
    out_encrypted_model_path="large-model-enc",
    encrypted_model_id="large-model-enc"
)

# Define callback for progress updates
def progress_callback(status):
    progress = status.get('progress', 0) * 100
    message = status.get('message', 'Processing')
    print(f"\r{message}: {progress:.1f}%", end='', flush=True)

# Wait with callback
result = client.admin.wait_for_job(
    job_id,
    timeout=3600,
    poll_interval=5,
    callback=progress_callback
)
print(f"\nCompleted: {result.get('status')}")
```
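Conceptually, waiting on a job is a poll-sleep loop around the status endpoint. The sketch below shows the shape of such a loop, including the timeout and callback hooks; it is a hypothetical illustration driven by a stubbed status source, not the SDK's internals (the 'failed' terminal state is an assumption):

```python
import time

def wait_for_job_sketch(get_status, poll_interval=5, timeout=3600, callback=None):
    """Poll get_status() until a terminal state or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if callback:
            callback(status)
        if status.get('status') in ('done', 'failed'):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("job did not finish in time")
        time.sleep(poll_interval)

# Stubbed status source: reports 50% once, then completion.
states = iter([
    {'status': 'running', 'progress': 0.5},
    {'status': 'done', 'progress': 1.0},
])
result = wait_for_job_sketch(lambda: next(states), poll_interval=0)
print(result['status'])  # done
```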
## 📋 Configuration
### Configuration File
Create `~/.fhenomai/config.yaml`:
```yaml
# Admin API Configuration
admin:
  host: "your-tee-ip"
  port: 9099
  url: "http://your-tee-ip:9099"  # Alternative to host+port

# User API Configuration (for inference)
user:
  host: "your-tee-ip"
  port: 9999
  url: "http://your-tee-ip:9999/v1"  # Alternative to host+port

# SFTP Configuration
sftp:
  host: "your-tee-ip"
  port: 22
  username: "admin"
  password: "your-password"  # Or use key_path
  # key_path: "~/.ssh/id_rsa"  # Alternative to password
  base_path: "/var/lib/fhenomai/FHEnomAI-server/admin"  # Optional

# Optional settings
timeout: 30
max_retries: 3
verify_ssl: true
auth_token: "default-auth-token-2026"  # X-Auth-Token header
```
### Environment Variables

```bash
export FHENOM_ADMIN_HOST="your-tee-ip"
export FHENOM_ADMIN_PORT="9099"
export FHENOM_SFTP_HOST="your-tee-ip"
export FHENOM_SFTP_USERNAME="admin"
export FHENOM_SFTP_PASSWORD="your-password"
```
Then use without parameters:

```python
from fhenomai import FHEnomClient, FHEnomConfig

# Load from environment
config = FHEnomConfig.from_env()
client = FHEnomClient(config)

# Or load from file
config = FHEnomConfig.from_file()  # Reads ~/.fhenomai/config.yaml
client = FHEnomClient(config)
```
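As an illustration of what environment-based loading presumably consumes, here is a hand-rolled sketch that reads the FHENOM_* variables documented above into a plain dict. This is hypothetical and not the SDK's implementation; the fallback behavior is an assumption:

```python
import os

# Seed defaults so the sketch runs standalone; a real shell would
# export these as shown above.
os.environ.setdefault("FHENOM_ADMIN_HOST", "your-tee-ip")
os.environ.setdefault("FHENOM_ADMIN_PORT", "9099")

config = {
    "admin_host": os.environ["FHENOM_ADMIN_HOST"],
    "admin_port": int(os.environ["FHENOM_ADMIN_PORT"]),
    # Assumed fallback: reuse the admin host when no SFTP host is set
    "sftp_host": os.environ.get("FHENOM_SFTP_HOST",
                                os.environ["FHENOM_ADMIN_HOST"]),
}
print(config)
```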
## 🔧 API Reference

### FHEnomClient

Main client class for FHEnom AI operations.

Key Methods:

- `admin` - Access AdminAPI instance for model/serving operations
- `get_sftp_manager()` - Get SFTPManager for file operations
- Context manager support with `__enter__` and `__exit__`
### AdminAPI

Admin operations (accessible via `client.admin`):

Model Operations:

- `list_models()` - List all encrypted models
- `list_online_models()` - List currently served models
- `get_model_info(model_id)` - Get model details
- `encrypt_model(...)` - Encrypt a plaintext model
- `encrypt_dataset(...)` - Encrypt a dataset

Serving Operations:

- `start_serving(encrypted_model_id, server_url, ...)` - Start serving
- `stop_serving(encrypted_model_id)` - Stop serving

Job Operations:

- `get_job_status(job_id)` - Check job status
- `wait_for_job(job_id, timeout, callback)` - Wait for completion

SFTP Operations (via `admin.sftp`):

- Access to SFTPManager for TEE directory operations
### SFTPManager

High-level SFTP operations (accessible via `client.get_sftp_manager()` or `client.admin.sftp`):

Directory Operations:

- `upload_directory(local_path, remote_path)` - Upload directory
- `download_directory(remote_path, local_path)` - Download directory
- `list_upload_directory()` - List files in upload/
- `list_download_directory()` - List files in download/
- `clear_upload_directory()` - Clear upload directory
- `clear_download_directory()` - Clear download directory

File Operations:

- `upload_file(local_file, remote_file)` - Upload single file
- `download_file(remote_file, local_file)` - Download single file
- `file_exists(remote_path)` - Check if file exists
- `get_directory_size(directory)` - Get size in GB
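Combining `file_exists` with `upload_file` gives a simple idempotent upload. The wrapper and the in-memory fake below are hypothetical illustrations; only the two method names come from the list above:

```python
def upload_if_missing(sftp, local_file, remote_file):
    """Upload a file only if it is not already on the TEE server.

    Hypothetical convenience wrapper, not part of the SDK; `sftp`
    is any object exposing file_exists() and upload_file().
    """
    if sftp.file_exists(remote_file):
        return False  # already present, nothing to do
    sftp.upload_file(local_file, remote_file)
    return True

# Minimal in-memory fake standing in for a real SFTPManager:
class FakeSFTP:
    def __init__(self):
        self.files = {"upload/my-model/config.json"}
    def file_exists(self, path):
        return path in self.files
    def upload_file(self, local, remote):
        self.files.add(remote)

fake = FakeSFTP()
print(upload_if_missing(fake, "./config.json", "upload/my-model/config.json"))  # False
print(upload_if_missing(fake, "./weights.bin", "upload/my-model/weights.bin"))  # True
```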
## 🤝 Contributing

Contributions are welcome! Please contact DataKrypto for contribution guidelines.
## 📄 License

This project is licensed under the MIT License - see the LICENSE file.

## 🔗 Links
- Repository: Azure DevOps
- Documentation: https://docs.datakrypto.ai
- Website: https://datakrypto.ai
- LinkedIn: DataKrypto
- Support: support@datakrypto.ai
## 📞 Contact
DataKrypto
United States
533 Airport Blvd. Ste 400
Burlingame, CA 94010
+1 (650) 373-2083
Italy
Via Marche, 54
00187 Rome - Italy
+39 (06) 88923849
© 2026 DataKrypto. All rights reserved.
File details

Details for the file fhenomai-1.0.18.tar.gz.

File metadata:

- Download URL: fhenomai-1.0.18.tar.gz
- Upload date:
- Size: 69.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d8894da96f39ec7f5529f4ca7511f2bd31edb7a86e5dc662c4d78a9e6c1379df` |
| MD5 | `9064352b482121d90d6182373c52622e` |
| BLAKE2b-256 | `6386ad65307ca0fc2e0327daf555618097e32cbb6eb6418f3000a1b772cebdde` |
File details

Details for the file fhenomai-1.0.18-py3-none-any.whl.

File metadata:

- Download URL: fhenomai-1.0.18-py3-none-any.whl
- Upload date:
- Size: 63.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b323ef7144c0dd4cd16dbb1f74dc3ec5d7a16d859a1b54ff387c6528c18d1708` |
| MD5 | `bce252d281de7af45fd94b1ef608a783` |
| BLAKE2b-256 | `198ea4300dc266a8ad69405ca027ef7170ffb6149566c901059ac3af5ae4891f` |