Federated Learning and Fully Homomorphic Encryption
Project description
Cifer is a Federated Learning framework with integrated Fully Homomorphic Encryption (FHE) for secure, decentralized model training and encrypted aggregation.
It improves model robustness, reduces bias, and handles distribution shift across non-IID data.
Supports both centralized and decentralized topologies by default, with optional Cifer Blockchain integration for auditability and provenance.
🌎 Website | 📔 Docs | 🙌 Join Slack
Cifer Python Package (PyPI)
The cifer Python package provides a secure, programmatic interface for executing Privacy-Preserving Machine Learning (PPML) workflows. It enables local and distributed model training using Federated Learning (FL) and Fully Homomorphic Encryption (FHE)—without ever exposing raw data.
This package is ideal for Python developers, researchers, and data scientists who need fine-grained control over federated workflows within trusted or adversarial environments.
For alternative development workflows:
- Use the Cifer Python Package for direct integration into custom Python-based ML pipelines
- Use Cifer Workspace for browser-based, no-code orchestration and a collaborative workspace
What is Cifer Federated Learning?
Cifer Federated Learning (FedLearn) is a secure training framework that enables collaborative machine learning across distributed data sources—without ever sharing raw data. Each participant (or node) performs local training, and only encrypted model updates are exchanged across the network.
Rather than centralizing data into a vulnerable repository, Cifer coordinates encrypted computations between participants, preserving data sovereignty, compliance, and confidentiality across jurisdictions and organizations.
Key Extensions Beyond Standard FL
- Fully Homomorphic Encryption (FHE)
  Cifer integrates FHE at the protocol level, allowing model updates and gradients to be computed on encrypted tensors. This ensures data remains encrypted throughout the lifecycle—including training, aggregation, and communication.
  Unlike differential privacy (DP), which introduces noise and cannot fully prevent reconstruction attacks, FHE offers cryptographic guarantees against adversarial inference—even in hostile environments.
- Dual Topology Support: Centralized and Decentralized
  Cifer supports both:
  - Client–Server (cFL): A central coordinator aggregates updates from authenticated participants—ideal for trusted, enterprise-level deployments.
  - Peer-to-Peer (dFL): Participants operate without a central aggregator, enabling direct encrypted update exchanges across nodes for higher resilience.
- Secure Communication Channels
  All communication is conducted over gRPC, leveraging HTTP/2 and Protocol Buffers for efficient, multiplexed, and encrypted transport. This ensures fast synchronization while minimizing attack surfaces.
- Blockchain Integration (Optional)
  For use cases requiring immutable audit trails, decentralized identity, or consensus-based coordination, Cifer supports integration with its proprietary Cifer Blockchain Network, providing an additional layer of provenance and tamper resistance.
Federated Learning and the Adversarial Threat Model
Standard federated learning protocols are susceptible to:
- Gradient leakage and model inversion attacks
- Malicious participant injection
- Data reconstruction through side-channel inference
The industry trend has been to use differential privacy (DP) to mitigate these threats. However:
- DP requires complex tuning of privacy budgets (ε, δ)
- It introduces statistical noise, reducing model accuracy
- It provides probabilistic—not cryptographic—guarantees, and can still leak information under repeated queries or cumulative exposure
Cifer’s FHE-based design eliminates these risks by ensuring that all shared model artifacts remain mathematically unreadable, even under active attack or node compromise.
Performance Capacity
Cifer FedLearn is built for real-world scale:
- Supports client-server and P2P topologies
- Tested for model sizes and parameter transfers up to 30GB
- Optimized for GPU acceleration, NUMA-aware compute, and multi-node orchestration
Core Modules
- FedLearn
  Orchestrates decentralized training across multiple nodes while maintaining data locality. Supports both:
  - Centralized FL (cFL) for governed, trusted environments
  - Decentralized FL (dFL) with peer coordination across encrypted channels
- HomoCryption (FHE)
  Allows computation on encrypted data throughout the training lifecycle, preserving privacy even during intermediate operations.
Key Capabilities
- Hybrid Federation Support
  Choose between cFL or dFL architectures depending on governance, trust, and fault-tolerance requirements.
- Secure Communication Protocol
  Powered by gRPC with HTTP/2 and Protocol Buffers:
  - Low-latency streaming
  - Compact serialized messages
  - Built-in encryption and authentication
- End-to-End Encrypted Computation
  FHE is embedded directly into the training workflow. No intermediate decryption. Data privacy is mathematically guaranteed.
Before Getting Started
To ensure a smooth experience using Cifer for Federated Learning (FL) and Fully Homomorphic Encryption (FHE), please verify your system meets the following baseline requirements:
System Requirements
- Operating System
- Linux (Ubuntu 18.04 or later)
- macOS (10.14 or later)
- Windows 10 or later
- Python
- Version: 3.9 or later (`python_requires=">=3.9"`; 3.10 and 3.11 validated)
- Memory
- Minimum: 8 GB RAM
- Recommended: 16 GB+ for large-scale training or encryption tasks
- Storage
- At least 30 GB of available disk space
- Network
- Stable internet connection (required for remote collaboration or coordination modes)
GPU Acceleration (Optional)
Cifer supports GPU acceleration for both FL and FHE components using:
- NVIDIA CUDA (for TensorFlow, PyTorch pipelines)
- Google TPU (via JAX and compatible backends)
While GPU is not mandatory, it is highly recommended for encrypted training at scale or production-grade deployments.
Getting Started with Cifer’s Federated Learning
Cifer provides a modular Federated Learning (FL) framework that enables privacy-preserving model training across distributed environments. To get started, install the package via pip, import the required modules, and choose your preferred communication method for orchestration.
What's Included in pip install cifer
Installing Cifer via pip provides the following components and features:
Core Modules
- FedLearn: Federated learning engine for decentralized model training.
- HomoCryption: Fully Homomorphic Encryption (FHE) for computation on encrypted data.
Integrations
- Built-in compatibility with TensorFlow, PyTorch, scikit-learn, NumPy, CUDA, JAX, Hugging Face Transformers.
Utilities
- Data preprocessing tools
- Privacy-preserving metrics
- Secure aggregation algorithms
Cryptographic Libraries
- Integration with advanced homomorphic encryption backends
Communication Layer
- gRPC-based secure communication protocols for FL orchestration
Command-Line Interface (CLI)
- CLI client for managing experiments and configurations
Example Notebooks
- Jupyter notebooks demonstrating end-to-end workflows
Optional Dependencies
Install extras using:
pip install cifer[extra]
Options:
- viz: Visualization tools
- gpu: GPU acceleration support
- all: Installs all optional dependencies
1. Install Cifer
pip install cifer
To include all optional features:
pip install cifer[all]
2. Import Required Modules
from cifer import fedlearn as fl
3. Choose a Communication Method
Cifer supports two communication modes for FL orchestration:
Method A: gRPC Client with JSON Configuration
Connect to a remote Cifer server via gRPC using a JSON config file.
- Best for: Quick onboarding, lightweight setup, connecting to Cifer Workspace backend.
- Setup: Supply a JSON file with encoded credentials and project parameters.
- No server deployment required—ideal for minimal infrastructure environments.
{
"base_api": "http://localhost:5000",
"port": 8765,
"n_round_clients": 2,
"use_homomorphic": true,
"use_secure": true,
"certfile": "certificate.pem",
"keyfile": "private_key.pem"
}
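For reference, a minimal sketch (not part of the cifer API) of loading and sanity-checking such a config before handing it to the client. The required key names mirror the example config above; CiferClient performs its own parsing internally.

```python
import json

# Hypothetical helper: the key set mirrors the example config above.
REQUIRED_KEYS = {"base_api", "port", "n_round_clients", "use_homomorphic", "use_secure"}

def load_config(path):
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing keys: {sorted(missing)}")
    return config
```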
Method B: Self-Hosted WebSocket Server
Deploy your own FL coordination server using WebSocket (WSS).
- Best for: On-premise, private environments or regulated sectors.
- Setup: Launch your own server and connect clients within the same secure channel.
{
"ip_address": "wss://0.0.0.0",
"port": 8765,
"n_round_clients": 2,
"use_homomorphic": true,
"use_secure": true,
"certfile": "certificate.pem",
"keyfile": "private_key.pem"
}
4. Define Your Dataset and Base Model
This is where users prepare their local training environment before initiating any FL round.
Define Dataset
You must prepare and point to a local dataset for training. Cifer expects standardized input for consistency across participants.
- Supported formats: NumPy (.npy, .npz), CSV, or TFRecords
- Recommended: Preprocess and normalize data before training
dataset_path = "YOUR_DATASET_PATH"
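As an illustrative sketch of the recommended preprocessing (this is not a cifer API), the snippet below loads a local `.npz` archive and min-max normalizes the features so all participants train on comparably scaled inputs. The array names `x` and `y` are assumptions.

```python
import numpy as np

def prepare_dataset(path):
    # "x" (features) and "y" (labels) are assumed key names in the .npz archive.
    data = np.load(path)
    x = data["x"].astype(np.float32)
    y = data["y"]
    # Min-max normalize each feature column; guard against constant columns.
    lo, hi = x.min(axis=0), x.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)
    return (x - lo) / span, y
```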
Define Base Model
You can define your ML model in three ways: using a local file, cloning from GitHub, or downloading from Hugging Face.
Option 1: Create Model Locally
model_path = "YOUR_MODEL_PATH"
Option 2: Load Model from GitHub
Clone the model repository and point to the .h5 or .pt file.
git clone https://github.com/example/model-repo.git models/
Then specify:
model_path = "models/your_model.h5"
Option 3: Load Pretrained Model from Hugging Face
Install transformers if needed:
pip install transformers
Download and configure:
from transformers import AutoModel
model_path = "models/huggingface_model"
model = AutoModel.from_pretrained("bert-base-uncased")
model.save_pretrained(model_path)
Then reference:
model_path = "models/huggingface_model"
5. Start the Training Process
Once your dataset and base model are defined, you can initialize the federated learning process. Cifer supports both server (Fed Master) and client (Contributor) roles depending on your deployment mode.
Method A: gRPC Client with JSON Configuration
This method is ideal if you're using Cifer's hosted infrastructure (via Cifer Workspace) and want to avoid setting up your own server.
1. Prepare JSON Configuration
Create a config.json file with the following structure:
{
"ip_address": "https://localhost",
"port": 5000,
"n_round_clients": 2,
"use_homomorphic": true,
"use_secure": true,
"certfile": "certificate.pem",
"keyfile": "private_key.pem"
}
2. Start Training
from cifer import CiferClient
client = CiferClient(config_path="config.json")
client.run()
This connects the client to Cifer’s gRPC backend, performs local training, and submits encrypted model updates.
Method B: Self-Hosted WebSocket Server
Use this method if you want full control over orchestration and deployment, or if you need to run the system entirely on-premise.
Server: Launch Aggregation Coordinator
from cifer import fedlearn as fl
server = fl.Server()
strategy = fl.strategy.FedAvg(
    data_path="dataset/mnist.npy",
    model_path="model/mnist_model.h5"
)
server.run(strategy)
Client: Start Local Training
from cifer import CiferClient
client = CiferClient(
    encoded_project_id="YOUR_PROJECT_ID",
    encoded_company_id="YOUR_COMPANY_ID",
    encoded_client_id="YOUR_CLIENT_ID",
    base_api="wss://yourserver.com",
    dataset_path="dataset/mnist.npy",
    model_path="model/mnist_model.h5"
)
client.run()
⚠️ Ensure that your WebSocket server is reachable via a wss:// secure connection.
Both methods will iteratively perform local training, encrypted aggregation, and global model updates across multiple rounds.
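Conceptually, each round on a client follows the same loop regardless of method. The sketch below uses placeholder callables (`train_locally`, `encrypt`, `send_update`, `receive_global` are not cifer APIs) purely to show the data flow:

```python
def run_round(weights, local_data, train_locally, encrypt, send_update, receive_global):
    """One conceptual FL round from a client's perspective (illustrative only)."""
    updated = train_locally(weights, local_data)  # local gradient steps on private data
    send_update(encrypt(updated))                 # only the encrypted update leaves the node
    return receive_global()                       # server returns the aggregated global model
```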
6. Aggregation Process
Federated Aggregation is the core of Cifer’s coordination loop—where encrypted model updates from clients are securely combined into a global model.
Method A: gRPC with JSON Configuration (Cifer Workspace)
When using the gRPC method (via CiferClient), aggregation is automatically handled by Cifer's managed infrastructure.
- No manual aggregation code is required.
- After each client sends its local update, the server:
- Decrypts (if FHE is enabled)
- Aggregates using the selected strategy (e.g., FedAvg, FedSGD)
- Sends back the updated model to each client
Best for teams who want rapid onboarding with minimal infra overhead.
To customize strategy: Contact the Cifer Workspace team to enable custom orchestration logic.
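For intuition, the standard FedAvg rule is an example-count-weighted mean of the client weights. A minimal NumPy sketch, independent of cifer's internals:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # Global update = weighted mean of client weight arrays,
    # weighted by each client's number of training examples.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```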
Method B: Self-Hosted WebSocket Server
When running your own aggregation server, you have full control over the aggregation algorithm.
Example using FedAvg:
from cifer import fedlearn as fl
server = fl.Server()
strategy = fl.strategy.FedAvg(
    data_path="dataset/mnist.npy",
    model_path="model/mnist_model.h5"
)
server.run(strategy)
You may substitute FedAvg with other strategies (e.g., FedProx, FedYogi) or define your own:
class CustomAggregation(fl.strategy.BaseStrategy):
    def aggregate(self, updates):
        # Example: unweighted element-wise mean of client updates,
        # assuming each update is a list of per-layer weight arrays
        aggregated_model = [sum(layers) / len(layers) for layers in zip(*updates)]
        return aggregated_model

strategy = CustomAggregation()
server.run(strategy)
- If FHE is enabled, aggregation will happen on encrypted tensors. Ensure your aggregation strategy supports homomorphic operations (e.g., addition, averaging).
- Full FHE documentation is provided in a later section.
Monitoring Aggregation
For both methods:
- Aggregation rounds run until the configured number of epochs or convergence is reached.
- Logs for each round (loss, accuracy, gradient stats) are available via CLI or Jupyter Notebook (in CLI or Workspace mode).
- You can visualize training progress using Cifer’s optional viz module:
pip install cifer[viz]
Aggregation with WebSocket Server (Method B)
If you're running a self-hosted WebSocket server (wss://), aggregation must be explicitly defined and handled in your orchestration logic.
from cifer import fedlearn as fl
server = fl.Server()
strategy = fl.strategy.FedAvg(
    data_path="/path/to/data",
    model_path="/path/to/model"
    # Optionally: encryption=True if using FHE
)
server.run(strategy)
FedAvg is the default aggregation strategy. You may replace it with any supported method (e.g., FedProx, SecureFed).
When using FHE, make sure your aggregation method only uses additive-compatible operations.
This method gives full control over server lifecycle, custom hooks, and logging.
Getting Started with Cifer’s Homomorphic Encryption (FHE)
Cifer includes a built-in homocryption module for Fully Homomorphic Encryption (FHE), allowing computation on encrypted tensors without exposing raw data. You can encrypt, perform arithmetic, relinearize, and decrypt—all while preserving confidentiality.
1. Import HomoCryption Module
from cifer.securetrain import (
generate_named_keys,
encrypt_dataset,
train_model,
decrypt_model,
)
2. Generate Keys
For reference, generate_named_keys is implemented roughly as follows (imports added for completeness):

import os
import pickle

from phe import paillier

def generate_named_keys(key_name):
    print(f"🔐 Generating public/private key pair for: {key_name}")
    pubkey, privkey = paillier.generate_paillier_keypair()
    dir_path = f"keys/{key_name}"
    os.makedirs(dir_path, exist_ok=True)
    with open(os.path.join(dir_path, "public.key"), "wb") as f:
        pickle.dump(pubkey, f)
    with open(os.path.join(dir_path, "private.key"), "wb") as f:
        pickle.dump(privkey, f)
    print(f"✅ Keys saved to: {dir_path}/public.key, {dir_path}/private.key")
    return pubkey, privkey
3. Encrypt Data
import os
import pickle

key_name = "YOUR_KEY_NAME"
output_path = "encrypted/dataset.pkl"

pubkey, _ = generate_named_keys(key_name)

print("🔐 Encrypting dataset...")
enc_df = df.copy()  # df: a pandas DataFrame of numeric values
for col in enc_df.columns:
    enc_df[col] = enc_df[col].apply(lambda x: pubkey.encrypt(x))

os.makedirs(os.path.dirname(output_path), exist_ok=True)
print(f"💾 Saving encrypted dataset to: {output_path}")
with open(output_path, "wb") as f:
    pickle.dump(enc_df, f)
4. Perform Encrypted Computation
Example: Add two encrypted values
Paillier ciphertexts support addition with other ciphertexts and multiplication by plaintext scalars:

pubkey, privkey = generate_named_keys(key_name)

enc_a = pubkey.encrypt(15)
enc_b = pubkey.encrypt(27)

enc_sum = enc_a + enc_b       # ciphertext + ciphertext
enc_scaled = enc_a * 3        # ciphertext * plaintext scalar

print(privkey.decrypt(enc_sum))     # 42
print(privkey.decrypt(enc_scaled))  # 45
Apply relinearization to manage ciphertext noise:
import tenseal as ts

# Create a CKKS context (the parameters here are illustrative)
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40

# Encrypt two vectors
vec1 = ts.ckks_vector(context, [1.0, 2.0, 3.0])
vec2 = ts.ckks_vector(context, [4.0, 5.0, 6.0])

# Multiply and relinearize
encrypted_result = vec1 * vec2
encrypted_result.relinearize()  # 👈 This is the relinearize step
decrypted = encrypted_result.decrypt()
5. Decrypt Result
Made self-contained as a function with its imports (load_private_key is assumed available, as in the snippet above; saving via joblib is an addition):

import pickle

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_from_encrypted(encrypted_path, feature_cols, label_col, key_name, output_model_path):
    with open(encrypted_path, "rb") as f:
        enc_df = pickle.load(f)
    print("🔄 Extracting features and labels...")
    try:
        X_enc = enc_df[feature_cols].values.tolist()
        y_enc = enc_df[label_col].values.tolist()
    except KeyError as e:
        print(f"❌ Column error: {e}")
        return
    print(f"📂 Loading private key to decrypt data for training: {key_name}")
    privkey = load_private_key(key_name)
    try:
        X_plain = np.array([[privkey.decrypt(val) for val in row] for row in X_enc])
        y_plain = np.array([privkey.decrypt(val) for val in y_enc])
    except Exception as e:
        print(f"❌ Failed to decrypt: {e}")
        return
    print("✅ Label distribution:", np.unique(y_plain, return_counts=True))
    if len(np.unique(y_plain)) < 2:
        print("❌ Need at least 2 classes in the dataset for training.")
        return
    print("🧠 Training model using decrypted values...")
    clf = LogisticRegression()
    clf.fit(X_plain, y_plain)
    print(f"💾 Saving trained model to: {output_model_path}")
    joblib.dump(clf, output_model_path)
| Operation | Method | Compatible with Aggregation |
|---|---|---|
| Addition | hc.add() | ✅ Yes |
| Multiplication | hc.mul() | ⚠️ Partially (check noise) |
| Relinearize | hc.relinearize() | ✅ Required after mul() |
| Decryption | hc.decrypt() | 🔐 Private key required |
FHE in Aggregation Context
When using FHE-enabled federated learning:
- Each client encrypts model weights before sending
- The server performs aggregation (e.g., summing encrypted tensors)
- Final decryption happens at a trusted node after aggregation
- Only compatible operations (addition, averaging) are supported
⚠️ If FHE is enabled, make sure your aggregation strategy supports encrypted arithmetic.
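To make that flow concrete, the sketch below uses a toy additive mask as a stand-in for real Paillier/CKKS ciphertexts. It provides no security and is not cifer's implementation; it only shows where each step runs:

```python
def aggregate_encrypted(client_values, mask):
    # Each client "encrypts" its weight locally (toy additive mask, NOT secure).
    encrypted = [v + mask for v in client_values]
    # The server only ever adds ciphertexts together (the additive property
    # real FHE schemes provide) and never sees a plaintext weight.
    encrypted_sum = sum(encrypted)
    # A trusted key-holding node decrypts the sum, then averages in plaintext.
    total = encrypted_sum - len(client_values) * mask
    return total / len(client_values)
```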
Learn More
For detailed examples, deployment patterns, and advanced configurations:
- Full documentation: https://www.cifer.ai/docs
- GitHub repository: https://github.com/ciferai/cifer
- Developer support: support@cifer.ai
Changelog
[1.1.0] – 2026-02-12
Added • Introduced Hybrid SaaS Agent (ACE v1.1) for secure cloud-controlled local execution. • Added /run_notebook, /status/{execution_id}, /executions, and /health endpoints. • Implemented per-project workspace isolation.
Security • Added optional Bearer token auth (CIFER_AGENT_TOKEN). • Added CORS and notebook domain allowlist. • Enforced .ipynb validation, timeout, and file size limits.
Improved • Background notebook execution with status tracking. • Decoupled agent from TensorFlow dependency for lightweight deployment.
[1.0.30] – 2026-01-29
Improved
• Improved package modularity by separating core dependencies from optional domain-specific features using extras_require.
• Reduced default installation footprint to support lightweight server-only and production deployments.
• Enhanced server readiness and operational stability in preparation for preflight checks and improved network resilience.
• Improved overall production usability and maintainability without introducing breaking changes or modifying existing aggregation logic.
Notes
• This release focuses on packaging improvements and production hardening.
• Existing users can upgrade safely without changing their current workflows.
• Optional features can now be installed selectively via extras (e.g. cifer[vision], cifer[audio], cifer[all]).
[1.0.29.1] – 2026-01-29
Improved
• Added optional dry_run mode to the PPML server to allow aggregation and validation without uploading aggregated models.
• Introduced aggregation summary metadata (last_aggregation_summary) to support auditing, debugging, and future monitoring dashboards.
• Enhanced server observability with structured logging and execution-time metrics while preserving existing CLI output behavior.
• Improved internal extensibility by isolating additive capabilities without modifying existing aggregation or upload logic.
[1.0.29] – 2026-01-29
Improved
• Added structured logging and execution-time metrics to the PPML server for better observability and performance monitoring.
• Enhanced server reliability and audit readiness without changing existing aggregation logic or client behavior.
[1.0.28] – 2026-01-19
Improved
• Improved PPML server aggregation performance by eliminating unnecessary disk I/O and loading client models directly from memory during aggregation.
• Fixed server-side FedAvg invocation to correctly aggregate multiple client models without nested list errors.
• Added strict server-side validation for model layer count and weight shape compatibility to prevent silent aggregation failures.
• Enhanced robustness of server execution by failing early on invalid or incompatible client models.
• Reduced server–client coupling by isolating server functionality from client-side dependencies, enabling independent server execution and testing.
[1.0.27] - 2026-01-11
Improved
• Added structured logging across the client lifecycle to improve observability, debugging, and audit readiness.
• Introduced dataset validation to ensure required training data keys and shape consistency before training.
• Enhanced error reporting for dataset loading, model training, aggregation, and API communication without altering existing logic.
• Improved runtime stability by failing early on invalid datasets, model shape mismatches, and missing resources.
• Strengthened production readiness while preserving backward compatibility and experimental feature support.
[1.0.26] - 2026-01-05
Improved
• Improved server-side network performance by reusing HTTP connections with `requests.Session`.
• Added configurable request timeouts to prevent stalled API calls during federated aggregation.
• Enhanced API error handling by failing fast on HTTP errors for more predictable server behavior.
• Increased stability and reliability of server-to-API communication without changing aggregation logic.
[1.0.25] - 2025-12-15
Fixed
• Fixed issues caused by emojis and decorative icons in CLI outputs that could break plain-text or non-Unicode environments.
• Resolved inconsistencies in CLI help texts and status messages.
Improved
• Improved CLI output performance by simplifying message formatting.
• Enhanced clarity and reliability of error handling and status reporting.
• Reduced unnecessary output complexity to make CLI responses faster and more predictable.
[1.0.24] - 2025-09-21
Fixed
- Removed all emojis/icons from CLI help texts and outputs to ensure compatibility with plain-text environments.
- Simplified output messages for clearer error handling and status reporting.
Improved
- Refactored CLI code for better readability by organizing commands into clear sections (Securetrain, Kernel, Agent, Notebook, Sync, Training Simulation).
- Added structured comments and standardized indentation/spacing.
- Renamed variables (e.g., `r` → `response` in `requests.get`) for clarity.
[1.0.23] - 2025-08-09
Fixed
- Resolved build and package verification issues by ensuring `setuptools`, `wheel`, and `twine` are properly installed within a virtual environment (venv).
- Addressed installation restrictions on macOS caused by the `externally-managed-environment` limitation.
Improved
- Enhanced documentation and workflow for publishing the package to PyPI.
- Improved pre-upload validation by integrating `twine check` to prevent errors before release.
[1.0.22] - 2025-08-06
Improved
- Refactored `FederatedServer` codebase to support dual communication protocols (WebSocket + gRPC) for more flexible federated learning setups.
- Enhanced CLI experience: users can now run `securetrain` commands directly (e.g., `cifer securetrain train`) without needing to call Python functions manually after installing via `pip install cifer`.
Fixed
- General bug fixes and performance improvements.
[1.0.16]-[1.0.17] - 2025-06-01
Fixed
- Resolved ASGI app load error by specifying the correct module path: `cifer.agent_ace:app`.
Changed
- Updated `uvicorn.run()` in `run_agent_ace()` to use the proper module path for FastAPI app loading.
Added
- Verified kernel registration for `cifer-kernel`.
- Fu
[1.0.15] - 2025-05-31
✅ [Improved] FastAPI Migration
- Migrated from Flask to FastAPI for the `/run_notebook` agent endpoint.
- Enhanced performance and scalability using the `uvicorn` ASGI server.
- Full CORS middleware support added via FastAPI's built-in capabilities.
- Swagger/OpenAPI docs now available at `/docs`.
✅ [Fixed] Python Compatibility & Kernel Registration
- Improved `ensure_kernel_registered()` logic to use the current `sys.executable` Python version.
- Fixed Python version enforcement in `setup.py` (`python_requires=">=3.9"`).
- Added compatibility checks for Jupyter kernel auto-registration.
- Improved fallback behavior if `notebookapp` fails to resolve the current Jupyter directory.
✅ [New] Dependencies and PyPI Metadata
- Added missing dependencies: `fastapi`, `scikit-learn`, `joblib`, `phe`.
- Validated compatibility with Python 3.10 and 3.11.
- Updated `setup.py` to support PyPI publishing with long description and entry point.
[1.0.14] - 2025-05-30
✅ [New] Cifer CLI Agent & Kernel Integration
- Added `cifer` CLI with subcommands:
  - `agent-ace` – Run Flask server to download & execute Jupyter Notebooks
  - `register-kernel` – Automatically register a Jupyter kernel for the current Conda environment
  - `download-notebook`, `sync`, and `train` – Utility commands for notebook management and testing
- Introduced auto-registration for 🧠 Cifer AI Kernel (`cifer-kernel`) on all CLI usage
- Executed notebooks are now forced to run using the `cifer-kernel` for consistent environment behavior
- Flask agent `/run_notebook` endpoint downloads, executes, and opens notebooks inside Jupyter
[1.0.13] - 2025-05-10
✅ [New] Homomorphic Encryption (HE) Support
- Added `use_encryption=True` flag in both `CiferClient` and `CiferServer`
- Integrated Paillier encryption using the `phe` library to secure model weights
- Client now generates a keypair (`public_key`, `private_key`) and encrypts weights before upload
- Encrypted model weights are uploaded via the new `/upload_encrypted_model` API
✅ [New] Server-Side Encrypted Model Aggregation
- Added `fetch_encrypted_models()` to retrieve encrypted weights from clients
- Implemented `encrypted_fed_avg()` to perform homomorphic FedAvg without decrypting
- Encrypted aggregation output is saved as `aggregated_encrypted_weights.pkl` for client-side decryption
✅ [New] PHP/CodeIgniter API Enhancements
- Added new API endpoint: `get_encrypted_client_models($project_id)` to fetch encrypted models only
- Validates and stores encrypted models in the `model_updates` table
- Automatically updates the project status to "Testing in Progress" when a model is uploaded
✅ [Fixes] Server Run Script Improvements
- Automatically creates `model_path` and `dataset_path` if not present
- Added `USE_ENCRYPTION` flag in the run script to easily toggle encryption mode
⚙️ Dependencies
- `phe>=1.5.0` for Paillier homomorphic encryption
- `tensorflow>=2.0`, `numpy>=1.19`
[1.0.8] - 2025-04-11
Added
- ✨ Integrated `flask-cors` to support browser-based communication with the local Agent
- 🌐 Added support for launching Jupyter notebooks via either `localhost` or a remote `open_url`
- 📦 Included JavaScript client snippet for calling the agent directly from a web page
- 🧪 Added support for Homomorphic Encryption workflows in the agent-client pipeline
Improved
- 🧠 Refactored agent logic to dynamically handle notebook URLs and browser launch targets
- 🔐 Enhanced agent's compatibility with encrypted notebook execution scenarios using homomorphic encryption
- 📁 Improved compatibility with both local Jupyter and server-proxied environments (e.g., `/notebook` on `workspace.cifer.ai`)
Fixed
- ✅ Corrected hardcoded browser path (`/notebooks/notebooks/filename`) to the proper rendering path
[1.0.6] - 2025-03-23
Fixed
- 🛠️ Resolved bug in data processing related to incorrect input handling.
- ✅ Improved error handling for missing or corrupted dataset files.
- ⚡ Optimized model loading process to prevent `AttributeError` in `CiferClient`.
- 🔐 Fixed issue where encrypted parameters were not being properly decrypted.
[1.0.4] - 2025-03-17
Fixed
- 🛠️ Resolved bug in data processing related to incorrect input handling.
- ✅ Improved error handling for missing or corrupted dataset files.
- ⚡ Optimized model loading process to prevent `AttributeError` in `CiferClient`.
[1.0.3] - 2025-03-11
Fixed
- Resolved bug in data processing related to incorrect input handling.
- Added WebSocket connectivity improvements to enhance stability and performance.
[1.0.2] - 2025-03-09
Fixed
- Resolved bug in data processing related to incorrect input handling.
[1.0.1] - 2025-03-07
Added
- Initial release of `cifer`
- Implements Homomorphic Encryption (LightPHE)
- API Server integration with Flask and Uvicorn
[0.1.26] - 2024-10-28
Added
- WebSocket server–client support, including WebSocket Secure (WSS): users can choose between standard WS or secure WSS communication.
- Enabled model weight encryption using Homomorphic Encryption (RSA) for secure data transmission between Client and Server, toggled with the use_homomorphic parameter.
- Added JSON Web Token (JWT) authentication via PyJWT, requiring Clients to send a token to the Server for identity verification, enhancing access control.
Fixed
- Resolved import issues by switching to absolute imports in connection_handler.py to reduce cross-package import conflicts when running the project externally.
- Resolved bug in data processing related to incorrect input handling.
[0.1.23] - 2024-10-22
Fixed
- Resolved bug in data processing related to incorrect input handling.
[0.1.22] - 2024-10-05
Fixed
- No matching distribution found for tensorflow
- Package versions have conflicting dependencies.
[0.1.19] - 2024-09-29
Added
- Add conditional TensorFlow installation based on platform
Fixed
- Resolved bug in data processing related to incorrect input handling.
[0.1.18] - 2024-09-29
Added
- Initial release of `FedServer` class that supports federated learning using gRPC.
- Added client registration functionality with `clientRegister`.
- Added model training round management with `startServer` function.
- Implemented federated averaging (FedAvg) aggregation for model weights.
- Model validation functionality with `__callModelValidation` method.
- Support for handling multiple clients concurrently with threading.
- Configurable server via `config.json`.

Changed
- Modularized the code for future extension and improvement.
- Created configuration options for server IP, port, and `max_receive_message_length` via the `config.json` file.
Fixed
- Optimized client handling to prevent blocking during registration and learning rounds.
[0.1.15-0.1.17] - 2024-09-14
Fixed
- Resolved bug in data processing related to incorrect input handling.
[0.1.14] - 2024-09-13
Fixed
- Resolved bug in data processing related to incorrect input handling.
[0.1.13] - 2024-09-08
Added
- Added support for TensorFlow and Hugging Face's Transformers library to enhance model training and expand compatibility with popular AI frameworks.
Fixed
- Resolved various bugs to improve system stability and performance. This update continues to build on CiferAI's federated learning and fully homomorphic encryption (FHE) framework, focusing on enhanced compatibility, privacy, and security in decentralized machine learning environments.
[0.1.11] - 2024-09-08
Changed
- Homepage: cifer.ai
- Documentation: cifer.ai/documentation
- Repository: https://github.com/CiferAI/ciferai
[0.1.10] - 2024-09-08
Changed
- Updated `README.md` to improve content and information about Cifer.
[0.0.9] - 2024-09-01
Added
- Added new feature for handling exceptions in the main module.
- Included additional error logging functionality.
[0.0.8] - 2024-08-25
Fixed
- Resolved bug in data processing related to incorrect input handling.
File details
Details for the file cifer-1.0.31.tar.gz.
File metadata
- Download URL: cifer-1.0.31.tar.gz
- Upload date:
- Size: 56.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.18
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5eff7ea301370d5eaf5c49e2691d4170a0b559d4239944e69ff431cd777682e4 |
| MD5 | 2c43b81f56cd894f0243d618322e8b11 |
| BLAKE2b-256 | 53e51298959703cacf11dced6a3c50b52e4c02e216f4b4ecced8d9d1d37fbfe0 |
File details
Details for the file cifer-1.0.31-py3-none-any.whl.
File metadata
- Download URL: cifer-1.0.31-py3-none-any.whl
- Upload date:
- Size: 42.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.18
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | c5a49c965ce2cf6de379845a23941fe6d41d172398806b94f9cc9fc1433f0fc4 |
| MD5 | 95882f5b73f8f32a6acf06e23c33d72c |
| BLAKE2b-256 | b351ca35eec6f13ff0201f51a326841271c30faf1a0235b1274507792a1c6974 |