
AIBT: Adversarial Information Bottleneck Training for Privacy-Preserving Federated Learning

PyPI version 1.0.0 · Python 3.8+ · PyTorch · License: MIT

Privacy-preserving federated learning with information-theoretic privacy guarantees

Overview

AIBT (Adversarial Information Bottleneck Training) is a privacy-preserving federated learning framework that combines information bottleneck theory with adversarial training to achieve strong privacy guarantees while maintaining high model utility.

Key Features

  • 🔒 Privacy-Preserving: Combines Information Bottleneck (IB) with adversarial training
  • 🌐 Federated Learning: Distributed training with FedAvg aggregation
  • 🛡️ Attack Resistant: Defends against Membership and Attribute Inference attacks
  • 🚀 Easy to Use: Simple API for training and evaluation
  • 📊 Built-in Metrics: Privacy attack evaluation included

Installation

pip install aibt-fl

From Source

git clone https://github.com/aibt/aibt.git
cd aibt/aibt_package
pip install -e .

Quick Start

import torch
from aibt import AIBTFL, AIBTModel, create_aibt_model, evaluate_privacy

# Create an AIBT model for your data
model = create_aibt_model(
    input_dim=13,           # Number of input features
    num_classes=2,          # Number of output classes
    latent_dim=64,          # Latent space dimension
    num_sensitive_classes=2 # Number of sensitive attribute classes
)

# Initialize AIBT Federated Learning
aibt = AIBTFL(
    model=model,
    num_clients=10,
    device="cpu",
    learning_rate=0.001,
    lambda_kl=0.01,     # KL divergence weight (Information Bottleneck)
    lambda_adv=1.0,     # Adversarial loss weight
)

# Setup clients with their data
aibt.setup_clients(
    client_datasets=client_data,        # List of (X, y) tuples per client
    sensitive_data=client_sensitive     # Optional: sensitive attributes
)

# Train with federated learning
history = aibt.train(
    num_rounds=100,
    local_epochs=5,
    test_data=(X_test, y_test),
    verbose=True
)

# Evaluate privacy
privacy_metrics = evaluate_privacy(
    model=model,
    train_data=(X_train, y_train),
    test_data=(X_test, y_test),
    device="cpu"
)

print(f"Membership Inference AUC: {privacy_metrics['membership_auc']:.4f}")
print(f"Privacy preserved: {privacy_metrics['membership_auc'] < 0.55}")
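The Quick Start assumes that `client_data`, `client_sensitive`, and a held-out test split already exist. A minimal synthetic setup matching the shapes above (13 input features, binary task and sensitive attribute) might look like this; the tensor shapes are the only thing the snippet commits to:

```python
import torch

torch.manual_seed(0)
num_clients, samples_per_client, input_dim = 10, 100, 13

# One (X, y) tuple per client, as expected by setup_clients().
client_data = [
    (torch.randn(samples_per_client, input_dim),
     torch.randint(0, 2, (samples_per_client,)))
    for _ in range(num_clients)
]

# Optional binary sensitive attribute per client (e.g., a protected feature).
client_sensitive = [torch.randint(0, 2, (samples_per_client,))
                    for _ in range(num_clients)]

# Held-out test split for evaluation.
X_test = torch.randn(200, input_dim)
y_test = torch.randint(0, 2, (200,))
```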

Architecture

AIBT combines three key components:

Input → Encoder → Compressed Representation → Predictor → Output
                         ↓
               Gradient Reversal Layer
                         ↓
        Adversary (tries to infer sensitive info)

Loss Function:

L = L_task + λ₁·L_KL − λ₂·L_adv

  • L_task: Task-specific loss (e.g., cross-entropy)
  • L_KL: KL divergence for information bottleneck compression
  • L_adv: Adversarial loss for privacy (with gradient reversal)
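A minimal PyTorch sketch of this objective (the function name and signature are illustrative, not the package's internal API). It assumes a diagonal-Gaussian encoder with parameters (mu, logvar) and that the adversary's logits passed through a gradient reversal layer, so the minus sign in the formula is realized in the backward pass and the term is simply added here:

```python
import torch
import torch.nn.functional as F

def aibt_loss(task_logits, y, mu, logvar, adv_logits, s,
              lambda_kl=0.01, lambda_adv=1.0):
    # Task loss: standard cross-entropy on the predictor's output.
    l_task = F.cross_entropy(task_logits, y)
    # Information-bottleneck term: closed-form KL(q(z|x) || N(0, I))
    # for a diagonal-Gaussian encoder.
    l_kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    # Adversary's cross-entropy on the sensitive attribute s. With a
    # gradient reversal layer upstream, adding this term makes the
    # encoder *maximize* it, i.e. the "- lambda_adv * L_adv" above.
    l_adv = F.cross_entropy(adv_logits, s)
    return l_task + lambda_kl * l_kl + lambda_adv * l_adv
```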

API Reference

Core Classes

AIBTFL

Main federated learning class with AIBT training.

AIBTFL(
    model,                      # AIBTModel instance
    num_clients=10,             # Number of federated clients
    device="cpu",               # Device (cpu/cuda)
    learning_rate=0.001,        # Learning rate
    batch_size=32,              # Batch size
    lambda_kl=0.01,             # KL divergence weight
    lambda_adv=1.0,             # Adversarial loss weight
    lambda_grl=1.0,             # Gradient reversal strength
)

AIBTModel

Complete AIBT model with encoder, predictor, and adversary.

AIBTModel(
    encoder,            # Encoder network
    predictor,          # Task predictor
    adversary,          # Adversary network
    lambda_kl=0.01,     # KL weight
    lambda_adv=1.0,     # Adversarial weight
    lambda_grl=1.0,     # GRL strength
)

Model Components

  • GradientReversalLayer: Reverses gradients during backprop for adversarial training
  • VariationalEncoder: Information bottleneck encoder with reparameterization
  • MLPEncoder: MLP encoder for tabular data
  • Predictor: Task prediction head
  • Adversary: Sensitive attribute classifier
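The gradient reversal layer can be sketched in a few lines of PyTorch. This is a generic Ganin-style implementation (identity forward, negated and scaled gradient backward); the package's own version may differ in details:

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient
    by -lambda_grl in the backward pass (Ganin & Lempitsky, 2016)."""

    @staticmethod
    def forward(ctx, x, lambda_grl):
        ctx.lambda_grl = lambda_grl
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is needed for lambda_grl itself, hence None.
        return -ctx.lambda_grl * grad_output, None

def grad_reverse(x, lambda_grl=1.0):
    return GradientReversal.apply(x, lambda_grl)
```

Placed between the encoder's output and the adversary, this makes the adversary minimize its loss while the encoder receives the opposite gradient and learns representations the adversary cannot exploit.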

Privacy Metrics

from aibt import evaluate_privacy, evaluate_membership_inference, evaluate_attribute_inference

# Complete privacy evaluation
metrics = evaluate_privacy(model, train_data, test_data, sensitive_train, sensitive_test)

# Individual attacks
mia_metrics = evaluate_membership_inference(model, train_data, test_data)
aia_metrics = evaluate_attribute_inference(model, X, sensitive_attrs)
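A membership-inference AUC near 0.5 means the attacker cannot distinguish training members from non-members, while values near 1.0 indicate full leakage. As an illustration of the metric itself (not the package's attack implementation), the AUC of attack scores can be computed with a rank-based Mann–Whitney formula (ties ignored for simplicity):

```python
import numpy as np

def membership_auc(member_scores, nonmember_scores):
    """AUC of an attack that assigns higher scores to members."""
    scores = np.concatenate([member_scores, nonmember_scores])
    labels = np.concatenate([np.ones_like(member_scores),
                             np.zeros_like(nonmember_scores)])
    # Rank all scores ascending (1-based ranks).
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(member_scores), len(nonmember_scores)
    # Mann-Whitney U statistic, normalized to [0, 1].
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```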

Hyperparameters

Parameter      Default  Description
lambda_kl      0.01     KL divergence weight (compression)
lambda_adv     1.0      Adversarial loss weight (privacy)
lambda_grl     1.0      Gradient reversal strength
latent_dim     128      Latent space dimension
learning_rate  0.001    Optimizer learning rate
batch_size     32       Training batch size

Hyperparameter Tuning

  • Higher lambda_kl: More compression, potentially lower accuracy
  • Higher lambda_adv: Stronger privacy, may affect utility
  • Recommended range: lambda_kl ∈ [0.005, 0.02], lambda_adv ∈ [0.5, 2.0]
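A common pattern is to sweep these ranges, gate each configuration on its membership-inference AUC, and pick the most accurate one that passes. A sketch of the selection logic with placeholder results (the numbers below are illustrative, not measured; in practice each entry would come from aibt.train() followed by evaluate_privacy()):

```python
# Placeholder (accuracy, membership AUC) per (lambda_kl, lambda_adv).
results = {
    (0.005, 0.5): (0.84, 0.58),
    (0.01,  1.0): (0.82, 0.53),
    (0.02,  2.0): (0.79, 0.51),
}

# Keep only configurations that pass the privacy gate, then take the
# most accurate one among them.
private = {cfg: m for cfg, m in results.items() if m[1] < 0.55}
best = max(private, key=lambda cfg: private[cfg][0])
```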

Citation

If you use AIBT in your research, please cite:

@article{aibt2025,
  title={Adversarial Information Bottleneck Training for Privacy-Preserving Federated Learning},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2025}
}

References

  • Tishby et al., "The Information Bottleneck Method", Allerton 1999
  • Ganin & Lempitsky, "Domain-Adversarial Training of Neural Networks", JMLR 2016
  • McMahan et al., "Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS 2017

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Download files

Download the file for your platform.

Source Distribution

aibt_fl-1.0.0.tar.gz (21.4 kB)

Uploaded Source

Built Distribution

aibt_fl-1.0.0-py3-none-any.whl (19.9 kB)

Uploaded Python 3

File details

Details for the file aibt_fl-1.0.0.tar.gz.

File metadata

  • Download URL: aibt_fl-1.0.0.tar.gz
  • Upload date:
  • Size: 21.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.3

File hashes

Hashes for aibt_fl-1.0.0.tar.gz
Algorithm    Hash digest
SHA256       7e4f37970f56c504d3b94ccdc6e1ce721c3d1dee3f806733feb44a50a24c5801
MD5          9b86320a39c27e4fb9d3c7fd3adcf84a
BLAKE2b-256  05f48f052d148d9d7a1543fe96b25a14b4aaf2b2b8fedfb5309b15aa90d604de

File details

Details for the file aibt_fl-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: aibt_fl-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 19.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.3

File hashes

Hashes for aibt_fl-1.0.0-py3-none-any.whl
Algorithm    Hash digest
SHA256       b7d65e4d87acf5d3702f40acac254d2d4bc3815453fcae4c0bb0f5f81a0b76b2
MD5          ad3664e0b3d8186756c3f62c291a49ed
BLAKE2b-256  e70559cadfff0615fad413ca6818d60666484a6011949698fd4bb4818671dfc5
