# QuinkGL: Decentralized Federated Learning Framework

A decentralized gossip learning framework for P2P edge intelligence.
QuinkGL is a decentralized, peer-to-peer (P2P) machine learning framework that enables training models across distributed devices without a central server. Using gossip-based protocols, nodes exchange model updates directly with each other, achieving convergence through decentralized aggregation.
## Overview
Traditional federated learning relies on a central parameter server to aggregate model updates from all clients. QuinkGL eliminates this single point of failure and bottleneck by using gossip protocols—each node communicates with a random subset of peers each round, and updates propagate through the network organically.
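To see why this works, here is a toy sketch of decentralized averaging, with a single scalar standing in for each node's model weights. This is an illustration of the underlying idea, not QuinkGL's actual protocol or API: pairwise averaging preserves the network-wide mean while shrinking the spread, so every node converges to the same global average without any coordinator.

```python
import random

def gossip_round(values, rng):
    """One push-pull gossip round: each node contacts one random peer and
    both replace their value with the pair's average (sum-preserving)."""
    n = len(values)
    for i in range(n):
        j = rng.randrange(n - 1)
        if j >= i:          # pick a peer other than i
            j += 1
        avg = (values[i] + values[j]) / 2
        values[i] = values[j] = avg
    return values

rng = random.Random(0)
values = [float(i) for i in range(10)]    # each "node" starts with a different model
target = sum(values) / len(values)        # the ideal centralized average
for _ in range(50):
    gossip_round(values, rng)
spread = max(values) - min(values)        # shrinks toward zero over rounds
```

Because every exchange replaces two values with their mean, the global average is invariant; the rounds only redistribute it until all nodes agree.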
## Key Features
| Feature | Description |
|---|---|
| Decentralized | No central server required |
| Gossip Learning | Random walk and gossip-based aggregation |
| P2P Networking | IPv8 with NAT traversal (UDP hole punching) |
| Tunnel Fallback | Automatic relay server when direct P2P fails |
| Framework Agnostic | PyTorch, TensorFlow, or custom models |
| Multiple Strategies | Topology and aggregation strategies |
| Byzantine Resilient | Built-in robust aggregation options |
| Monitoring | Optional MCP integration for AI assistants |
## Installation

```bash
pip install quinkgl
```
For development:

```bash
git clone https://github.com/aliseyhann/QuinkGL-Gossip-Learning-Framework.git
cd QuinkGL-Gossip-Learning-Framework
pip install -e .
```
## Quick Start

```python
import asyncio
import torch.nn as nn

from quinkgl import GossipNode, PyTorchModel

# 1. Define your model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.fc2(self.relu(self.fc1(x)))

# 2. Wrap the model
model = PyTorchModel(SimpleNet(), device="cpu")

# 3. Create and run the node
async def main():
    node = GossipNode(
        node_id="alice",
        domain="mnist",
        model=model,
        port=7000,
    )
    await node.start()
    await node.run_continuous(training_data)  # training_data: your local dataset
    await node.shutdown()

asyncio.run(main())
```
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                         GossipNode                          │
│   (Production-ready node with P2P networking + fallback)    │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌──────────────┐ ┌───────────────────────┐  │
│ │ PyTorchModel│ │RandomTopology│ │ FedAvg │ FedProx      │  │
│ │ TensorFlow  │ │CyclonTopology│ │ FedAvgM │ TrimmedMean │  │
│ └─────────────┘ └──────────────┘ │ Krum │ MultiKrum      │  │
│                                  └───────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────────────────────────────────────────────────┐ │
│ │                     ModelAggregator                     │ │
│ │            (Train → Gossip → Aggregate Cycle)           │ │
│ └─────────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────────────────────────────────────────────────┐ │
│ │          IPv8 Network Layer + Tunnel Fallback           │ │
│ │            (P2P, NAT Traversal, UDP, Relay)             │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
                              │
                              │ (Optional MCP Monitoring)
                              ▼
┌─────────────────────────────────────────────────────────────┐
│  MetricsCollector ──► MCPServer ──► AI Assistant (Claude)   │
└─────────────────────────────────────────────────────────────┘
```
## Project Structure

```
QuinkGL/
├── src/quinkgl/
│   ├── core/
│   │   └── learning_node.py      # Framework node (network-agnostic)
│   ├── models/
│   │   ├── base.py               # ModelWrapper, TrainingConfig
│   │   ├── pytorch.py            # PyTorch implementation
│   │   └── tensorflow.py         # TensorFlow implementation
│   ├── topology/
│   │   ├── base.py               # TopologyStrategy base
│   │   ├── random.py             # RandomTopology
│   │   └── cyclon.py             # CyclonTopology
│   ├── aggregation/
│   │   ├── base.py               # AggregationStrategy base
│   │   ├── fedavg.py             # FedAvg implementation
│   │   ├── fedprox.py            # FedProx implementation
│   │   ├── fedavgm.py            # FedAvgM implementation
│   │   ├── trimmed_mean.py       # TrimmedMean implementation
│   │   └── krum.py               # Krum and MultiKrum implementations
│   ├── gossip/
│   │   ├── protocol.py           # Message types, validation
│   │   └── aggregator.py         # Training→gossip→aggregation cycle
│   ├── network/
│   │   ├── gossip_node.py        # GossipNode (IPv8 + tunnel)
│   │   ├── ipv8_manager.py       # IPv8 wrapper
│   │   └── gossip_community.py   # IPv8 community
│   ├── storage/
│   │   └── model_store.py        # Model checkpointing
│   ├── serialization/
│   │   └── weights.py            # Model weight serialization helpers
│   ├── data/
│   │   └── datasets.py           # Data loading utilities
│   └── mcp/
│       ├── metrics.py            # Metrics collection
│       └── server.py             # MCP server
└── tests/
```
## Package Responsibilities

- `core`: public node abstraction without transport concerns
- `gossip`: round orchestration and protocol primitives
- `network`: IPv8 transport and wire delivery
- `aggregation`: model merge strategies
- `topology`: peer selection and partial-view management
- `serialization`: model weight serialization helpers
## Public API Overview

### Core Classes

| Class | Description |
|---|---|
| `LearningNode` | Framework node without networking |
| `GossipNode` | Production node with P2P + tunnel fallback |
| `GLNode` | Backward-compatibility alias for `LearningNode` |

Use `LearningNode` when you control the transport layer yourself; use `GossipNode` when you want the built-in IPv8 networking stack.
### Model Wrappers

| Class | Description |
|---|---|
| `PyTorchModel` | Wrapper for PyTorch `nn.Module` |
| `TensorFlowModel` | Wrapper for TensorFlow/Keras models |
| `ModelWrapper` | Base class for custom wrappers |
| `TrainingConfig` | Training configuration (epochs, batch_size, lr, grad_clip) |
| `TrainingResult` | Training result with metrics |
### Topology Strategies

| Class | Description |
|---|---|
| `RandomTopology` | Random peer selection each round |
| `CyclonTopology` | Shuffling for network exploration |
| `PeerInfo` | Peer information dataclass |
| `SelectionContext` | Context for peer selection |
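For intuition about the Cyclon row above, here is a simplified, self-contained sketch of a view shuffle. It is a hypothetical illustration, not QuinkGL's `CyclonTopology` (it ignores entry ages, churn handling, and the real `PeerInfo`/`SelectionContext` types): two nodes swap random subsets of their bounded partial views, which keeps each view small while continuously mixing who knows whom.

```python
import random

VIEW_SIZE = 4      # bounded partial view kept per node
SHUFFLE_LEN = 2    # entries exchanged per shuffle

def cyclon_shuffle(a, b, views, rng):
    """Simplified Cyclon exchange: node a sends a random sample of its view
    plus a pointer to itself, b replies with a sample of its own view, and
    both merge what they received, dropping self-references and duplicates
    and trimming back to VIEW_SIZE."""
    sent_a = rng.sample(views[a], min(SHUFFLE_LEN - 1, len(views[a]))) + [a]
    sent_b = rng.sample(views[b], min(SHUFFLE_LEN, len(views[b])))

    def merge(view, incoming, self_id):
        merged = list(view)
        for p in incoming:
            if p != self_id and p not in merged:
                merged.append(p)
        return merged[-VIEW_SIZE:]   # keep the freshest entries

    views[a] = merge(views[a], sent_b, a)
    views[b] = merge(views[b], sent_a, b)

rng = random.Random(1)
n = 10
# start from a ring: each node only knows its next VIEW_SIZE neighbours
views = {i: [(i + k) % n for k in range(1, VIEW_SIZE + 1)] for i in range(n)}
for _ in range(200):
    a = rng.randrange(n)
    cyclon_shuffle(a, rng.choice(views[a]), views, rng)
```

After a few hundred shuffles the initially ring-shaped overlay is well mixed, while every view stays at its fixed size and never contains the node itself.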
### Aggregation Strategies

| Strategy | Type | Description |
|---|---|---|
| `FedAvg` | Standard | Weighted averaging by sample count |
| `FedProx` | Advanced | Proximal term for heterogeneous data |
| `FedAvgM` | Advanced | Server momentum for stability |
| `TrimmedMean` | Byzantine | Remove extreme values |
| `Krum` | Byzantine | Select most central update |
| `MultiKrum` | Byzantine | Average most central updates |
### Data Utilities

The `quinkgl.data` helpers are not part of the documented public API in this repository snapshot. The package root may expose placeholders for `DatasetLoader`, `FederatedDataSplitter`, and `DatasetInfo`, but they are not guaranteed to resolve to usable imports in the current build.
### Monitoring (Optional)

| Class | Description |
|---|---|
| `MetricsCollector` | Collect metrics from nodes |
| `MCPServer` | MCP server for AI assistant integration |
## Documentation
| Document | Description |
|---|---|
| QUINKGL_FRAMEWORK.md | Complete user guide with all features and examples |
## MCP Integration

QuinkGL includes built-in support for MCP (Model Context Protocol), enabling AI assistants like Claude to monitor your experiments:

```python
from quinkgl import MetricsCollector, MCPServer

collector = MetricsCollector()
collector.attach(gossip_node)

server = MCPServer(collector)
await server.start()
```
Available MCP Tools:

- `get_nodes` – List all monitored nodes
- `get_training_progress` – Training status per node
- `get_accuracy_history` – Accuracy over rounds
- `get_model_exchanges` – Model send/receive records
- `analyze_convergence` – Network convergence analysis
## Requirements
- Python 3.9+
- PyTorch 1.9+ (optional, for PyTorchModel)
- TensorFlow 2.x (optional, for TensorFlowModel)
- IPv8 2.0+ (for P2P networking)
## License
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Copyright 2026 Ali Seyhan, Baki Turhan
## Contributing
Contributions are welcome! Please read our contributing guidelines and submit pull requests to the main repository.
## File details

Details for the file `quinkgl-0.2.6.tar.gz`.

### File metadata
- Download URL: quinkgl-0.2.6.tar.gz
- Upload date:
- Size: 147.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `325cb4c37fcab788881cee63771a74566d80aa62a138d3f06337eff231a87598` |
| MD5 | `6e8c85383881641647c45e99dc4ec50b` |
| BLAKE2b-256 | `8b71e818b00550e5d49eedb82cd4cf8e45b661fb62b7aa478897317575ea2558` |
## File details

Details for the file `quinkgl-0.2.6-py3-none-any.whl`.

### File metadata
- Download URL: quinkgl-0.2.6-py3-none-any.whl
- Upload date:
- Size: 107.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `867eddb5157a32482b340717a712c3a237dbfbc6a495f1fa615aa7f22ae2fad6` |
| MD5 | `8417fe50dcacb7625c8f3bff4eead394` |
| BLAKE2b-256 | `dcfd9365089dcd749b435131c29338cc4a6885cf67357cf1c3745df4eabade14` |