# NeuroShard

Decentralized LLM Training Network

Website • Documentation • Whitepaper • Discord • Twitter

## What is NeuroShard?
NeuroShard is a decentralized network for training large language models. Anyone can contribute GPU/CPU power and earn NEURO tokens through Proof of Neural Work.
Unlike centralized AI companies, NeuroShard distributes both the compute AND the rewards across all participants.
## Key Features
| Feature | Description |
|---|---|
| DiLoCo Training | Distributed Low-Communication training - sync every 500 steps, not every step |
| Byzantine Tolerance | Robust gradient aggregation (Krum, Trimmed Mean) handles malicious nodes |
| NEURO Rewards | Earn tokens for contributing compute via Proof of Neural Work |
| Cryptographic Proofs | ECDSA-signed proofs ensure trustless verification |
| Web Dashboard | Real-time monitoring at http://localhost:8000 |
| P2P Network | Decentralized peer discovery and gossip protocol |
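The Byzantine-tolerant Trimmed Mean aggregation listed above can be sketched in a few lines. This is a toy illustration, not NeuroShard's actual implementation: it shows how dropping the extreme values in each coordinate bounds the influence of a malicious node.

```python
import numpy as np

def trimmed_mean(updates, trim=1):
    # Coordinate-wise trimmed mean: sort each coordinate across nodes,
    # drop the `trim` largest and `trim` smallest values, average the rest.
    # An attacker must control more than `trim` nodes to bias the result.
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[trim:len(updates) - trim].mean(axis=0)

# Four honest updates clustered together, one Byzantine outlier.
honest = [np.array([1.0, -2.0]) + 0.1 * i for i in range(4)]
malicious = [np.array([1e6, -1e6])]
agg = trimmed_mean(honest + malicious, trim=1)
# agg stays close to the honest cluster despite the huge outlier.
```

With a plain mean, the single malicious update would drag the aggregate off by hundreds of thousands; the trimmed mean discards it entirely.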
## Quick Start

### Installation

```shell
pip install neuroshard
```

### Run a Node

```shell
# Get your token from neuroshard.com
neuroshard --token YOUR_TOKEN
```
That's it! Your node will:
- Connect to the network
- Start training model layers
- Earn NEURO for your contribution
### Web Dashboard

Open http://localhost:8000 to see:
- Node status and role
- Training progress (DiLoCo inner/outer steps)
- NEURO balance
- Network statistics
## System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 4 GB | 8+ GB |
| Python | 3.9+ | 3.10+ |
| GPU | Optional | NVIDIA 8GB+ VRAM |
### GPU Support (Optional)

For NVIDIA GPUs with CUDA:

```shell
pip install torch --index-url https://download.pytorch.org/whl/cu118
```
## How It Works

### DiLoCo Distributed Training
NeuroShard uses DiLoCo (Distributed Low-Communication) for efficient distributed training:
```
┌─────────────────────────────────────────────────┐
│ INNER LOOP (500 steps - no communication)       │
│ • Each node trains independently                │
│ • Local AdamW optimization                      │
└─────────────────────────────────────────────────┘
                       ↓
┌─────────────────────────────────────────────────┐
│ OUTER LOOP (sync with peers)                    │
│ • Compute pseudo-gradient: Δθ = θ₀ - θ₅₀₀       │
│ • Gossip to peers                               │
│ • Byzantine-tolerant aggregation                │
│ • Nesterov momentum update                      │
└─────────────────────────────────────────────────┘
                       ↓
                   (Repeat)
```
Syncing once per 500 steps means roughly 500× fewer communication rounds than fully synchronous training.
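The inner/outer loop above can be sketched end to end. This is a toy version under simplifying assumptions: a quadratic loss with plain gradient descent stands in for AdamW on real data, and pseudo-gradients are averaged without Byzantine filtering.

```python
import numpy as np

def inner_loop(theta, target, lr=0.1, steps=500):
    # INNER LOOP: 500 local steps, no communication.
    # Toy loss 0.5 * ||theta - target||^2 stands in for AdamW on real data.
    for _ in range(steps):
        theta = theta - lr * (theta - target)
    return theta

def outer_step(theta, node_targets, momentum, outer_lr=0.7, beta=0.9):
    # OUTER LOOP: each node trains independently, then reports its
    # pseudo-gradient Δθ = θ₀ - θ₅₀₀.
    deltas = [theta - inner_loop(theta.copy(), t) for t in node_targets]
    delta = np.mean(np.stack(deltas), axis=0)       # aggregation (plain mean here)
    momentum = beta * momentum + delta              # momentum buffer
    theta = theta - outer_lr * (beta * momentum + delta)  # Nesterov-style update
    return theta, momentum

# Three simulated nodes whose local data optima average out to 1.0.
theta = np.zeros(4)
momentum = np.zeros(4)
targets = [np.full(4, 1.0), np.full(4, 1.2), np.full(4, 0.8)]
for _ in range(5):
    theta, momentum = outer_step(theta, targets, momentum)
# After a few outer rounds, theta approaches the consensus optimum.
```

Note that each outer round moves 500 inner steps' worth of progress across the network in a single exchange, which is where the communication savings come from.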
### Proof of Neural Work
Nodes earn NEURO by submitting cryptographically signed proofs of their work:
- Training batches processed
- Inference requests served
- Uptime contribution
- Data samples provided
All proofs are verified using ECDSA signatures (secp256k1).
## Configuration

### CLI Options

```shell
neuroshard --token YOUR_TOKEN \
    --port 8000 \
    --tracker https://tracker.neuroshard.com \
    --training \
    --diloco-steps 500
```
| Option | Default | Description |
|---|---|---|
| `--token` | Required | Your node authentication token |
| `--port` | `8000` | HTTP server port |
| `--tracker` | Auto | Tracker server URL |
| `--training` | `False` | Enable training mode |
| `--diloco-steps` | `500` | Inner steps before sync |
See full CLI reference for all options.
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                       NeuroShard Node                       │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐     │
│ │  NeuroLLM   │ │   DiLoCo    │ │   Proof of Neural   │     │
│ │   (Model)   │ │   Trainer   │ │     Work Ledger     │     │
│ └─────────────┘ └─────────────┘ └─────────────────────┘     │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐     │
│ │   P2P/DHT   │ │  Gradient   │ │    ECDSA Crypto     │     │
│ │   Network   │ │  Aggregator │ │    (secp256k1)      │     │
│ └─────────────┘ └─────────────┘ └─────────────────────┘     │
└─────────────────────────────────────────────────────────────┘
```
## Documentation
- Whitepaper - Technical whitepaper (PDF)
- Getting Started - First steps
- Running a Node - Detailed setup
- Architecture - System design
- Economics - NEURO tokenomics
- API Reference - SDK & endpoints
## Links
| Resource | Link |
|---|---|
| Website | neuroshard.com |
| Documentation | docs.neuroshard.com |
| Whitepaper | |
| Discord | discord.gg/4R49xpj7vn |
| Twitter | @shardneuro |
| PyPI | pypi.org/project/neuroshard |
## Contributing

We welcome contributions! Please see our Contributing Guide for details.

```shell
# Clone the repo
git clone https://github.com/Nexaroa/neuroshard.git
cd neuroshard

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest
```
## License
Apache License 2.0 - see LICENSE for details.
Train AI. Earn NEURO. Own the Network.