NeuroShard

Decentralized LLM Training Network


Website · Documentation · Whitepaper · Discord · Twitter


What is NeuroShard?

NeuroShard is a decentralized network for training large language models. Anyone can contribute GPU/CPU power and earn NEURO tokens through Proof of Neural Work.

Unlike centralized AI companies, NeuroShard distributes both the compute AND the rewards across all participants.

Key Features

  • DiLoCo Training: Distributed Low-Communication training; nodes sync every 500 steps, not every step
  • Byzantine Tolerance: robust gradient aggregation (Krum, trimmed mean) tolerates malicious nodes
  • NEURO Rewards: earn tokens for contributing compute via Proof of Neural Work
  • Cryptographic Proofs: ECDSA-signed proofs enable trustless verification
  • Web Dashboard: real-time monitoring at http://localhost:8000
  • P2P Network: decentralized peer discovery and gossip protocol
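As a sketch of the Byzantine-tolerance idea, here is a coordinate-wise trimmed mean in plain Python. The function name and trim level are illustrative, not NeuroShard's actual aggregator API:

```python
def trimmed_mean(vectors, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and
    `trim` smallest values per coordinate, then average the rest.
    A single malicious node cannot move the result arbitrarily."""
    n = len(vectors)
    if n <= 2 * trim:
        raise ValueError("need more than 2 * trim vectors")
    dim = len(vectors[0])
    result = []
    for i in range(dim):
        coords = sorted(v[i] for v in vectors)
        kept = coords[trim:n - trim]  # discard extremes
        result.append(sum(kept) / len(kept))
    return result

# Three honest nodes report gradients near [1, 2]; one Byzantine
# node reports an extreme value, which the trim discards.
grads = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [100.0, -50.0]]
print(trimmed_mean(grads, trim=1))
```

Krum works differently (it selects the vector closest to its neighbors rather than averaging), but both approaches bound the influence of any single outlier.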

Quick Start

Installation

pip install neuroshard

Run a Node

# Get your token from neuroshard.com
neuroshard --token YOUR_TOKEN

That's it! Your node will:

  1. Connect to the network
  2. Start training model layers
  3. Earn NEURO for your contribution

Web Dashboard

Open http://localhost:8000 to see:

  • Node status and role
  • Training progress (DiLoCo inner/outer steps)
  • NEURO balance
  • Network statistics

System Requirements

Component   Minimum    Recommended
RAM         4 GB       8+ GB
Python      3.9        3.10+
GPU         Optional   NVIDIA, 8+ GB VRAM

GPU Support (Optional)

For NVIDIA GPUs with CUDA:

pip install torch --index-url https://download.pytorch.org/whl/cu118

How It Works

DiLoCo Distributed Training

NeuroShard uses DiLoCo (Distributed Low-Communication) for efficient distributed training:

┌─────────────────────────────────────────────────┐
│  INNER LOOP (500 steps - no communication)      │
│  • Each node trains independently               │
│  • Local AdamW optimization                     │
└─────────────────────────────────────────────────┘
                      ↓
┌─────────────────────────────────────────────────┐
│  OUTER LOOP (sync with peers)                   │
│  • Compute pseudo-gradient: Δθ = θ₀ - θ₅₀₀     │
│  • Gossip to peers                              │
│  • Byzantine-tolerant aggregation               │
│  • Nesterov momentum update                     │
└─────────────────────────────────────────────────┘
                      ↓
              (Repeat)

Because peers exchange updates once every 500 local steps instead of after every step, this cuts synchronization traffic by roughly 500x compared to fully synchronous training!
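The inner/outer loop above can be sketched in a few lines of plain Python. The quadratic toy objective, the two simulated nodes, and the outer-loop hyperparameters are illustrative stand-ins, not the network's actual values:

```python
def inner_loop(theta, steps=500, lr=0.01):
    """Stand-in for 500 local optimizer steps (plain SGD on f(x) = x²
    here; the real node runs AdamW on the language model)."""
    for _ in range(steps):
        grad = [2 * t for t in theta]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

def outer_step(theta0, local_thetas, velocity, outer_lr=0.7, mu=0.9):
    """Average the pseudo-gradients Δθ = θ₀ − θ_local from all peers,
    then apply a Nesterov-style momentum update to the global params."""
    n = len(local_thetas)
    delta = [sum(theta0[i] - lt[i] for lt in local_thetas) / n
             for i in range(len(theta0))]
    velocity = [mu * v + d for v, d in zip(velocity, delta)]
    theta0 = [t - outer_lr * (mu * v + d)  # Nesterov look-ahead
              for t, v, d in zip(theta0, velocity, delta)]
    return theta0, velocity

theta = [4.0, -2.0]
velocity = [0.0, 0.0]
for _ in range(3):  # three outer rounds, two simulated nodes each
    local_thetas = [inner_loop(theta) for _ in range(2)]
    theta, velocity = outer_step(theta, local_thetas, velocity)
print(theta)  # parameters move toward the minimum at [0, 0]
```

Note that only `outer_step` involves communication; everything inside `inner_loop` is node-local, which is where the bandwidth savings come from.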

Proof of Neural Work

Nodes earn NEURO by submitting cryptographically signed proofs of their work:

  • Training batches processed
  • Inference requests served
  • Uptime contribution
  • Data samples provided

All proofs are verified using ECDSA signatures (secp256k1).
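The following is a minimal sketch of how such a proof could be canonicalized and digested before signing. The field names are hypothetical, and in the real network the node's secp256k1 key would produce an ECDSA signature over this digest rather than just printing it:

```python
import hashlib
import json

def proof_digest(proof: dict) -> str:
    """SHA-256 digest of a canonical-JSON encoding of a work proof.
    Sorting keys makes the digest independent of field order, so
    signer and verifier always hash identical bytes."""
    canonical = json.dumps(proof, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical proof contents mirroring the reward categories above.
proof = {
    "node_id": "node-abc123",
    "batches_trained": 128,
    "inference_requests": 12,
    "uptime_seconds": 3600,
    "data_samples": 512,
}
digest = proof_digest(proof)
print(digest)  # 64 hex chars; this is what the node's key would sign
```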


Configuration

CLI Options

neuroshard --token YOUR_TOKEN \
           --port 8000 \
           --tracker https://tracker.neuroshard.com \
           --training \
           --diloco-steps 500

Option          Default     Description
--token         (required)  Your node authentication token
--port          8000        HTTP server port
--tracker       (auto)      Tracker server URL
--training      False       Enable training mode
--diloco-steps  500         Inner steps before each sync

See full CLI reference for all options.


Architecture

┌─────────────────────────────────────────────────────────────┐
│                      NeuroShard Node                        │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  NeuroLLM   │  │   DiLoCo    │  │  Proof of Neural    │  │
│  │  (Model)    │  │  Trainer    │  │  Work Ledger        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  P2P/DHT    │  │  Gradient   │  │  ECDSA Crypto       │  │
│  │  Network    │  │  Aggregator │  │  (secp256k1)        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
└─────────────────────────────────────────────────────────────┘


Links

  • Website: neuroshard.com
  • Documentation: docs.neuroshard.com
  • Whitepaper: PDF
  • Discord: discord.gg/4R49xpj7vn
  • Twitter: @shardneuro
  • PyPI: pypi.org/project/neuroshard

Contributing

We welcome contributions! Please see our Contributing Guide for details.

# Clone the repo
git clone https://github.com/Nexaroa/neuroshard.git
cd neuroshard

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

License

Apache License 2.0 - see LICENSE for details.


Train AI. Earn NEURO. Own the Network.

Project details

Source distribution: nexaroa-0.2.74.tar.gz (563.8 kB)

  • Uploaded via: twine/6.2.0, CPython/3.10.12 (Trusted Publishing: no)
  • SHA256: 8524c00a2cf5fca5a37c3bc602646c827ee9e591a2c3c308433b27156fa8f18d
  • MD5: 032dd953413f458eb2a98197842f2ad2
  • BLAKE2b-256: 168cfb379423febf982b1becf5be0cfad6331fd1dbac99cec64437ac319f3177

Built distribution: nexaroa-0.2.74-py3-none-any.whl (471.9 kB, Python 3)

  • Uploaded via: twine/6.2.0, CPython/3.10.12 (Trusted Publishing: no)
  • SHA256: a7883e1eabfce6c8063bf6a10a1a09ed7f2ee9453b46977a038c8e90335aa705
  • MD5: 165a1f63ad3ec7a31381a183dd300e2b
  • BLAKE2b-256: 22fe03c387deef77bbd4c5cfed48c4a694a7fa82038640a9f462dec5fe3602a7
