NeuroShard

Decentralized LLM Training Network


Website · Documentation · Whitepaper · Discord · Twitter


What is NeuroShard?

NeuroShard is a decentralized network for training large language models. Anyone can contribute GPU/CPU power and earn NEURO tokens through Proof of Neural Work.

Unlike centralized AI companies, NeuroShard distributes both the compute AND the rewards across all participants.

Key Features

  • DiLoCo Training: Distributed Low-Communication training; peers sync every 500 steps, not every step
  • Byzantine Tolerance: robust gradient aggregation (Krum, Trimmed Mean) handles malicious nodes
  • NEURO Rewards: earn tokens for contributing compute via Proof of Neural Work
  • Cryptographic Proofs: ECDSA-signed proofs ensure trustless verification
  • Web Dashboard: real-time monitoring at http://localhost:8000
  • P2P Network: decentralized peer discovery and gossip protocol
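The Byzantine-tolerance feature refers to robust aggregators such as Trimmed Mean. NeuroShard's actual aggregator isn't shown in this README; below is a minimal sketch of a coordinate-wise trimmed mean (the function name and sample gradients are illustrative) showing why a single malicious node can't drag the aggregate arbitrarily far:

```python
def trimmed_mean(vectors, trim=1):
    """Coordinate-wise trimmed mean: at each coordinate, drop the `trim`
    largest and `trim` smallest values before averaging, so up to `trim`
    malicious contributions cannot move the aggregate arbitrarily."""
    agg = []
    for coord in zip(*vectors):
        kept = sorted(coord)[trim:len(coord) - trim]
        agg.append(sum(kept) / len(kept))
    return agg

honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9], [1.0, -1.05]]
byzantine = [[1e6, -1e6]]                # one malicious gradient
print(trimmed_mean(honest + byzantine))  # stays near [1.0, -1.0]
```

With a plain mean, the single outlier would pull the aggregate to roughly [200000, -200000]; the trimmed mean discards it entirely.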

Quick Start

Installation

pip install neuroshard

Run a Node

# Get your token from neuroshard.com
neuroshard --token YOUR_TOKEN

That's it! Your node will:

  1. Connect to the network
  2. Start training model layers
  3. Earn NEURO for your contribution

Web Dashboard

Open http://localhost:8000 to see:

  • Node status and role
  • Training progress (DiLoCo inner/outer steps)
  • NEURO balance
  • Network statistics

System Requirements

  Component   Minimum    Recommended
  RAM         4 GB       8+ GB
  Python      3.9+       3.10+
  GPU         Optional   NVIDIA 8GB+ VRAM

GPU Support (Optional)

For NVIDIA GPUs with CUDA:

pip install torch --index-url https://download.pytorch.org/whl/cu118

How It Works

DiLoCo Distributed Training

NeuroShard uses DiLoCo (Distributed Low-Communication) for efficient distributed training:

┌─────────────────────────────────────────────────┐
│  INNER LOOP (500 steps - no communication)      │
│  • Each node trains independently               │
│  • Local AdamW optimization                     │
└─────────────────────────────────────────────────┘
                      ↓
┌─────────────────────────────────────────────────┐
│  OUTER LOOP (sync with peers)                   │
│  • Compute pseudo-gradient: Δθ = θ₀ - θ₅₀₀     │
│  • Gossip to peers                              │
│  • Byzantine-tolerant aggregation               │
│  • Nesterov momentum update                     │
└─────────────────────────────────────────────────┘
                      ↓
              (Repeat)

This reduces communication frequency by 500x compared to fully synchronous training!
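The two-level loop above can be sketched end-to-end on a toy one-parameter model. Everything below is illustrative: plain SGD stands in for the local AdamW optimizer, a plain mean stands in for the Byzantine-tolerant aggregation step, and the hyperparameters are made up.

```python
import random

random.seed(0)

INNER_STEPS = 500   # local steps between syncs (matches --diloco-steps)
OUTER_LR = 0.5      # illustrative outer learning rate
MOMENTUM = 0.9      # Nesterov momentum coefficient

def inner_loop(theta, steps, lr=0.01):
    """Each node trains independently: plain SGD on a toy quadratic loss
    L = (theta - 3)^2, standing in for local AdamW on the real model."""
    for _ in range(steps):
        grad = 2 * (theta - 3.0) + random.gauss(0, 0.1)  # noisy dL/dtheta
        theta -= lr * grad
    return theta

def outer_step(theta0, peer_thetas, velocity):
    """Sync point: aggregate pseudo-gradients, take a Nesterov-style step."""
    # Pseudo-gradient per peer: delta = theta_0 - theta_local (what is gossiped)
    deltas = [theta0 - t for t in peer_thetas]
    # The real network applies Byzantine-tolerant aggregation here
    # (Krum / Trimmed Mean); a plain mean keeps the sketch short.
    delta = sum(deltas) / len(deltas)
    velocity = MOMENTUM * velocity + delta
    theta = theta0 - OUTER_LR * (delta + MOMENTUM * velocity)  # lookahead step
    return theta, velocity

theta, velocity = 0.0, 0.0
for _ in range(10):                                             # 10 outer rounds
    peers = [inner_loop(theta, INNER_STEPS) for _ in range(4)]  # 4 nodes
    theta, velocity = outer_step(theta, peers, velocity)
print(round(theta, 2))  # drifts toward the optimum at 3.0
```

Note how the only values exchanged are the per-round pseudo-gradients: one message per 500 local steps, which is where the communication savings come from.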

Proof of Neural Work

Nodes earn NEURO by submitting cryptographically signed proofs of their work:

  • Training batches processed
  • Inference requests served
  • Uptime contribution
  • Data samples provided

All proofs are verified using ECDSA signatures (secp256k1).
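The proof format itself isn't documented here, so the sketch below only illustrates the signature scheme named above: textbook ECDSA over secp256k1, written from scratch for readability. The `sign`/`verify` helpers and the JSON proof fields are illustrative, and a real node should use a vetted cryptography library rather than this unhardened toy implementation.

```python
import hashlib
import secrets

# secp256k1 domain parameters (the curve named in the README)
P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p1, p2):
    """Affine point addition on y^2 = x^3 + 7 (None = point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # inverses cancel
    if p1 == p2:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P       # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    while True:
        k = secrets.randbelow(N - 1) + 1              # fresh nonce per signature
        r = scalar_mult(k, G)[0] % N
        s = pow(k, -1, N) * (z + r * priv) % N
        if r and s:
            return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    if not (0 < r < N and 0 < s < N):
        return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    w = pow(s, -1, N)
    pt = point_add(scalar_mult(z * w % N, G), scalar_mult(r * w % N, pub))
    return pt is not None and pt[0] % N == r

# A node signs a proof-of-work claim; anyone holding the public key verifies it.
priv = secrets.randbelow(N - 1) + 1
pub = scalar_mult(priv, G)
proof = b'{"node": "abc123", "batches": 500, "uptime_s": 3600}'
sig = sign(priv, proof)
print(verify(pub, proof, sig))            # True
print(verify(pub, proof + b"x", sig))     # False: any tampering breaks it
```

The point is the trust model: verification needs only the node's public key and the claimed proof bytes, so no central authority has to be trusted to vouch for the work.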


Configuration

CLI Options

neuroshard --token YOUR_TOKEN \
           --port 8000 \
           --tracker https://tracker.neuroshard.com \
           --training \
           --diloco-steps 500

  Option          Default   Description
  --token         Required  Your node authentication token
  --port          8000      HTTP server port
  --tracker       Auto      Tracker server URL
  --training      False     Enable training mode
  --diloco-steps  500       Inner steps before sync

See full CLI reference for all options.


Architecture

┌─────────────────────────────────────────────────────────────┐
│                      NeuroShard Node                        │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  NeuroLLM   │  │   DiLoCo    │  │  Proof of Neural    │  │
│  │  (Model)    │  │  Trainer    │  │  Work Ledger        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  P2P/DHT    │  │  Gradient   │  │  ECDSA Crypto       │  │
│  │  Network    │  │  Aggregator │  │  (secp256k1)        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
└─────────────────────────────────────────────────────────────┘

Links

  • Website: neuroshard.com
  • Documentation: docs.neuroshard.com
  • Whitepaper: PDF
  • Discord: discord.gg/4R49xpj7vn
  • Twitter: @shardneuro
  • PyPI: pypi.org/project/neuroshard

Contributing

We welcome contributions! Please see our Contributing Guide for details.

# Clone the repo
git clone https://github.com/Nexaroa/neuroshard.git
cd neuroshard

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

License

Apache License 2.0 - see LICENSE for details.


Train AI. Earn NEURO. Own the Network.
