NeuroShard

Decentralized LLM Training Network


Website · Documentation · Whitepaper · Discord · Twitter


What is NeuroShard?

NeuroShard is a decentralized network for training large language models. Anyone can contribute GPU/CPU power and earn NEURO tokens through Proof of Neural Work.

Unlike centralized AI companies, NeuroShard distributes both the compute AND the rewards across all participants.

Key Features

| Feature | Description |
| --- | --- |
| DiLoCo Training | Distributed Low-Communication training: sync every 500 steps, not every step |
| Byzantine Tolerance | Robust gradient aggregation (Krum, Trimmed Mean) handles malicious nodes |
| NEURO Rewards | Earn tokens for contributing compute via Proof of Neural Work |
| Cryptographic Proofs | ECDSA-signed proofs ensure trustless verification |
| Web Dashboard | Real-time monitoring at http://localhost:8000 |
| P2P Network | Decentralized peer discovery and gossip protocol |
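The Byzantine tolerance mentioned above can be illustrated with a coordinate-wise trimmed mean, one of the aggregators named in the table. This is a minimal sketch using plain Python lists as gradients, not NeuroShard's actual implementation:

```python
# Coordinate-wise trimmed mean: for each coordinate, drop the k largest
# and k smallest reported values before averaging, which bounds the
# influence any single malicious node can have. Illustrative sketch only.

def trimmed_mean(gradients, k):
    """Aggregate a list of equal-length gradient vectors, trimming k outliers per side."""
    if len(gradients) <= 2 * k:
        raise ValueError("need more than 2k gradients to trim k per side")
    dim = len(gradients[0])
    result = []
    for i in range(dim):
        coords = sorted(g[i] for g in gradients)
        kept = coords[k:len(coords) - k] if k else coords
        result.append(sum(kept) / len(kept))
    return result

# Three honest nodes report similar gradients; one attacker reports garbage.
grads = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [100.0, -50.0]]
print(trimmed_mean(grads, k=1))  # the attacker's extreme values are trimmed away
```

With k=1 the attacker's 100.0 and -50.0 never enter the average; the aggregate stays close to the honest nodes' values.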

Quick Start

Installation

pip install neuroshard

Run a Node

# Get your token from neuroshard.com
neuroshard --token YOUR_TOKEN

That's it! Your node will:

  1. Connect to the network
  2. Start training model layers
  3. Earn NEURO for your contribution

Web Dashboard

Open http://localhost:8000 to see:

  • Node status and role
  • Training progress (DiLoCo inner/outer steps)
  • NEURO balance
  • Network statistics

System Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| RAM | 4 GB | 8+ GB |
| Python | 3.9+ | 3.10+ |
| GPU | Optional | NVIDIA, 8 GB+ VRAM |

GPU Support (Optional)

For NVIDIA GPUs with CUDA:

pip install torch --index-url https://download.pytorch.org/whl/cu118

How It Works

DiLoCo Distributed Training

NeuroShard uses DiLoCo (Distributed Low-Communication) for efficient distributed training:

┌─────────────────────────────────────────────────┐
│  INNER LOOP (500 steps - no communication)      │
│  • Each node trains independently               │
│  • Local AdamW optimization                     │
└─────────────────────────────────────────────────┘
                      ↓
┌─────────────────────────────────────────────────┐
│  OUTER LOOP (sync with peers)                   │
│  • Compute pseudo-gradient: Δθ = θ₀ - θ₅₀₀     │
│  • Gossip to peers                              │
│  • Byzantine-tolerant aggregation               │
│  • Nesterov momentum update                     │
└─────────────────────────────────────────────────┘
                      ↓
              (Repeat)

With 500 inner steps between syncs, this cuts network communication by roughly 500x compared to fully synchronous training!
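The inner/outer structure above can be sketched in a few lines of Python. This is a toy scalar version under simplifying assumptions: real nodes train a neural network with AdamW locally and gossip pseudo-gradients over the network, and the targets, learning rates, and plain-mean aggregation here are illustrative choices, not NeuroShard's values:

```python
# Toy DiLoCo loop on a single scalar parameter theta. Each "node" runs
# INNER_STEPS of local SGD toward its own data, then the pseudo-gradients
# (start value minus end value) are averaged and applied with Nesterov
# momentum in the outer step.

INNER_STEPS = 500          # inner loop length: no communication
OUTER_ROUNDS = 10
LR_INNER, LR_OUTER, MOMENTUM = 0.01, 0.7, 0.9

targets = [3.0, 3.2, 2.8]  # each node sees slightly different data
theta = 0.0                # shared global parameter
velocity = 0.0             # outer Nesterov momentum buffer

for _ in range(OUTER_ROUNDS):
    deltas = []
    for target in targets:                # each node, independently
        local = theta
        for _ in range(INNER_STEPS):      # inner loop: local SGD only
            grad = 2 * (local - target)   # gradient of (local - target)**2
            local -= LR_INNER * grad
        deltas.append(theta - local)      # pseudo-gradient: Δθ = θ₀ - θ₅₀₀
    delta = sum(deltas) / len(deltas)     # aggregate (plain mean here)
    velocity = MOMENTUM * velocity + delta
    theta -= LR_OUTER * (delta + MOMENTUM * velocity)  # Nesterov outer step

print(theta)  # converges near the mean target, 3.0
```

Only one scalar per node crosses the (simulated) network per round, while 500 gradient steps happen locally, which is the communication saving the diagram describes.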

Proof of Neural Work

Nodes earn NEURO by submitting cryptographically signed proofs of their work:

  • Training batches processed
  • Inference requests served
  • Uptime contribution
  • Data samples provided

All proofs are verified using ECDSA signatures (secp256k1).
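A proof of this kind is typically a canonical serialization of the work claim plus a signature over its hash. The sketch below builds and hashes a claim using only the standard library; the field names are hypothetical (not NeuroShard's actual schema), and the secp256k1 signing step is shown only as a comment since it needs a third-party library:

```python
import hashlib
import json

# Hypothetical work claim: field names are illustrative only.
claim = {
    "node_id": "node-abc123",
    "batches_trained": 500,
    "inference_requests": 12,
    "uptime_seconds": 3600,
}

# Canonical serialization: sorted keys and compact separators, so every
# verifier hashes byte-identical JSON for the same claim.
payload = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(payload).hexdigest()
print(digest)  # 64 hex characters

# Signing (not run here): with the third-party `ecdsa` package,
#   sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
#   signature = sk.sign(payload)
# and verifiers check the signature against the node's public key,
# so no trusted third party is needed.
```

The canonicalization step matters: without deterministic serialization, two honest parties could hash different bytes for the same claim and verification would fail.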


Configuration

CLI Options

neuroshard --token YOUR_TOKEN \
           --port 8000 \
           --tracker https://tracker.neuroshard.com \
           --training \
           --diloco-steps 500

| Option | Default | Description |
| --- | --- | --- |
| --token | Required | Your node authentication token |
| --port | 8000 | HTTP server port |
| --tracker | Auto | Tracker server URL |
| --training | False | Enable training mode |
| --diloco-steps | 500 | Inner steps before sync |
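For scripting or testing, the documented flags above can be mirrored with `argparse`. This is an illustrative stand-in, not NeuroShard's actual parser:

```python
import argparse

# Mirror of the documented CLI flags; defaults taken from the table above.
parser = argparse.ArgumentParser(prog="neuroshard")
parser.add_argument("--token", required=True, help="node authentication token")
parser.add_argument("--port", type=int, default=8000, help="HTTP server port")
parser.add_argument("--tracker", default=None, help="tracker URL (auto-discovered if omitted)")
parser.add_argument("--training", action="store_true", help="enable training mode")
parser.add_argument("--diloco-steps", type=int, default=500, help="inner steps before sync")

args = parser.parse_args(["--token", "demo", "--training"])
print(args.port, args.diloco_steps, args.training)  # → 8000 500 True
```

Note that argparse converts `--diloco-steps` to the attribute `args.diloco_steps`.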

See full CLI reference for all options.


Architecture

┌─────────────────────────────────────────────────────────────┐
│                      NeuroShard Node                        │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  NeuroLLM   │  │   DiLoCo    │  │  Proof of Neural    │  │
│  │  (Model)    │  │  Trainer    │  │  Work Ledger        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  P2P/DHT    │  │  Gradient   │  │  ECDSA Crypto       │  │
│  │  Network    │  │  Aggregator │  │  (secp256k1)        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
└─────────────────────────────────────────────────────────────┘


Links

| Resource | Link |
| --- | --- |
| Website | neuroshard.com |
| Documentation | docs.neuroshard.com |
| Whitepaper | PDF |
| Discord | discord.gg/4R49xpj7vn |
| Twitter | @shardneuro |
| PyPI | pypi.org/project/neuroshard |

Contributing

We welcome contributions! Please see our Contributing Guide for details.

# Clone the repo
git clone https://github.com/Nexaroa/neuroshard.git
cd neuroshard

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

License

Apache License 2.0 - see LICENSE for details.


Train AI. Earn NEURO. Own the Network.

Project details



Download files

Download the file for your platform.

Source Distribution

nexaroa-0.2.32.tar.gz (522.5 kB)

Built Distribution


nexaroa-0.2.32-py3-none-any.whl (430.3 kB)

File details

Details for the file nexaroa-0.2.32.tar.gz.

File metadata

  • Download URL: nexaroa-0.2.32.tar.gz
  • Upload date:
  • Size: 522.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes

Hashes for nexaroa-0.2.32.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 7a7e40102a71408ff7e0d3e522934d8bf3848466599defa877b26cfb8b066751 |
| MD5 | 6c23c34c65c30b859383c03d6bf756ae |
| BLAKE2b-256 | 5c0f9fc18046dd93decf2388f119d2def267e92b9cf01c746bb8dc2d8f9f4149 |


File details

Details for the file nexaroa-0.2.32-py3-none-any.whl.

File metadata

  • Download URL: nexaroa-0.2.32-py3-none-any.whl
  • Upload date:
  • Size: 430.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes

Hashes for nexaroa-0.2.32-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 081cd867620d58fcef2dda04e407e351509da71283fee59edc953534ea94577e |
| MD5 | 1b28cd74a176d45051d02e6a6885084c |
| BLAKE2b-256 | 04fefa4479100f1ea42e9b8053201c146ede7c8d07c5c9f9336770f79c993deb |

