
NeuroShard

Decentralized LLM Training Network

Website · Documentation · Whitepaper · Discord · Twitter


What is NeuroShard?

NeuroShard is a decentralized network for training large language models. Anyone can contribute GPU/CPU power and earn NEURO tokens through Proof of Neural Work.

Unlike centralized AI companies, NeuroShard distributes both the compute AND the rewards across all participants.

Key Features

Feature                Description
DiLoCo Training        Distributed Low-Communication training: sync every 500 steps, not every step
Byzantine Tolerance    Robust gradient aggregation (Krum, Trimmed Mean) handles malicious nodes
NEURO Rewards          Earn tokens for contributing compute via Proof of Neural Work
Cryptographic Proofs   ECDSA-signed proofs enable trustless verification
Web Dashboard          Real-time monitoring at http://localhost:8000
P2P Network            Decentralized peer discovery and gossip protocol

Quick Start

Installation

pip install neuroshard

Run a Node

# Get your token from neuroshard.com
neuroshard --token YOUR_TOKEN

That's it! Your node will:

  1. Connect to the network
  2. Start training model layers
  3. Earn NEURO for your contribution

Web Dashboard

Open http://localhost:8000 to see:

  • Node status and role
  • Training progress (DiLoCo inner/outer steps)
  • NEURO balance
  • Network statistics

System Requirements

Component   Minimum    Recommended
RAM         4 GB       8+ GB
Python      3.9+       3.10+
GPU         Optional   NVIDIA, 8 GB+ VRAM

GPU Support (Optional)

For NVIDIA GPUs with CUDA:

pip install torch --index-url https://download.pytorch.org/whl/cu118

How It Works

DiLoCo Distributed Training

NeuroShard uses DiLoCo (Distributed Low-Communication) for efficient distributed training:

┌─────────────────────────────────────────────────┐
│  INNER LOOP (500 steps - no communication)      │
│  • Each node trains independently               │
│  • Local AdamW optimization                     │
└─────────────────────────────────────────────────┘
                      ↓
┌─────────────────────────────────────────────────┐
│  OUTER LOOP (sync with peers)                   │
│  • Compute pseudo-gradient: Δθ = θ₀ - θ₅₀₀     │
│  • Gossip to peers                              │
│  • Byzantine-tolerant aggregation               │
│  • Nesterov momentum update                     │
└─────────────────────────────────────────────────┘
                      ↓
              (Repeat)

This reduces network communication by 500x compared to synchronous training!
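The inner/outer loop above can be sketched in a few lines. This is an illustrative toy, not NeuroShard's actual implementation: it uses plain SGD on a quadratic objective instead of AdamW on an LLM, and coordinate-wise trimmed mean as the Byzantine-tolerant aggregator. All function and variable names are hypothetical.

```python
# Toy DiLoCo sketch: local inner loops, pseudo-gradients, trimmed-mean
# aggregation, and a Nesterov-style outer update. Illustrative only.
import numpy as np

def inner_loop(theta, target, steps=500, lr=0.01):
    """Inner loop: local gradient steps with no communication.
    (NeuroShard uses AdamW here; SGD keeps the sketch short.)"""
    theta = theta.copy()
    for _ in range(steps):
        grad = theta - target        # gradient of 0.5*||theta - target||^2
        theta -= lr * grad
    return theta

def trimmed_mean(deltas, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` smallest and largest
    values per coordinate, then average (robust to outlier nodes)."""
    s = np.sort(np.stack(deltas), axis=0)
    return s[trim:len(deltas) - trim].mean(axis=0)

theta = np.zeros(4)                  # shared ("outer") parameters
momentum = np.zeros_like(theta)
targets = [np.full(4, 1.0), np.full(4, 1.1), np.full(4, 0.9),
           np.full(4, 50.0)]         # last node is malicious
outer_lr, beta = 0.7, 0.9

for _ in range(20):
    # each node trains independently from the same starting point
    local_params = [inner_loop(theta, t) for t in targets]
    # pseudo-gradient per node: delta = theta_0 - theta_500
    deltas = [theta - th for th in local_params]
    agg = trimmed_mean(deltas, trim=1)
    # Nesterov-momentum outer update on the aggregated pseudo-gradient
    momentum = beta * momentum + agg
    theta = theta - outer_lr * (agg + beta * momentum)

print(theta)  # settles near the honest nodes' consensus, not at 50
```

Note that the malicious node pulling toward 50 is trimmed away at every outer step, so the shared parameters converge near the honest consensus.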

Proof of Neural Work

Nodes earn NEURO by submitting cryptographically signed proofs of their work:

  • Training batches processed
  • Inference requests served
  • Uptime contribution
  • Data samples provided

All proofs are verified using ECDSA signatures (secp256k1).
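As a sketch of the pattern, a proof record can be canonically serialized and hashed before signing. The field names below are hypothetical, not NeuroShard's actual wire format, and since Python's standard library has no secp256k1 ECDSA, only the digest that a node's key would sign is computed here.

```python
# Illustrative proof-of-neural-work record: canonical serialization
# plus a SHA-256 digest. The digest is what an ECDSA (secp256k1) key
# would sign; field names are hypothetical.
import hashlib
import json

proof = {
    "node_id": "node-abc123",        # hypothetical identifiers
    "batches_trained": 512,
    "inference_requests": 40,
    "uptime_seconds": 3600,
    "data_samples": 2048,
}

# Canonical JSON (sorted keys, no extra whitespace) so every verifier
# hashes byte-identical input before checking the signature.
payload = json.dumps(proof, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(payload).hexdigest()
print(digest)
```

The key design point is canonicalization: without a deterministic byte encoding, two honest verifiers could hash different serializations of the same proof and disagree on the signature.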


Configuration

CLI Options

neuroshard --token YOUR_TOKEN \

           --port 8000 \
           --tracker https://tracker.neuroshard.com \
           --training \
           --diloco-steps 500

Option           Default    Description
--token          Required   Your node authentication token
--port           8000       HTTP server port
--tracker        Auto       Tracker server URL
--training       False      Enable training mode
--diloco-steps   500        Inner steps before sync

See full CLI reference for all options.


Architecture

┌─────────────────────────────────────────────────────────────┐
│                      NeuroShard Node                        │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  NeuroLLM   │  │   DiLoCo    │  │  Proof of Neural    │  │
│  │  (Model)    │  │  Trainer    │  │  Work Ledger        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  P2P/DHT    │  │  Gradient   │  │  ECDSA Crypto       │  │
│  │  Network    │  │  Aggregator │  │  (secp256k1)        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
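The Gradient Aggregator component can also use Krum-style selection, mentioned under Key Features. A minimal sketch of Krum (illustrative, not NeuroShard's actual code): given n updates of which at most f may be Byzantine, pick the update with the smallest summed squared distance to its n − f − 2 nearest neighbors.

```python
# Minimal Krum selection sketch: the chosen update is the one closest
# to its n - f - 2 nearest neighbors, which excludes outliers that
# malicious nodes submit. Illustrative only.
import numpy as np

def krum(updates, f):
    n = len(updates)
    k = n - f - 2                    # neighbors scored per candidate
    X = np.stack(updates)
    # pairwise squared distances between all updates
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    scores = []
    for i in range(n):
        others = np.sort(np.delete(d[i], i))
        scores.append(others[:k].sum())  # sum over k nearest neighbors
    return updates[int(np.argmin(scores))]

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
          np.array([0.9, 1.1]), np.array([1.0, 1.05])]
byzantine = [np.array([100.0, -100.0])]
chosen = krum(honest + byzantine, f=1)
print(chosen)  # one of the honest updates, never the outlier
```

Unlike trimmed mean, Krum returns one of the submitted updates verbatim, which bounds how far a colluding minority can drag the aggregate.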


Links

Resource        Link
Website         neuroshard.com
Documentation   docs.neuroshard.com
Whitepaper      PDF
Discord         discord.gg/4R49xpj7vn
Twitter         @shardneuro
PyPI            pypi.org/project/neuroshard

Contributing

We welcome contributions! Please see our Contributing Guide for details.

# Clone the repo
git clone https://github.com/Nexaroa/neuroshard.git
cd neuroshard

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

License

Apache License 2.0 - see LICENSE for details.


Train AI. Earn NEURO. Own the Network.
