# nn_live — Real-Time Neural Network Visualizer for PyTorch

Try it in Colab: https://colab.research.google.com/drive/1QdJB_BhDSFmEgEnTbHOLgQyl151ax2DS#scrollTo=ot-QGgSYJ-3Q

Watch your neural network learn in real time.

- Live neuron activations
- Weight flow visualization
- Works directly with PyTorch
- Opens in the browser automatically
## Why nn_live?

Most tools, such as TensorBoard, show you numbers. nn_live lets you *see*:

- Which neurons are actually learning
- How signals flow through your network
- Why your model is failing or succeeding

This turns a neural network from a black box into something you can understand visually.
## Comparison
| Tool | Real-Time | PyTorch | Visual |
|---|---|---|---|
| TensorBoard | ❌ | ✔ | ❌ |
| TensorFlow Playground | ✔ | ❌ | ✔ |
| nn_live | ✔ | ✔ | ✔ |
## Features
| Feature | Description |
|---|---|
| Live Nodes | Neurons glow cyan (positive) or red (negative) with intensity scaled to activation strength |
| Weight Lines | Connections colored and sized by weight magnitude. Weak weights auto-fade to reduce clutter |
| Flow Particles | Animated signal particles travel left to right along edges; speed scales with weight strength |
| Bias Pills | Small badges below each node show the neuron's bias value at a glance |
| Stats HUD | Live Epoch, Loss, Accuracy, Learning Rate, Optimizer, and Momentum panel |
| Hover Tooltips | Hover over any neuron to see its exact activation, bias, and layer info |
| Focus Mode | Click a neuron to highlight only its connections. Everything else dims to near-black |
| Controls | Toggle normalization, particle flow, adjust animation speed and neuron cap in real-time |
| Activation Badges | Auto-detects activation functions (ReLU, Sigmoid, Tanh, etc.) — scans past BatchNorm/Dropout |
| Dropout Visualization | Dropped neurons turn orange with a strike-through every forward pass |
| BatchNorm Badge | Layers with BatchNorm1d/2d/3d display a purple ⚖ BatchNorm badge |
| Gradient Norm Bars | Colored bar at the bottom of each layer: 🟢 healthy · 🟡 vanishing · 🔴 exploding |
| Optimizer Panel | Pass your optimizer to Visualizer to see LR, type, momentum, betas and weight decay live |
| Safety Limits | Automatically caps large networks to protect browser performance, with clear warnings |
## Installation

### Install from PyPI (recommended)

```bash
pip install nn_live
```

### Install from source (latest / development)

```bash
git clone https://github.com/ankit3890/nn_live.git
cd nn_live
pip install -e .
```

### Verify the installation

```python
import nn_live
print(nn_live.__version__)  # e.g. 0.1.0
```
## ⚡ Get Started in 10 Seconds

```python
from nn_live import Visualizer

viz = Visualizer(model)
```
## Quick Start

> **Requirement:** Your model must use `nn.Linear` layers. The visualizer is optimized for dense / fully connected networks.
```python
import torch
import torch.nn as nn
from nn_live import Visualizer

# 1. Define your model
class ANN(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(num_features, 128),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.Dropout(p=0.3),
            nn.Linear(128, 64),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Dropout(p=0.3),
            nn.Linear(64, 10),
        )

    def forward(self, features):
        return self.network(features)

num_features = 784
model = ANN(num_features)

# 2. Set up the optimizer and loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 3. Attach the visualizer — pass the optimizer to see LR / hyperparams live
viz = Visualizer(model, port=8000, optimizer=optimizer)

# 4. Train and push live updates each batch
for epoch in range(50):
    inputs = torch.rand(32, num_features)
    targets = torch.randint(0, 10, (32,))

    outputs = model(inputs)
    loss = loss_fn(outputs, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    viz.step(epoch=epoch + 1, loss=loss)
    print(f'Epoch: {epoch + 1}, Loss: {loss.item():.4f}')
```
## Reading the Dashboard

Once the dashboard is open in your browser, here is a guide to every visual element.

### Nodes (Neurons)

Each circle on the canvas represents a single neuron.
| Visual Property | What it means |
|---|---|
| Number inside the node | The Activation (Act) — the value this neuron outputs to the next layer |
| Cyan color | Activation is positive |
| Red color | Activation is negative |
| Glow brightness | How large the activation is — brighter glow means the neuron is firing strongly |
| Scale on hover | Node smoothly enlarges 1.25x when hovered to reveal details clearly |
### What is "Activation" (Act)?

The activation is the output value of a neuron — what it passes forward to the next layer:

```
Activation = f( sum(weight * input) + bias )
```

where `f` is your activation function (ReLU, Sigmoid, Tanh, etc.).
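As a plain-Python illustration (independent of nn_live or PyTorch), here is the activation a single ReLU neuron would report for a given set of weights, inputs, and bias:

```python
def relu(z):
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def neuron_activation(weights, inputs, bias, f):
    """Activation = f( sum(weight * input) + bias )."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return f(z)

# One neuron with two inputs: 0.5*1.0 + (-0.2)*2.0 + 0.1 = 0.2 (approx., float math)
act = neuron_activation([0.5, -0.2], [1.0, 2.0], 0.1, relu)
print(act)
```

A node showing `0.20` on the canvas is displaying exactly this value.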
Examples from the dashboard:

| Node appearance | What it means |
|---|---|
| Bright cyan, `0.85` | Strongly activated — large positive signal flowing forward |
| Dark, near `0.00` | Barely contributing — neuron is essentially silent |
| Bright red, `-0.72` | Strongly negative — actively suppressing the next layer |
### Bias Pill (`b: -0.04`)

The small rounded badge below each node shows the neuron's bias value.
Bias shifts the activation threshold independently of the inputs. A neuron with a large negative bias is much harder to activate, even with strong inputs. As training progresses, watch these values shift as the model learns.
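To see the thresholding effect concretely, compare two otherwise identical ReLU neurons in plain Python (illustrative only, not nn_live code):

```python
def relu(z):
    return max(0.0, z)

weights = [1.0, 1.0]
inputs = [0.3, 0.4]  # weighted sum is about 0.7
z = sum(w * x for w, x in zip(weights, inputs))

easy = relu(z + 0.0)   # bias 0.0: the neuron fires (about 0.7)
hard = relu(z - 1.0)   # bias -1.0: same inputs, but the neuron stays silent

print(easy, hard)
```

Same weights, same inputs; only the bias pill differs, and one neuron goes dark.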
### Connection Lines (Weights)

The lines between nodes represent the weights of the network.
| Visual Property | What it means |
|---|---|
| Cyan line | Positive weight — amplifies the signal |
| Red line | Negative weight — suppresses the signal |
| Line thickness | Proportional to weight magnitude — thick = strong, thin = weak |
| Near-invisible line | Weight magnitude < 0.1 — dimmed heavily to remove visual noise |
| Moving particles | Signal flow direction (left to right). Particle speed = weight strength |
### Stats Panel (Top Right)
| Field | What it means |
|---|---|
| Epoch | Current training epoch, passed via viz.step(epoch=...) |
| Loss | Training loss — watch this decrease as your model learns |
| Accuracy | Classification accuracy (0.0 to 1.0), passed via viz.step(accuracy=...) |
All three fields are optional. Pass only what you have — unused fields simply display `-`.
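nn_live does not compute accuracy for you; you pass whatever fraction you compute. A minimal plain-Python sketch of a batch-accuracy calculation (the final `viz.step` line is shown commented, using the API described above; the prediction values are made up):

```python
# Batch accuracy = correct predictions / batch size
predictions = [3, 1, 4, 1, 5, 9]  # argmax of model outputs (illustrative)
targets = [3, 1, 4, 0, 5, 2]      # ground-truth labels

correct = sum(p == t for p, t in zip(predictions, targets))
accuracy = correct / len(targets)
print(accuracy)  # 4 of 6 correct

# In the training loop:
# viz.step(epoch=epoch + 1, loss=loss, accuracy=accuracy)
```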
### Optimizer Hyperparameter Panel

Pass your optimizer via `Visualizer(model, optimizer=optimizer)` to unlock extra rows in the stats panel:
| Field | Optimizer | What it means |
|---|---|---|
| Optimizer | All | Optimizer class name (e.g. Adam, SGD) |
| LR | All | Current learning rate — auto-updates when using a LR scheduler |
| Momentum | SGD | Momentum value |
| Betas | Adam / AdamW | Beta1 and Beta2 coefficients |
| Weight Decay | All | L2 regularization strength |
These rows are hidden until an optimizer is passed; they appear automatically on the first `step()` call.
### Hover Tooltip

Hovering over any neuron shows a detailed tooltip:

- Layer name and neuron index
- `Act:` — activation value with a visual magnitude bar
- `Bias:` — bias value with a visual magnitude bar
- A hint to click for Focus Mode
### Focus Mode (Click a Node)

Click any neuron to lock focus on it.
- The entire network dims to near-black
- Only the incoming and outgoing connections for that neuron remain highlighted
- Click the same neuron again, or click empty canvas space, to unlock
This is useful for understanding the exact role of a single neuron and tracing which neurons influence it.
### Activation Function Badges
nn_live automatically detects the activation function applied after each layer and renders a pill-shaped badge (e.g. ReLU, Sigmoid) between the corresponding layer columns on the canvas.
Two detection strategies are used:

1. **Module-based** — detects activations defined as `nn.Module` attributes in `__init__`:

   ```python
   self.relu = nn.ReLU()        # detected
   self.sigmoid = nn.Sigmoid()  # detected
   ```

2. **Graph tracing** — uses `torch.fx` to trace the `forward()` method and detect functional calls:

   ```python
   x = torch.relu(x)     # detected
   x = torch.sigmoid(x)  # detected
   x = F.relu(x)         # detected
   x = x.sigmoid()       # detected
   ```
**Important:** use unique attribute names. If you reuse the same attribute name for multiple activations, PyTorch only registers the last one:

```python
# Wrong — PyTorch overwrites self.sigmoid each time; only the last survives
self.sigmoid = nn.Sigmoid()
self.sigmoid = nn.Sigmoid()

# Correct — each activation has a unique name
self.sigmoid1 = nn.Sigmoid()
self.sigmoid2 = nn.Sigmoid()
```
If no activation is detected for a layer gap (e.g. the model uses an unsupported activation or complex control flow), no badge is drawn — no visual clutter.
> nn_live scans forward past BatchNorm and Dropout to find the activation function, so `Linear → BatchNorm → ReLU → Dropout` is correctly detected as `ReLU`.
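The scan-past behavior can be pictured with a small sketch (hypothetical logic only, not the library's actual implementation; layer names here are plain strings standing in for module classes):

```python
# Modules that sit between a Linear layer and its activation and should be skipped
SKIP = {"BatchNorm1d", "BatchNorm2d", "BatchNorm3d", "Dropout"}
ACTIVATIONS = {"ReLU", "Sigmoid", "Tanh", "LeakyReLU", "GELU"}

def detect_activation(layers, linear_index):
    """Return the first activation after layers[linear_index], scanning past SKIP."""
    for name in layers[linear_index + 1:]:
        if name in SKIP:
            continue  # scan forward past BatchNorm / Dropout
        return name if name in ACTIVATIONS else None
    return None

layers = ["Linear", "BatchNorm1d", "ReLU", "Dropout", "Linear"]
print(detect_activation(layers, 0))  # BatchNorm1d is skipped, ReLU is found
```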
### Dropout Visualization

When your model includes `nn.Dropout`, nn_live hooks into each Dropout layer and tracks which neurons are dropped on every forward pass.
| Visual | What it means |
|---|---|
| Orange neuron | This neuron was zeroed by Dropout in the current forward pass |
| Strike-through line | Visual indicator that the neuron is switched off |
| Dimmed edges | Connections to/from dropped neurons are faded to near-invisible |
In `model.eval()` mode, Dropout is disabled — all neurons revert to their normal cyan/red colors.
### Gradient Norm Bars

A colored progress bar at the bottom of each layer column shows the L2 norm of that layer's weight gradients (available after `loss.backward()`).
| Bar Color | Gradient Norm | What it means |
|---|---|---|
| 🟢 Green | 1e-4 to 10 | Healthy — layer is learning well |
| 🟡 Yellow | < 1e-4 | Vanishing gradient — updates barely reach this layer |
| 🔴 Red | > 10 | Exploding gradient — unstable; consider gradient clipping |
The number displayed (e.g. ∇ 1.509) is the exact gradient L2 norm for that layer.
> Call `viz.step()` after `loss.backward()` and before `optimizer.zero_grad()` to ensure gradients are readable.
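The color rule from the table above can be sketched in plain Python (an illustration of the cutoffs listed there, not the library's internal code):

```python
import math

def grad_l2_norm(grads):
    """L2 norm of a flat list of gradient values."""
    return math.sqrt(sum(g * g for g in grads))

def grad_health(norm):
    """Classify a layer's gradient norm using the table's cutoffs."""
    if norm > 10:
        return "exploding"   # red: consider gradient clipping
    if norm < 1e-4:
        return "vanishing"   # yellow: updates barely reach this layer
    return "healthy"         # green: learning well

print(grad_health(grad_l2_norm([0.9, 1.2, -0.3])))  # norm ~1.53 -> healthy
print(grad_health(1e-6))                             # vanishing
print(grad_health(42.0))                             # exploding
```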
## Architecture Examples

nn_live tracks all `nn.Linear` layers. For advanced architectures (CNNs, RNNs) it skips spatial and recurrent layers and visualizes only the final dense classification layers, preventing browser overload from massive feature maps.
### 1. The Perceptron (Single Neuron)

```python
class Perceptron(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(3, 1)  # 3 inputs -> 1 output

    def forward(self, x):
        return torch.sigmoid(self.layer(x))

viz = Visualizer(Perceptron())
```
### 2. Standard ANN (Multi-Layer Perceptron)

```python
class ANN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 8)
        self.fc2 = nn.Linear(8, 6)
        self.fc3 = nn.Linear(6, 4)
        self.out = nn.Linear(4, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.relu(self.fc3(x))
        return torch.sigmoid(self.out(x))

viz = Visualizer(ANN())
```
### 3. CNN (Convolutional Neural Network)

nn_live ignores `nn.Conv2d` layers and visualizes only the final `nn.Linear` classification head.

```python
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 4, kernel_size=3)  # ignored by the visualizer
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(16, 5)                  # visualized
        self.fc2 = nn.Linear(5, 2)                   # visualized

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = self.flatten(x)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = SimpleCNN()
viz = Visualizer(model)

# 4x4 input -> conv output is 4 channels of 2x2 = 16 features, matching fc1
dummy_image = torch.rand(1, 1, 4, 4)
output = model(dummy_image)
viz.step()
```
### 4. RNN (Recurrent Neural Network)

nn_live ignores `nn.RNN` / `nn.LSTM` layers and visualizes only the final `nn.Linear` readout layers.

```python
class SimpleRNN(nn.Module):
    def __init__(self, input_size=10, hidden_size=8):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)  # ignored
        self.fc1 = nn.Linear(hidden_size, 4)                          # visualized
        self.fc2 = nn.Linear(4, 1)                                    # visualized

    def forward(self, x):
        out, _ = self.rnn(x)
        x = torch.relu(self.fc1(out[:, -1, :]))  # last time step only
        return torch.sigmoid(self.fc2(x))

model = SimpleRNN()
viz = Visualizer(model)

dummy_sequence = torch.rand(1, 5, 10)  # (batch, seq_len, input_size)
output = model(dummy_sequence)
viz.step()
```
## Limitations

Best for: ✔ fully connected networks (MLPs)

Partial support:

- ⚠ CNNs — only dense layers are visualized
- ⚠ RNNs — only readout layers are visualized
## Safety Limits

nn_live automatically detects when your model contains large layers and gives you control over how they are rendered in the browser.

### Warning Thresholds

| Threshold | Behavior |
|---|---|
| Any layer > 32 neurons | A performance warning is printed in the Jupyter output |
| Any layer > 32 neurons | A modal dialog appears in the browser the first time data arrives |
### How it works

In Jupyter, a warning is printed at startup:

```
nn_live: Live Visualizer started at http://127.0.0.1:8000
[PERF WARNING] Input layer has 100 neurons. The browser will ask you whether
to render all neurons or cap the display for performance.
```

In the browser, a dialog appears the first time the large network is detected:

> **Large Network Detected** — One or more layers in your model are large. Rendering all neurons may slow down or crash your browser.
>
> What would you like to do?
>
> [Show All Neurons] [Cap at 64 (Recommended)]
- Choosing **Show All Neurons** renders every neuron in the layer. Node sizes shrink dynamically so all neurons fit within the visible canvas height.
- Choosing the cap limits the display to the capped number of neurons per layer (adjustable via the slider). The layer header then shows `32 / 128 Neurons (capped)` so it is always clear that the view is partial.

The Neuron Cap Slider (4–128) in the Controls panel lets you adjust the cap in real time without restarting.

The dialog appears only once per browser session and does not affect your Python training loop in any way. Full weight, bias, and activation data is always sent from the backend — the cap is purely a rendering decision.
### Why large layers are a concern

Rendering connections in the browser is GPU/CPU intensive. A `Linear(500, 500)` layer creates 250,000 curved lines and animated particles simultaneously, which can crash the browser tab.
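The concern is simple arithmetic: a fully connected layer contributes `in_features * out_features` connection lines, so the line count grows quadratically with layer width. A quick sanity check:

```python
def edge_count(in_features, out_features):
    """Number of connection lines a Linear(in, out) layer contributes."""
    return in_features * out_features

print(edge_count(500, 500))  # 250,000 edges: enough to stall a browser tab
print(edge_count(64, 64))    # 4,096 edges: comfortably renderable
```

Capping a 500-wide layer at 64 displayed neurons cuts the drawn edges by roughly 60x.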
## Troubleshooting

> Always restart the Jupyter kernel when you re-run `viz = Visualizer(...)`.
| Problem | Cause | Fix |
|---|---|---|
| `[Errno 10048] Address in use` | `Visualizer()` was called twice on the same port | Restart the kernel, or use `Visualizer(model, port=8001)` |
| Values frozen at 0.00 | Used `model.forward(inputs)` instead of `model(inputs)` | Always call `outputs = model(inputs)` — this triggers PyTorch hooks |
| Browser lag / FPS drops | Model has very large layers | Lower the neuron cap slider or choose the cap in the dialog |
| Accuracy not showing | Accuracy not passed to `viz.step()` | Add `accuracy=acc_value` to your `viz.step()` call |
| No orange dropout neurons | Model is in eval mode | Call `model.train()` before the training loop |
| Gradient bars not showing | `viz.step()` called before `loss.backward()` | Call `viz.step()` after `loss.backward()` and `optimizer.step()` |
| Optimizer panel not appearing | Optimizer not passed to `Visualizer` | Use `Visualizer(model, optimizer=optimizer)` |
| `AssertionError` in asyncio (Windows) | Windows ProactorEventLoop bug | Handled internally — the error is harmless and training continues |
| Dialog appeared but neurons still capped | Old `tracker.py` cached in Jupyter memory | Restart the Jupyter kernel so the new code is loaded |
## Author

Built by Ankit Kumar Singh · GitHub: [ankit3890](https://github.com/ankit3890)