NeuroSchemaX
Neural network architecture visualization and export toolkit, powered by NN-SVG.
NeuroSchemaX parses neural-network models (ONNX, PyTorch, TensorFlow, or hand-written JSON/YAML specs), normalises them into a semantic representation, and renders them using the NN-SVG JavaScript engine — producing standalone HTML and SVG diagrams suitable for papers, theses, READMEs, and documentation.
Best-supported targets: MLP/FCNN and sequential CNN-style architectures (LeNet-style, AlexNet/VGG-style). ResNet, U-Net, Transformer, and other complex graph structures are rendered as honest approximate summaries; exact topology cannot be drawn for architectures that fall outside the three sequential NN-SVG families.
What it does
- Parse — reads ONNX, PyTorch, Keras, JSON, or YAML and understands the layer structure.
- Analyse — detects layer types, skip connections, and block groupings, then recommends the best diagram style.
- Render — produces standalone offline HTML or SVG (via headless Chromium), plus JSON export formats.
Installation
Install from PyPI:
pip install neuroschemax
Install from GitHub:
pip install git+https://github.com/arashsajjadi/NeuroSchemaX.git
Optional extras:
pip install "neuroschemax[onnx]" # ONNX input (already in base install)
pip install "neuroschemax[torch]" # PyTorch model input
pip install "neuroschemax[tf]" # TensorFlow / Keras model input
pip install "neuroschemax[svg]" # SVG export via headless Chromium
playwright install chromium # also required for SVG export
pip install "neuroschemax[colab]" # Colab / Jupyter inline display (IPython)
pip install "neuroschemax[dev]" # tests and linter
pip install "neuroschemax[all]" # everything except Playwright/Chromium
PyTorch → ONNX (recommended for real PyTorch models):
pip install "neuroschemax[torch]" onnxscript
onnxscript is required by torch.onnx.export in torch >= 2.x.
Google Colab:
!pip install "neuroschemax[colab]"
Save and download HTML for full interactive rendering — Colab's inline rendering is limited for complex JS diagrams.
Quickstart
Python:
import neuroschemax as nsx
nsx.draw("model.onnx")
nsx.savefig("architecture.html")
Command line:
neuroschemax draw model.onnx
Open the generated HTML in any browser — no internet connection required.
Python API
Simplified stateful API
import neuroschemax as nsx
nsx.draw("model.onnx") # parse and stash
nsx.savefig("diagram.html") # use stashed arch — format inferred from extension
nsx.save_html("out.html") # also HTML
nsx.show() # open in browser (inline in Jupyter)
# Or pass source directly
nsx.save_html("out.html", "model.onnx")
Figure object API
fig = nsx.figure(width=1400, height=700, theme="paper")
fig.draw("model.onnx")
fig.savefig("diagram.html")
fig.save_html("diagram.html")
fig.save_svg("diagram.svg") # needs Playwright
fig.show()
fig.export_debug_json("debug.json")
fig.export_paper_json("paper.json")
fig.export_nnsvg_json("spec.json")
# Matplotlib-style sizing
fig = nsx.figure(figsize=(12, 6), dpi=120, theme="paper")
# Chaining
nsx.figure().draw("model.onnx").save_html("out.html")
Explicit functional API
nsx.parse_model(source) # SemanticArchitecture
nsx.summarize_model(source) # str
nsx.recommend_view(source) # dict: family/confidence/is_approximate/reason/warnings
nsx.build_nnsvg_spec(source, **kw) # NNSVGSpec
nsx.render_network_html(source) # HTML string
nsx.render_network_svg(source) # SVG string (needs Playwright)
nsx.save_network_html(path, source) # Path
nsx.save_network_svg(path, source) # Path (needs Playwright)
nsx.export_paper_json(source) # JSON string
nsx.export_debug_json(source) # JSON string
nsx.save_paper_json(path, source) # Path
nsx.save_debug_json(path, source) # Path
nsx.save_nnsvg_spec(path, spec) # Path
nsx.doctor() # dict with status/version/assets/deps/messages
Rendering keyword arguments
These are accepted by every rendering function and nsx.figure():
| Argument | Type | Default | Description |
|---|---|---|---|
| theme | str | "paper" | "paper", "thesis", "debug", "readme" |
| style | str | auto | Force "fcnn", "lenet", or "alexnet" |
| figsize | (float, float) | — | Matplotlib-style: width = round(w * dpi), height = round(h * dpi) |
| dpi | float | 100 | Pixels per inch used with figsize |
| width | int | 1200 | Canvas width in pixels (overrides figsize) |
| height | int | 700 | Canvas height in pixels (overrides figsize) |
| title | str | model name | Diagram title shown above the diagram |
| show_labels | bool | True | Show layer labels |
| show_shapes | bool | True | Show shape dimensions in labels |
| compact | bool | False | Use compact layout (tighter per-layer budget) |
| label_mode | str | "auto" | "auto", "name", "compact", "shape", "full" |
| detail_level | str | "auto" | "auto", "summary", "full" |
| show_activations | bool | True | Fuse activation names into preceding layer labels |
| transformer_mode | str | "block_summary" | "block_summary" or "unsupported" |
| approximate_mode | str | "warn" | "warn", "error", or "allow" |
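The figsize/dpi interaction mirrors Matplotlib sizing. As a quick sanity check, the documented formula can be reproduced in plain Python (the helper name here is illustrative, not part of the package):

```python
def figsize_to_pixels(figsize, dpi=100.0):
    """Convert a Matplotlib-style (width_in, height_in) figsize to
    canvas pixels using the documented formula: round(dim * dpi)."""
    w_in, h_in = figsize
    return round(w_in * dpi), round(h_in * dpi)

# figsize=(12, 6) at dpi=120 gives a 1440x720 canvas,
# matching nsx.figure(figsize=(12, 6), dpi=120).
print(figsize_to_pixels((12, 6), dpi=120))  # (1440, 720)
```

Explicit width/height arguments override whatever figsize would compute.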
label_mode:
- auto — compact labels for small models; name-only for large models
- name — layer name only (shortest; never overlaps)
- compact — operation type + key parameters: Conv 64 k3, MaxP k2 s2, Dense 128, GAP
- shape — shape only, no name
- full — layer name + complete shape string; may be wide for large models
Activations following a layer are fused into the label when show_activations=True:
Conv 64 k3 +ReLU, Dense 128 +GeLU. BatchNorm, LayerNorm, and Dropout appear as
inline badges: +BN, +LN, +Drop 0.5.
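To make the compact label grammar concrete, here is a small illustrative formatter that reproduces the label shapes shown above, including the fused activation suffix. This is a sketch of the documented format, not the package's internal code:

```python
def compact_label(op, channels=None, kernel=None, stride=None, activation=None):
    """Build a compact layer label like 'Conv 64 k3 +ReLU' or 'MaxP k2 s2'.

    Illustrative only: mirrors the documented label format."""
    parts = [op]
    if channels is not None:
        parts.append(str(channels))
    if kernel is not None:
        parts.append(f"k{kernel}")
    if stride is not None:
        parts.append(f"s{stride}")
    label = " ".join(parts)
    if activation:  # fused activation, as when show_activations=True
        label += f" +{activation}"
    return label

print(compact_label("Conv", 64, kernel=3, activation="ReLU"))  # Conv 64 k3 +ReLU
print(compact_label("MaxP", kernel=2, stride=2))               # MaxP k2 s2
print(compact_label("Dense", 128))                             # Dense 128
```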
detail_level:
- auto — all layers for small models; grouped blocks for models with more than 12 spec layers
- summary — groups conv/pool sequences into named blocks with metadata, e.g. Block 2 / 4×Conv k3, 128ch / Pool ↓2
- full — every individual layer shown; intended for inspection and debugging, not screenshots
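Under the auto rule, the cutover is the spec-layer count. A minimal sketch of that decision (the threshold follows the text above; the function name and the auto→summary/full mapping are an interpretation, not library code):

```python
def choose_detail_level(num_spec_layers, requested="auto"):
    """Resolve detail_level: under 'auto', group into blocks once the
    model exceeds 12 spec layers; otherwise show every layer."""
    if requested != "auto":
        return requested
    return "summary" if num_spec_layers > 12 else "full"

print(choose_detail_level(8))                     # full
print(choose_detail_level(30))                    # summary
print(choose_detail_level(30, requested="full"))  # full
```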
transformer_mode:
- block_summary — approximate conceptual block sequence: Tokens/Input → Embedding → [MH-Attn] / Add & Norm → [FFN] / Add & Norm → [Head]. This is not exact Transformer rendering: Q/K/V projections, individual heads, exact residual paths, and tensor flow are not drawn.
- unsupported — renders a structured HTML diagnostic card instead of a diagram, listing detected components and suggesting block_summary or debug JSON
approximate_mode:
- warn — amber warning badge shown in HTML (default)
- error — raises RenderError before rendering an approximate diagram
- allow — suppresses warning badges
CLI
# Draw (output defaults to <model>.html)
neuroschemax draw model.onnx
neuroschemax draw model.onnx -o diagram.html --theme paper
# Render with full options
neuroschemax render model.onnx -o diagram.html --theme thesis --width 1600
neuroschemax render model.onnx -o diagram.svg
# Inspect and summarise
neuroschemax inspect model.onnx
neuroschemax summarize model.onnx
neuroschemax summarize model.onnx --format markdown
neuroschemax recommend-view model.onnx
# JSON exports
neuroschemax export-paper-json model.onnx -o model.paper.json
neuroschemax export-debug-json model.onnx -o model.debug.json
neuroschemax export-nnsvg model.onnx -o model.nnsvg.json
# Environment diagnostics
neuroschemax doctor
Supported inputs
| Source | Notes |
|---|---|
| .onnx file | Standard ONNX format |
| .json file | Manual spec JSON |
| .yaml / .yml file | Manual spec YAML |
| Python dict | Manual spec as a Python dict |
| torch.nn.Module | Requires pip install torch |
| tf.keras.Model | Requires pip install tensorflow |
| onnx.ModelProto | Pre-loaded ONNX object |
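For the manual-spec path, a hand-written dict can be passed straight to the API, or serialised to JSON/YAML and passed as a file path. The field names below are purely illustrative, not the authoritative NeuroSchemaX spec schema; the point is that a plain dict and its JSON form are interchangeable inputs:

```python
import json

# Hypothetical manual spec; field names are for illustration only.
spec = {
    "name": "tiny-mlp",
    "layers": [
        {"kind": "input", "shape": [784]},
        {"kind": "dense", "units": 128, "activation": "relu"},
        {"kind": "dense", "units": 10, "activation": "softmax"},
    ],
}

# The spec survives a JSON round-trip, so it could equally be written
# to model.json with json.dump(...) and passed as a path.
assert json.loads(json.dumps(spec)) == spec
# nsx.draw(spec)          # dict input
# nsx.draw("model.json")  # file input
```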
Supported outputs
| Format | API | CLI |
|---|---|---|
| Standalone HTML | save_network_html | render -o .html |
| SVG (via Playwright) | save_network_svg | render -o .svg |
| Paper JSON | save_paper_json | export-paper-json |
| Debug JSON | save_debug_json | export-debug-json |
| NN-SVG JSON spec | save_nnsvg_spec | export-nnsvg |
| Text summary | summarize_model | summarize |
| Markdown summary | summarize_model(..., "markdown") | summarize --format markdown |
Diagram families and fidelity
NN-SVG supports three sequential diagram families. NeuroSchemaX selects the best fit automatically based on the model structure.
| Architecture | Rendered as | Fidelity | What is preserved |
|---|---|---|---|
| MLP / dense network | FCNN neuron columns | Exact | — |
| Small CNN (≤ 3 convs) | LeNet feature maps | Exact | — |
| VGG-style deep CNN | AlexNet feature maps | Exact | — |
| ResNet / residual blocks | Block summary (skip collapsed) | Approximate | Skip links in debug JSON |
| U-Net / encoder-decoder | Block summary (concat collapsed) | Approximate | Decoder branches in debug JSON |
| Transformer / attention | Conceptual block sequence | Approximate | All layers in debug JSON |
| LSTM / GRU / RNN | Block sequence | Approximate | All layers in debug JSON |
| Arbitrary DAG | Not supported | — | Full graph in debug JSON |
NeuroSchemaX does not claim to render arbitrary DAGs, exact ResNet skip edges,
exact U-Net encoder-decoder skip topology, or exact Transformer attention flow.
Models that fall outside the three sequential families are shown as honest
approximate summaries with clear on-diagram markers (+skip collapsed,
concat collapsed) and amber warning badges in the HTML.
ResNet summary: Stem → Residual Block N (n×Conv k3, Cch, +skip collapsed) → Head / Classifier
U-Net summary: Encoder → Bottleneck → Decoder (concat collapsed) → Segmentation Head
Transformer block summary: Tokens/Input → Embedding → [MH-Attn] / Add & Norm → [FFN] / Add & Norm → [Head]
The complete layer graph (every Add/Concat/Upsample with attributes) is
preserved in export-debug-json regardless of what is shown visually.
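Because the full graph survives in the debug export, collapsed skip edges can still be recovered programmatically. A sketch, assuming a hypothetical debug-JSON layout with a top-level "nodes" list carrying "op" fields (the real schema may differ; check an actual export first):

```python
import json

# Hypothetical debug JSON, shaped for illustration only.
debug_json = json.dumps({
    "nodes": [
        {"name": "conv1", "op": "Conv"},
        {"name": "conv2", "op": "Conv"},
        {"name": "add1", "op": "Add", "inputs": ["conv2", "conv1"]},
    ]
})

graph = json.loads(debug_json)
# Merge nodes (Add/Concat) are exactly where skip edges were collapsed.
merges = [n for n in graph["nodes"] if n["op"] in ("Add", "Concat")]
print([m["name"] for m in merges])  # ['add1']
```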
How architecture recommendation works
info = nsx.recommend_view("model.onnx")
# {
# "family": "alexnet",
# "confidence": "high",
# "is_approximate": False,
# "reason": "Deep CNN with 8 convolutional layers, mapped to AlexNet",
# "warnings": [],
# "complexity_hint": "sequential"
# }
Selection rules (in priority order):
- Attention layers → block-level LeNet view (LOW confidence, is_approximate=True)
- Recurrent layers (LSTM/GRU) → block-level LeNet view (LOW confidence)
- No convolutions, at least one dense layer → FCNN (HIGH confidence)
- 1–3 conv layers → LeNet (HIGH; MEDIUM with skip/merge warning)
- 4+ conv layers → AlexNet (HIGH; MEDIUM if skip connections present)
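The priority order is easy to express directly. This is a sketch of the documented rules, not the library's implementation; the function and family names are illustrative:

```python
def recommend_family(has_attention, has_recurrent, n_conv, n_dense,
                     has_skip=False):
    """Apply the documented selection rules in priority order.

    Returns (family, confidence). Sketch only, not library code."""
    if has_attention:
        return "lenet-blocks", "LOW"
    if has_recurrent:
        return "lenet-blocks", "LOW"
    if n_conv == 0 and n_dense >= 1:
        return "fcnn", "HIGH"
    if 1 <= n_conv <= 3:
        return "lenet", "MEDIUM" if has_skip else "HIGH"
    if n_conv >= 4:
        return "alexnet", "MEDIUM" if has_skip else "HIGH"
    return "unknown", "LOW"

print(recommend_family(False, False, n_conv=8, n_dense=3))  # ('alexnet', 'HIGH')
print(recommend_family(False, False, n_conv=0, n_dense=2))  # ('fcnn', 'HIGH')
```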
Troubleshooting
Run neuroschemax doctor first.
SVG export fails:
pip install playwright
playwright install chromium
Missing assets:
pip install --force-reinstall neuroschemax
HTML is blank: Open in a modern browser (Chrome/Firefox/Edge). Check the
browser console for JS errors. Try --theme debug for extra output.
See docs/troubleshooting.md for more.
FAQ
Q: Does it work offline? Yes. Generated HTML files are fully self-contained — all JS is embedded.
Q: Can I use it in a Jupyter notebook?
Yes. If fig is the last expression in a cell, _repr_html_() renders the
diagram inline automatically. fig.to_html() returns the HTML string for
manual display() calls. nsx.show() / fig.show() open in the browser
outside notebooks.
Q: Can I use it in Google Colab?
Yes, with limitations. Install with !pip install "neuroschemax[colab]".
Inline rendering in Colab is limited — use fig.save_html("diagram.html")
and download the file for full interactivity.
Q: My PyTorch ONNX export fails with No module named 'onnxscript'.
torch >= 2.x requires onnxscript for torch.onnx.export.
Install it: pip install onnxscript.
Q: Diagram shapes show ?.
Shape propagation is best-effort. Common causes: dynamic batch axes (use a
concrete size at export), or BatchNorm/Dropout removed by ONNX exporter in
eval mode (expected — they are folded out of the computation graph).
Q: Does it support ONNX opset X?
It reads graph topology and layer attributes regardless of opset version.
Unsupported op types are normalised to LayerKind.UNKNOWN.
Q: Can I customise the diagram further? Override any rendering option via kwargs. For pixel-level control, export the NN-SVG JSON spec and load it in the NN-SVG web app.
Q: Why is confidence MEDIUM/LOW?
The model has features that don't map perfectly to the chosen NN-SVG family.
The diagram is still generated; read the warnings for specifics.
Q: Why does the diagram show fewer layers than my model has?
Large models are grouped into conv/pool blocks by default (detail_level="auto").
Pass detail_level="full" to see every individual layer. Note that full mode
is intended for inspection; for screenshots, detail_level="summary" or the
default produces more readable output.
Development
git clone https://github.com/arashsajjadi/NeuroSchemaX.git
cd NeuroSchemaX
pip install -e ".[dev]"
pytest tests/ -v
ruff check src/ tests/
See CONTRIBUTING.md.
Author
Arash Sajjadi — maintainer and author.
GitHub: arashsajjadi/NeuroSchemaX
License
MIT — see LICENSE.
NN-SVG is by Alex Lenail and also MIT-licensed. NeuroSchemaX integrates it as an embedded rendering engine.