
NeuroSchemaX

Neural network architecture visualization and export, powered by NN-SVG.

NeuroSchemaX parses neural-network models (ONNX, PyTorch, TensorFlow, or hand-written JSON/YAML specs), normalises them into a semantic representation, and renders them using the NN-SVG JavaScript engine — producing standalone HTML and SVG diagrams suitable for papers, theses, READMEs, and documentation.

Best-supported targets: MLP/FCNN and sequential CNN-style architectures (LeNet-style, AlexNet/VGG-style). ResNet, U-Net, Transformer, and other complex graph structures are rendered as honest approximate summaries; exact topology cannot be drawn for architectures that fall outside the three sequential NN-SVG families.


What it does

  1. Parse — reads ONNX, PyTorch, Keras, JSON, or YAML and understands the layer structure.
  2. Analyse — detects layer types, skip connections, and block groupings, then recommends the best diagram style.
  3. Render — produces standalone offline HTML or SVG (via headless Chromium), plus JSON export formats.

Installation

Install from PyPI:

pip install neuroschemax

Install from GitHub:

pip install git+https://github.com/arashsajjadi/NeuroSchemaX.git

Optional extras:

pip install "neuroschemax[onnx]"   # ONNX input (already in base install)
pip install "neuroschemax[torch]"  # PyTorch model input
pip install "neuroschemax[tf]"     # TensorFlow / Keras model input
pip install "neuroschemax[svg]"    # SVG export via headless Chromium
playwright install chromium        # also required for SVG export

pip install "neuroschemax[colab]"  # Colab / Jupyter inline display (IPython)
pip install "neuroschemax[dev]"    # tests and linter
pip install "neuroschemax[all]"    # everything except Playwright/Chromium

PyTorch → ONNX (recommended for real PyTorch models):

pip install "neuroschemax[torch]" onnxscript

onnxscript is required by torch.onnx.export in torch >= 2.x.

Google Colab:

!pip install "neuroschemax[colab]"

Save and download HTML for full interactive rendering — Colab's inline rendering is limited for complex JS diagrams.


Quickstart

import neuroschemax as nsx

nsx.draw("model.onnx")
nsx.savefig("architecture.html")

Or, equivalently, from the command line:

neuroschemax draw model.onnx

Open the generated HTML in any browser — no internet connection required.


Python API

Simplified stateful API

import neuroschemax as nsx

nsx.draw("model.onnx")        # parse and stash
nsx.savefig("diagram.html")   # use stashed arch — format inferred from extension
nsx.save_html("out.html")     # also HTML
nsx.show()                    # open in browser (inline in Jupyter)

# Or pass source directly
nsx.save_html("out.html", "model.onnx")

Figure object API

fig = nsx.figure(width=1400, height=700, theme="paper")
fig.draw("model.onnx")
fig.savefig("diagram.html")
fig.save_html("diagram.html")
fig.save_svg("diagram.svg")       # needs Playwright
fig.show()
fig.export_debug_json("debug.json")
fig.export_paper_json("paper.json")
fig.export_nnsvg_json("spec.json")

# Matplotlib-style sizing
fig = nsx.figure(figsize=(12, 6), dpi=120, theme="paper")

# Chaining
nsx.figure().draw("model.onnx").save_html("out.html")

Explicit functional API

nsx.parse_model(source)               # SemanticArchitecture
nsx.summarize_model(source)           # str
nsx.recommend_view(source)            # dict: family/confidence/is_approximate/reason/warnings

nsx.build_nnsvg_spec(source, **kw)    # NNSVGSpec
nsx.render_network_html(source)       # HTML string
nsx.render_network_svg(source)        # SVG string (needs Playwright)
nsx.save_network_html(path, source)   # Path
nsx.save_network_svg(path, source)    # Path (needs Playwright)

nsx.export_paper_json(source)         # JSON string
nsx.export_debug_json(source)         # JSON string
nsx.save_paper_json(path, source)     # Path
nsx.save_debug_json(path, source)     # Path
nsx.save_nnsvg_spec(path, spec)       # Path

nsx.doctor()                          # dict with status/version/assets/deps/messages
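Since doctor() returns a plain dict, optional features can be gated on it before a batch export. The helper below is a hypothetical sketch: only the top-level keys are documented, and the layout inside the deps entry is an assumption.

```python
def svg_ready(report):
    """Hypothetical check: decide whether SVG export is worth attempting,
    based on the status/deps/messages dict nsx.doctor() is documented to
    return. The exact structure of report["deps"] is an assumption."""
    if report.get("status") != "ok":
        return False
    deps = report.get("deps", {})
    return bool(deps.get("playwright"))

# Usage sketch:
#   import neuroschemax as nsx
#   report = nsx.doctor()
#   if svg_ready(report):
#       nsx.save_network_svg("model.svg", "model.onnx")
#   else:
#       print("\n".join(report.get("messages", [])))  # fall back to HTML
#       nsx.save_network_html("model.html", "model.onnx")
```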

Rendering keyword arguments

These are accepted by every rendering function and nsx.figure():

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| theme | str | "paper" | "paper", "thesis", "debug", "readme" |
| style | str | auto | Force "fcnn", "lenet", or "alexnet" |
| figsize | (float, float) | | Matplotlib-style: width = round(w * dpi), height = round(h * dpi) |
| dpi | float | 100 | Pixels per inch used with figsize |
| width | int | 1200 | Canvas width in pixels (overrides figsize) |
| height | int | 700 | Canvas height in pixels (overrides figsize) |
| title | str | model name | Diagram title shown above the diagram |
| show_labels | bool | True | Show layer labels |
| show_shapes | bool | True | Show shape dimensions in labels |
| compact | bool | False | Use compact layout (tighter per-layer budget) |
| label_mode | str | "auto" | "auto", "name", "compact", "shape", "full" |
| detail_level | str | "auto" | "auto", "summary", "full" |
| show_activations | bool | True | Fuse activation names into preceding layer labels |
| transformer_mode | str | "block_summary" | "block_summary" or "unsupported" |
| approximate_mode | str | "warn" | "warn", "error", or "allow" |
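The figsize/dpi rule in the table is plain arithmetic; the helper below just mirrors it for illustration (it is not part of the package):

```python
def figsize_to_pixels(figsize, dpi=100.0):
    """Mirror the documented rule: width = round(w * dpi), height = round(h * dpi)."""
    w, h = figsize
    return round(w * dpi), round(h * dpi)

# nsx.figure(figsize=(12, 6), dpi=120) therefore draws on a 1440x720 canvas,
# the same as nsx.figure(width=1440, height=720); explicit width/height win.
print(figsize_to_pixels((12, 6), dpi=120))  # (1440, 720)
```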

label_mode:

  • auto — compact labels for small models; name-only for large models
  • name — layer name only (shortest; never overlaps)
  • compact — operation type + key parameters: Conv 64 k3, MaxP k2 s2, Dense 128, GAP
  • shape — shape only, no name
  • full — layer name + complete shape string; may be wide for large models

Activations following a layer are fused into the label when show_activations=True: Conv 64 k3 +ReLU, Dense 128 +GeLU. BatchNorm, LayerNorm, and Dropout appear as inline badges: +BN, +LN, +Drop 0.5.

detail_level:

  • auto — all layers for small models; grouped blocks for models with more than 12 spec layers
  • summary — groups conv/pool sequences into named blocks with metadata, e.g. Block 2 / 4×Conv k3, 128ch / Pool ↓2
  • full — every individual layer shown; intended for inspection and debugging, not screenshots
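Conceptually, the auto grouping behaves like the sketch below: show everything up to 12 spec layers, otherwise collapse consecutive conv/pool runs into blocks. This only illustrates the idea; it is not the package's actual algorithm.

```python
def group_blocks(kinds, threshold=12):
    """Conceptual sketch of detail_level='auto' (not the real implementation):
    show every layer for small models, collapse consecutive conv/pool runs
    into named blocks for larger ones."""
    if len(kinds) <= threshold:
        return list(kinds)
    blocks, run = [], 0
    for k in kinds:
        if k in ("conv", "pool"):
            run += 1
            continue
        if run:
            blocks.append(f"Block ({run} conv/pool)")
            run = 0
        blocks.append(k)
    if run:
        blocks.append(f"Block ({run} conv/pool)")
    return blocks

print(group_blocks(["conv"] * 6 + ["pool"] * 6 + ["dense"]))
```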

transformer_mode:

  • block_summary — approximate conceptual block sequence: Tokens/Input → Embedding → [MH-Attn] / Add & Norm → [FFN] / Add & Norm → [Head]. This is not exact Transformer rendering. Q/K/V projections, individual heads, exact residual paths, and tensor flow are not drawn.
  • unsupported — renders a structured HTML diagnostic card instead of a diagram, listing detected components and suggesting block_summary or debug JSON

approximate_mode:

  • warn — amber warning badge shown in HTML (default)
  • error — raises RenderError before rendering an approximate diagram
  • allow — suppresses warning badges

CLI

# Draw (output defaults to <model>.html)
neuroschemax draw model.onnx
neuroschemax draw model.onnx -o diagram.html --theme paper

# Render with full options
neuroschemax render model.onnx -o diagram.html --theme thesis --width 1600
neuroschemax render model.onnx -o diagram.svg

# Inspect and summarise
neuroschemax inspect        model.onnx
neuroschemax summarize      model.onnx
neuroschemax summarize      model.onnx --format markdown
neuroschemax recommend-view model.onnx

# JSON exports
neuroschemax export-paper-json model.onnx -o model.paper.json
neuroschemax export-debug-json model.onnx -o model.debug.json
neuroschemax export-nnsvg      model.onnx -o model.nnsvg.json

# Environment diagnostics
neuroschemax doctor

Supported inputs

| Source | Notes |
| --- | --- |
| .onnx file | Standard ONNX format |
| .json file | Manual spec JSON |
| .yaml / .yml file | Manual spec YAML |
| Python dict | Manual spec as a Python dict |
| torch.nn.Module | Requires pip install torch |
| tf.keras.Model | Requires pip install tensorflow |
| onnx.ModelProto | Pre-loaded ONNX object |

Supported outputs

| Format | API | CLI |
| --- | --- | --- |
| Standalone HTML | save_network_html | render -o .html |
| SVG (via Playwright) | save_network_svg | render -o .svg |
| Paper JSON | save_paper_json | export-paper-json |
| Debug JSON | save_debug_json | export-debug-json |
| NN-SVG JSON spec | save_nnsvg_spec | export-nnsvg |
| Text summary | summarize_model | summarize |
| Markdown summary | summarize_model(..., "markdown") | summarize --format markdown |

Diagram families and fidelity

NN-SVG supports three sequential diagram families. NeuroSchemaX selects the best fit automatically based on the model structure.

| Architecture | Rendered as | Fidelity | What is preserved |
| --- | --- | --- | --- |
| MLP / dense network | FCNN neuron columns | Exact | |
| Small CNN (≤ 3 convs) | LeNet feature maps | Exact | |
| VGG-style deep CNN | AlexNet feature maps | Exact | |
| ResNet / residual blocks | Block summary (skip collapsed) | Approximate | Skip links in debug JSON |
| U-Net / encoder-decoder | Block summary (concat collapsed) | Approximate | Decoder branches in debug JSON |
| Transformer / attention | Conceptual block sequence | Approximate | All layers in debug JSON |
| LSTM / GRU / RNN | Block sequence | Approximate | All layers in debug JSON |
| Arbitrary DAG | Not supported | | Full graph in debug JSON |

NeuroSchemaX does not claim to render arbitrary DAGs, exact ResNet skip edges, exact U-Net encoder-decoder skip topology, or exact Transformer attention flow. Models that fall outside the three sequential families are shown as honest approximate summaries with clear on-diagram markers (+skip collapsed, concat collapsed) and amber warning badges in the HTML.

ResNet summary: Stem → Residual Block N (n×Conv k3, Cch, +skip collapsed) → Head / Classifier

U-Net summary: Encoder → Bottleneck → Decoder (concat collapsed) → Segmentation Head

Transformer block summary: Tokens/Input → Embedding → [MH-Attn] / Add & Norm → [FFN] / Add & Norm → [Head]

The complete layer graph (every Add/Concat/Upsample with attributes) is preserved in export-debug-json regardless of what is shown visually.


How architecture recommendation works

info = nsx.recommend_view("model.onnx")
# {
#   "family": "alexnet",
#   "confidence": "high",
#   "is_approximate": False,
#   "reason": "Deep CNN with 8 convolutional layers, mapped to AlexNet",
#   "warnings": [],
#   "complexity_hint": "sequential"
# }

Selection rules (in priority order):

  • Attention layers → block-level LeNet view (LOW confidence, is_approximate=True)
  • Recurrent layers (LSTM/GRU) → block-level LeNet view (LOW confidence)
  • No convolutions, at least one dense layer → FCNN (HIGH confidence)
  • 1–3 conv layers → LeNet (HIGH; MEDIUM with skip/merge warning)
  • 4+ conv layers → AlexNet (HIGH; MEDIUM if skip connections present)
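A strict paper build can combine recommend_view() with approximate_mode. choose_mode below is an illustrative helper, not package API; it only assumes the result keys documented above:

```python
def choose_mode(info, strict=False):
    """Map a recommend_view() result onto an approximate_mode value.
    Illustrative helper, not part of the package."""
    if not info.get("is_approximate"):
        return "allow"                 # exact family, no badge needed
    return "error" if strict else "warn"

# Usage sketch:
#   import neuroschemax as nsx
#   info = nsx.recommend_view("model.onnx")
#   nsx.save_network_html("model.html", "model.onnx",
#                         approximate_mode=choose_mode(info, strict=True))

print(choose_mode({"is_approximate": False}))       # allow
print(choose_mode({"is_approximate": True}, True))  # error
```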

Troubleshooting

Run neuroschemax doctor first.

SVG export fails:

pip install playwright
playwright install chromium

Missing assets:

pip install --force-reinstall neuroschemax

HTML is blank: Open in a modern browser (Chrome/Firefox/Edge). Check the browser console for JS errors. Try --theme debug for extra output.

See docs/troubleshooting.md for more.


FAQ

Q: Does it work offline? Yes. Generated HTML files are fully self-contained — all JS is embedded.

Q: Can I use it in a Jupyter notebook? Yes. If fig is the last expression in a cell, _repr_html_() renders the diagram inline automatically. fig.to_html() returns the HTML string for manual display() calls. nsx.show() / fig.show() open in the browser outside notebooks.

Q: Can I use it in Google Colab? Yes, with limitations. Install with !pip install "neuroschemax[colab]". Inline rendering in Colab is limited — use fig.save_html("diagram.html") and download the file for full interactivity.

Q: My PyTorch ONNX export fails with No module named 'onnxscript'. torch >= 2.x requires onnxscript for torch.onnx.export. Install it: pip install onnxscript.

Q: Diagram shapes show ?. Shape propagation is best-effort. Common causes: dynamic batch axes (use a concrete size at export), or BatchNorm/Dropout removed by ONNX exporter in eval mode (expected — they are folded out of the computation graph).

Q: Does it support ONNX opset X? It reads graph topology and layer attributes regardless of opset version. Unsupported op types are normalised to LayerKind.UNKNOWN.

Q: Can I customise the diagram further? Override any rendering option via kwargs. For pixel-level control, export the NN-SVG JSON spec and load it in the NN-SVG web app.

Q: Why is confidence MEDIUM/LOW? The model has features that don't map perfectly to the chosen NN-SVG family. The diagram is still generated; read the warnings for specifics.

Q: Why does the diagram show fewer layers than my model has? Large models are grouped into conv/pool blocks by default (detail_level="auto"). Pass detail_level="full" to see every individual layer. Note that full mode is intended for inspection; for screenshots, detail_level="summary" or the default produces more readable output.


Development

git clone https://github.com/arashsajjadi/NeuroSchemaX.git
cd NeuroSchemaX
pip install -e ".[dev]"
pytest tests/ -v
ruff check src/ tests/

See CONTRIBUTING.md.


Author

Arash Sajjadi — maintainer and author.

GitHub: arashsajjadi/NeuroSchemaX


License

MIT — see LICENSE.

NN-SVG is by Alex Lenail and also MIT-licensed. NeuroSchemaX integrates it as an embedded rendering engine.
