
A domain-specific language and debugger for neural networks

Project description


⚠️ WARNING: Neural-dsl is a work-in-progress DSL and debugger; bugs exist and feedback is welcome! This project is under active development and not yet production-ready.

Neural: A Neural Network Programming Language




Pain Points Solved

Neural addresses deep learning challenges across Criticality (how essential) and Impact Scope (how transformative):

| Criticality \ Impact | Low Impact | Medium Impact | High Impact |
|---|---|---|---|
| **High** | | | Shape Mismatches: pre-runtime validation stops runtime errors. Debugging Complexity: real-time tracing & anomaly detection. |
| **Medium** | Steep Learning Curve: no-code GUI eases onboarding. | Framework Switching: one-flag backend swaps. | HPO Inconsistency: unified tuning across frameworks. |
| **Low** | Boilerplate: clean DSL syntax saves time. | Model Insight: FLOPs & diagrams. | Config Fragmentation: centralized setup. |

Why It Matters

  • Core Value: Fix critical blockers like shape errors and debugging woes with game-changing tools.
  • Strategic Edge: Streamline framework switches and HPO for big wins.
  • User-Friendly: Lower barriers and enhance workflows with practical features.

Neural is a domain-specific language (DSL) for defining, training, debugging, and deploying neural networks, whether via code, the CLI, or a no-code interface. With declarative syntax, cross-framework support, and built-in execution tracing (NeuralDbg), it simplifies deep learning development.

Feedback

Help us improve Neural DSL! Share your feedback: Typeform link.

Features

  • YAML-like Syntax: Define models intuitively without framework boilerplate.
  • Shape Propagation: Catch dimension mismatches before runtime.
    • ✅ Interactive shape flow diagrams included.
  • Multi-Framework HPO: Optimize hyperparameters for both PyTorch and TensorFlow with a single DSL config (#434).
  • Multi-Backend Export: Generate code for TensorFlow, PyTorch, or ONNX.
  • Training Orchestration: Configure optimizers, schedulers, and metrics in one place.
  • Visual Debugging: Render interactive 3D architecture diagrams.
  • Extensible: Add custom layers/losses via Python plugins.
  • NeuralDbg: Built-in Neural Network Debugger and Visualizer.
  • No-Code Interface: Quick prototyping for researchers and an educational, accessible tool for beginners.

NeuralDbg: Built-in Neural Network Debugger

NeuralDbg provides real-time execution tracing, profiling, and debugging, allowing you to visualize and analyze deep learning models in action.

✅ Real-Time Execution Monitoring – Track activations, gradients, memory usage, and FLOPs.
✅ Shape Propagation Debugging – Visualize tensor transformations at each layer.
✅ Gradient Flow Analysis – Detect vanishing & exploding gradients.
✅ Dead Neuron Detection – Identify inactive neurons in deep networks.
✅ Anomaly Detection – Spot NaNs, extreme activations, and weight explosions.
✅ Step Debugging Mode – Pause execution and inspect tensors manually.

Installation

Clone the repository

git clone https://github.com/yourusername/neural.git
cd neural

Create a virtual environment (recommended)

python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

Install dependencies

pip install -r requirements.txt
pip install neural-dsl

See the v0.2.5 release for the latest HPO optimizer fixes and improvements.

Prerequisites: Python 3.8+, pip

Quick Start

1. Define a Model

Create mnist.neural:

network MNISTClassifier {
  input: (28, 28, 1)  # Channels-last format
  layers:
    Conv2D(filters=32, kernel_size=(3,3), activation="relu")
    MaxPooling2D(pool_size=(2,2))
    Flatten()
    Dense(units=128, activation="relu")
    Dropout(rate=0.5)
    Output(units=10, activation="softmax")

  loss: "sparse_categorical_crossentropy"
  optimizer: Adam(learning_rate=0.001)
  metrics: ["accuracy"]

  train {
    epochs: 15
    batch_size: 64
    validation_split: 0.2
  }
}

2. Run or Compile the Model

neural run mnist.neural --backend tensorflow --output mnist_tf.py
# Or for PyTorch:
neural run mnist.neural --backend pytorch --output mnist_torch.py

3. Visualize Architecture

neural visualize mnist.neural --format png

This will create architecture.png, shape_propagation.html, and tensor_flow.html for inspecting the network structure and shape propagation.

MNIST Architecture
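The shape propagation that these visualizations report can be sketched by hand. The following is an illustrative Python sketch (not Neural's actual propagation engine) tracing the MNIST model above, assuming 'valid' padding with stride 1 for Conv2D and a stride equal to the pool size for MaxPooling2D:

```python
# Hand-rolled shape propagation for the MNIST example above.
# An illustrative sketch, not Neural's actual engine.

def conv2d(shape, filters, kernel_size):
    # 'valid' padding, stride 1
    h, w, _ = shape
    kh, kw = kernel_size
    return (h - kh + 1, w - kw + 1, filters)

def maxpool2d(shape, pool_size):
    # stride equal to pool_size
    h, w, c = shape
    ph, pw = pool_size
    return (h // ph, w // pw, c)

def flatten(shape):
    n = 1
    for d in shape:
        n *= d
    return (n,)

def dense(shape, units):
    return (units,)

shape = (28, 28, 1)                   # input, channels-last
shape = conv2d(shape, 32, (3, 3))     # (26, 26, 32)
shape = maxpool2d(shape, (2, 2))      # (13, 13, 32)
shape = flatten(shape)                # (5408,)
shape = dense(shape, 128)             # (128,)
shape = dense(shape, 10)              # (10,)
```

Checking each transition like this before any framework code runs is what lets shape mismatches surface at compile time rather than at runtime.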

4. Debug with NeuralDbg

neural debug mnist.neural

Open your browser to http://localhost:8050 to monitor execution traces, gradients, and anomalies interactively.

5. Use the No-Code Interface

neural --no_code

Open your browser to http://localhost:8051 to build and compile models via a graphical interface.


🛠 Debugging with NeuralDbg

🔹 1️⃣ Start Real-Time Execution Tracing

python neural.py debug mnist.neural

Features:
✅ Layer-wise execution trace
✅ Memory & FLOP profiling
✅ Live performance monitoring

🔹 2️⃣ Analyze Gradient Flow

python neural.py debug --gradients mnist.neural

Detect vanishing/exploding gradients with interactive charts.
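As an illustration of the idea (with made-up thresholds and layer names, not NeuralDbg's actual heuristics), a gradient-flow check can classify each layer by the norm of its gradients:

```python
# Sketch of gradient-flow analysis: flag layers whose gradient norm
# suggests vanishing or exploding gradients. Thresholds and layer
# names below are illustrative assumptions.

def classify_gradient(norm, vanish_below=1e-6, explode_above=1e3):
    if norm < vanish_below:
        return "vanishing"
    if norm > explode_above:
        return "exploding"
    return "ok"

# Hypothetical per-layer gradient norms recorded during a backward pass
grad_norms = {"conv1": 3.2e-7, "dense1": 0.8, "output": 4.1e4}
report = {layer: classify_gradient(n) for layer, n in grad_norms.items()}
```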

🔹 3️⃣ Identify Dead Neurons

python neural.py debug --dead-neurons mnist.neural

🛠 Find layers with inactive neurons (common in ReLU networks).
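The underlying idea can be sketched in a few lines: a post-ReLU unit is considered dead if it never activates across a batch. The toy data below is illustrative; NeuralDbg works on recorded activation traces:

```python
# Sketch of dead-neuron detection: a ReLU unit is "dead" if its
# activation is zero for every sample in a batch.

def dead_neurons(activations):
    """activations: list of per-sample activation vectors (post-ReLU)."""
    n_units = len(activations[0])
    return [j for j in range(n_units)
            if all(sample[j] == 0.0 for sample in activations)]

# Toy batch of activations for a 4-unit layer
batch = [
    [0.0, 1.2, 0.0, 0.3],
    [0.0, 0.0, 0.0, 0.9],
    [0.0, 2.1, 0.0, 0.0],
]
# Units 0 and 2 never fire, so they are reported as dead.
```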

🔹 4️⃣ Detect Training Anomalies

python neural.py debug --anomalies mnist.neural

Flag NaNs, weight explosions, and extreme activations.
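A minimal sketch of this kind of anomaly check (the "extreme" threshold is an illustrative assumption, not NeuralDbg's actual cutoff):

```python
# Sketch of anomaly detection: flag NaNs and extreme activation
# magnitudes in a stream of recorded values.
import math

def find_anomalies(values, extreme=1e6):
    anomalies = []
    for i, v in enumerate(values):
        if math.isnan(v):
            anomalies.append((i, "nan"))
        elif abs(v) > extreme:
            anomalies.append((i, "extreme"))
    return anomalies

# Toy activation trace with one NaN and one weight explosion
acts = [0.5, float("nan"), 3.0, 2.5e7]
```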

🔹 5️⃣ Step Debugging (Interactive Tensor Inspection)

python neural.py debug --step mnist.neural

🔍 Pause execution at any layer and inspect tensors manually.
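One way to picture step debugging is a generator that runs the model layer by layer and pauses after each one so intermediate values can be inspected. The toy "layers" below are plain functions over lists, not real tensors:

```python
# Sketch of step debugging: yield after each layer so the caller can
# inspect the intermediate output before resuming execution.

def step_debug(layers, x):
    for name, fn in layers:
        x = fn(x)
        yield name, x   # pause point: caller inspects x, then resumes

relu = lambda v: [max(0.0, e) for e in v]
double = lambda v: [2 * e for e in v]

steps = step_debug([("relu", relu), ("double", double)], [-1.0, 2.0])
```

Each call to `next(steps)` advances exactly one layer, which is the essence of the pause-and-inspect workflow.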


Why Neural?

| Feature | Neural | Raw TensorFlow/PyTorch |
|---|---|---|
| Shape Validation | ✅ Auto | ❌ Manual |
| Framework Switching | 1-line flag | Days of rewriting |
| Architecture Diagrams | Built-in | Third-party tools |
| Training Config | Unified | Fragmented configs |

🔄 Cross-Framework Code Generation

| Neural DSL | TensorFlow Output | PyTorch Output |
|---|---|---|
| `Conv2D(filters=32)` | `tf.keras.layers.Conv2D(32)` | `nn.Conv2d(in_channels, 32)` |
| `Dense(units=128)` | `tf.keras.layers.Dense(128)` | `nn.Linear(in_features, 128)` |
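A toy illustration of how such a mapping could work (the placeholder `in_channels`/`in_features` strings mirror the table above; Neural's real code generator also resolves these from shape propagation):

```python
# Toy cross-framework code generation: map a parsed DSL layer to a
# framework-specific constructor string. Illustrative only.

def emit(layer, backend):
    name, params = layer
    if name == "Conv2D":
        return (f"tf.keras.layers.Conv2D({params['filters']})"
                if backend == "tensorflow"
                else f"nn.Conv2d(in_channels, {params['filters']})")
    if name == "Dense":
        return (f"tf.keras.layers.Dense({params['units']})"
                if backend == "tensorflow"
                else f"nn.Linear(in_features, {params['units']})")
    raise ValueError(f"unsupported layer: {name}")
```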

Benchmarks

| Task | Neural | Baseline (TF/PyTorch) |
|---|---|---|
| MNIST Training | 1.2x ⚡ | 1.0x |
| Debugging Setup | 5 min 🕒 | 2 hr+ |

Documentation

Explore advanced features:

Examples

Explore common use cases in examples/ with step-by-step guides in docs/examples/:

🕸 Architecture Graphs (zoom in for detail)

Class diagram · Package diagram


Contributing

We welcome contributions! See our:

To set up a development environment:

git clone https://github.com/yourusername/neural.git
cd neural
pip install -r requirements-dev.txt  # Includes linter, formatter, etc.
pre-commit install  # Auto-format code on commit

Star History

Star History Chart

Support

If you find Neural useful, please give it a star ⭐️ and share the project with your friends. Every star and share helps us reach more developers, grow our community, and increase our chances of making a real difference.

Community


Note: See v0.2.5 release notes for latest fixes and improvements!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_dsl-0.2.5.tar.gz (81.7 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neural_dsl-0.2.5-py3-none-any.whl (74.9 kB view details)

Uploaded Python 3

File details

Details for the file neural_dsl-0.2.5.tar.gz.

File metadata

  • Download URL: neural_dsl-0.2.5.tar.gz
  • Upload date:
  • Size: 81.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for neural_dsl-0.2.5.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9b46d968c94be2d3040ed4f6cfdec4bc5b480794983e8ab3d9aa7bba958adb30 |
| MD5 | 09fc29c17fd4291970cf86c2af1facd6 |
| BLAKE2b-256 | 2706fdb9397597b85159ca515eb91e9461338f0e7940d68ee0f083a3a18391e4 |

See more details on using hashes here.

File details

Details for the file neural_dsl-0.2.5-py3-none-any.whl.

File metadata

  • Download URL: neural_dsl-0.2.5-py3-none-any.whl
  • Upload date:
  • Size: 74.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for neural_dsl-0.2.5-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 78f597b4bb25157e3fcd1fa469c73fd268644842c8c3bc8ceb8fb0108e609b1a |
| MD5 | 9095218df4f95528f5ba65ec9a1c0421 |
| BLAKE2b-256 | 539592c6b9ff91fd32f287bec9ce7e2f222adc99a24fbf7b44eee111bc85d812 |

See more details on using hashes here.
