Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations

Project description

NeuroMANCER v1.5.2


Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER) is an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, physics-informed system identification, and parametric model-based optimal control. NeuroMANCER is written in PyTorch and allows for systematic integration of machine learning with scientific computing for creating end-to-end differentiable models and algorithms embedded with prior knowledge and physics.


Table of Contents

  1. Key Features
  2. What's New in v1.5.2
  3. Getting Started
  4. Domain Examples
  5. Tutorials
  6. Documentation and User Guides
  7. Installation

Key Features

  • Learn To Model, Learn To Control, Learn To Optimize: Our library is built to provide end users a multitude of tools to solve Learning To Optimize (L2O), Learning To Model (L2M), and Learning To Control (L2C) tasks. Tackle advanced constrained parametric optimization, model fluid dynamics using physics-informed neural networks, or learn how to control indoor air temperature in buildings to maximize building efficiency.
  • Symbolic programming interface makes it easy to define and embed prior knowledge of physics, domain expertise, and constraints into these learning paradigms.
  • Comprehensive Learning Tools: Access a wide array of tutorials and example applications—from basic system identification to advanced predictive control—making it easy for users to learn and apply NeuroMANCER to real-world problems.
  • State-of-the-art methods: NeuroMANCER is up-to-date with SOTA methods such as Kolmogorov-Arnold networks (KANs) for function approximation, neural ordinary differential equations (NODEs) and sparse identification of nonlinear dynamics (SINDy) for learning to model dynamical systems, and differentiable convex optimization layers for safety constraints in learning to optimize and learning to control.

What's New in v1.5.2

Load Forecasting Capabilities and Transformers

We expand our energy systems domain examples with load forecasting for buildings. We showcase time-series modeling and forecasting on the Short-Term Electricity Load Forecasting (Panama case study) dataset. We demonstrate forecasting capabilities using a Transformer model, a new block added to our neural blocks module (blocks.py), as well as other standard blocks. We also utilize historical weather data to assist in energy forecasting.
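
To give a flavor of the approach, here is a minimal plain-PyTorch sketch of Transformer-based load forecasting from a window of past load and weather features. The class and parameter names below are illustrative and are not the interface of the new Transformer block in blocks.py:

import torch
import torch.nn as nn

# illustrative sketch: encode a lookback window of load + weather features
# with a Transformer encoder and regress the next `horizon` load values
class LoadForecaster(nn.Module):
    def __init__(self, nfeatures=4, d_model=64, horizon=24):
        super().__init__()
        self.embed = nn.Linear(nfeatures, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, past):                  # past: (batch, lookback, nfeatures)
        z = self.encoder(self.embed(past))
        return self.head(z[:, -1, :])         # forecast from the last encoded step

model = LoadForecaster()
forecast = model(torch.randn(8, 168, 4))      # one week of hourly history -> 24 h ahead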

Finite-Basis Kolmogorov-Arnold Networks

Kolmogorov-Arnold networks (KANs) have recently attracted attention as an alternative to multilayer perceptrons (MLPs) for scientific machine learning. However, KANs can be expensive to train, even for relatively small networks. We have implemented a finite-basis KAN (FBKAN) block, a domain decomposition method for KANs inspired by finite-basis physics-informed neural networks (FBPINNs), which allows several small KANs to be trained in parallel to give accurate solutions for multiscale problems.
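
The core idea can be sketched in a few lines: split the input domain into overlapping subdomains, weight each subnetwork by a smooth window function normalized to a partition of unity, and sum the windowed outputs. For brevity this hypothetical sketch uses small MLPs in place of KANs; the names and window shapes are ours, not the FBKAN block's API:

import torch
import torch.nn as nn

# conceptual finite-basis sketch: several small subnetworks, each active on a
# subdomain via smooth, normalized window functions (MLPs stand in for KANs)
class FiniteBasisNet(nn.Module):
    def __init__(self, centers, width=0.6, hidden=16):
        super().__init__()
        self.centers = torch.tensor(centers)
        self.width = width
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in centers
        )

    def window(self, x, c):
        # smooth bump centered at subdomain center c; windows overlap
        return torch.exp(-((x - c) / self.width) ** 2)

    def forward(self, x):
        w = torch.stack([self.window(x, c) for c in self.centers], dim=-1)
        w = w / w.sum(dim=-1, keepdim=True)          # partition of unity
        y = torch.stack([net(x) for net in self.subnets], dim=-1)
        return (w * y).sum(dim=-1)                   # windowed sum of subnets

model = FiniteBasisNet(centers=[0.0, 0.5, 1.0])
y = model(torch.linspace(0, 1, 128).unsqueeze(-1))   # (128, 1)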

New Colab Examples:

Load Forecasting

Function Approximation with Kolmogorov-Arnold Networks

Getting Started

pip install neuromancer

An extensive set of tutorials can be found in the examples folder and in the Tutorials section below. Interactive notebook versions of the examples are available on Google Colab, so you can test out NeuroMANCER functionality before cloning the repository and setting up an environment.

The notebooks below introduce the core abstractions of the NeuroMANCER library, in particular our symbolic programming interface and Node classes.

Symbolic Variables, Nodes, Constraints, Objectives, and Systems Classes

  • Open In Colab Part 1: Linear regression in PyTorch vs NeuroMANCER.

  • Open In Colab Part 2: NeuroMANCER syntax tutorial: variables, constraints, and objectives.

  • Open In Colab Part 3: NeuroMANCER syntax tutorial: modules, Node, and System class.

PyTorch Lightning Integration

We have integrated PyTorch Lightning to streamline code, enable custom training logic, support GPU and multi-GPU setups, and handle large-scale, memory-intensive learning tasks. A generic sketch of the wrapping pattern follows the list below.

  • Open In Colab Part 1: Lightning Integration Basics.
  • Open In Colab Part 2: Lightning Advanced Features and Automatic GPU Support.
  • Open In Colab Part 4: Defining Custom Training Logic via Lightning Modularized Code.
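
As a rough sketch of the wrapping pattern (NeuroMANCER ships its own Lightning utilities covered in the notebooks above; the `problem` object and its 'train_loss' output key below are assumptions):

import torch
import lightning.pytorch as pl

# generic Lightning wrapper sketch around a NeuroMANCER-style problem object
class LitProblem(pl.LightningModule):
    def __init__(self, problem, lr=1e-3):
        super().__init__()
        self.problem = problem
        self.lr = lr

    def training_step(self, batch, batch_idx):
        output = self.problem(batch)      # forward pass through nodes + loss
        loss = output['train_loss']       # assumed key; depends on dataset name
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# Lightning handles devices automatically
trainer = pl.Trainer(max_epochs=100, accelerator='auto', devices='auto')
# trainer.fit(LitProblem(problem), train_loader)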

Example

A quick example of how to solve a parametric constrained optimization problem using NeuroMANCER, leveraging our symbolic programming interface and the Node, Variable, Blocks, SLiM, and PenaltyLoss classes.

# Neuromancer syntax example for constrained optimization
import neuromancer as nm
import torch 

# define neural architecture 
func = nm.modules.blocks.MLP(insize=1, outsize=2, 
                             linear_map=nm.slim.maps['linear'], 
                             nonlin=torch.nn.ReLU, hsizes=[80] * 4)
# wrap the neural net into a symbolic representation via the Node class: sol_map(p) -> x
sol_map = nm.system.Node(func, ['p'], ['x'], name='map')

# define decision variables
x = nm.constraint.variable("x")[:, [0]]
y = nm.constraint.variable("x")[:, [1]]
# problem parameters sampled in the dataset
p = nm.constraint.variable('p')

# define objective function
f = (1-x)**2 + (y-x**2)**2
obj = f.minimize(weight=1.0)

# define constraints
con_1 = 100.*(x >= y)
con_2 = 100.*(x**2+y**2 <= p**2)

# create penalty method-based loss function
loss = nm.loss.PenaltyLoss(objectives=[obj], constraints=[con_1, con_2])
# construct differentiable constrained optimization problem
problem = nm.problem.Problem(nodes=[sol_map], loss=loss)
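
To actually train this problem, one samples the parameter p into a dataset and runs a gradient-based trainer. Below is a minimal sketch using the library's DictDataset and Trainer utilities; the sampling range is illustrative, and exact constructor signatures may differ between versions:

import torch
from torch.utils.data import DataLoader
from neuromancer.dataset import DictDataset
from neuromancer.trainer import Trainer

# sample the problem parameter p; the key 'p' must match the Node input above
train_data = DictDataset({'p': 1.0 + torch.rand(5000, 1)}, name='train')
dev_data = DictDataset({'p': 1.0 + torch.rand(500, 1)}, name='dev')
train_loader = DataLoader(train_data, batch_size=64, shuffle=True,
                          collate_fn=train_data.collate_fn)
dev_loader = DataLoader(dev_data, batch_size=64, collate_fn=dev_data.collate_fn)

# standard gradient-based training of the solution map
optimizer = torch.optim.AdamW(problem.parameters(), lr=0.001)
trainer = Trainer(problem, train_loader, dev_loader, optimizer=optimizer, epochs=200)
best_model = trainer.train()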

Domain Examples

NeuroMANCER is built to tackle a variety of domain-specific modeling and control problems using its array of methods. Here we show how to model and control building energy systems, as well as apply load forecasting techniques.

For more in-depth coverage of our methods, please see our general Tutorials section below.

Energy Systems

  • Open In Colab Learning Building Thermal Dynamics using Neural ODEs
  • Open In Colab Multi-zone Building Thermal Dynamics Resistance-Capacitance network with Neural ODEs
  • Open In Colab Learning Swing Equation Dynamics using Neural ODEs
  • Open In Colab Learning to Control Indoor Air Temperature in Buildings
  • Open In Colab Energy Load Forecasting for the Air Handling System of an Office Building with MLP and CNN models
  • Open In Colab Energy Load Forecasting for Buildings with a Transformer Model
  • Open In Colab Learning to Control a Pumped-storage Hydroelectricity System

Tutorials on Methods for Modeling, Optimization, and Control

Learning to Optimize (L2O) Parametric Programming

Neuromancer allows you to formulate a broad class of parametric optimization problems and leverage machine learning to learn their solutions. More information on Parametric programming
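
Schematically (the exact loss depends on the chosen penalty formulation), a solution map π_θ is trained to map problem parameters to near-optimal, near-feasible decisions:

    min_θ  E_{p ∼ P} [ f(π_θ(p), p) + Σ_i λ_i · max(0, g_i(π_θ(p), p))² ]

where f is the objective, g_i(x, p) ≤ 0 are the constraints, p is sampled from a parameter distribution P, and λ_i are penalty weights; this is the role played by PenaltyLoss in the example above.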

  • Open In Colab Part 1: Learning to solve a constrained optimization problem.

  • Open In Colab Part 2: Learning to solve a quadratically-constrained optimization problem.

  • Open In Colab Part 3: Learning to solve a set of 2D constrained optimization problems.

  • Open In Colab Part 4: Learning to solve a constrained optimization problem with the projected gradient.

  • Open In Colab Part 5: Using Cvxpylayers for differentiable projection onto the polytopic feasible set.

  • Open In Colab Part 6: Learning to optimize with metric learning for Operator Splitting layers.

Learning to Control (L2C)

Neuromancer allows you to learn control policies for a full spectrum of white-, grey-, and black-box dynamical systems, subject to a choice of constraints and objective functions. More information on Differentiable Predictive Control
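
As a taste of the workflow, here is a hypothetical closed-loop setup: a neural policy and a (here, known linear) dynamics model are each wrapped as a Node and composed into a System that is rolled out over a prediction horizon. Dimensions and dynamics are illustrative, and details such as how the rollout horizon is supplied follow the notebooks above:

import torch
import neuromancer as nm

nx, nu = 2, 1   # state and control input dimensions

# neural control policy: u = policy(x, r), conditioned on state and reference
net = nm.modules.blocks.MLP(insize=2 * nx, outsize=nu, hsizes=[32, 32])
policy = nm.system.Node(net, ['x', 'r'], ['u'], name='policy')

# known linear dynamics x_{k+1} = A x_k + B u_k wrapped as a Node
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])
B = torch.tensor([[0.0], [0.1]])
dynamics = nm.system.Node(lambda x, u: x @ A.T + u @ B.T, ['x', 'u'], ['x'], name='dynamics')

# compose policy and dynamics into a closed-loop rollout; constraints and
# objectives on variables 'x' and 'u' are then attached as in the example above
cl_system = nm.system.System([policy, dynamics])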

  • Open In Colab Part 1: Learning to stabilize a linear dynamical system.
  • Open In Colab Part 2: Learning to stabilize a nonlinear differential equation.
  • Open In Colab Part 3: Learning to control a nonlinear differential equation.
  • Open In Colab Part 4: Learning neural ODE model and control policy for an unknown dynamical system.
  • Open In Colab Part 5: Learning neural Lyapunov function for a nonlinear dynamical system.

Function Approximation

Neuromancer is up-to-date with state-of-the-art methods. Here we showcase the powerful Kolmogorov-Arnold networks (KANs). More information on Kolmogorov-Arnold Networks

  • Open In Colab Part 1: A comparison of KANs and FBKANs in learning a 1D multiscale function with noise
  • Open In Colab Part 2: A comparison of KANs and FBKANs in learning a 2D multiscale function with noise

Neural Operators

Neuromancer allows one to combine machine learning with prior physics and domain knowledge to construct mathematical, differentiable models of dynamical systems from measured observations of system behavior. More information on System ID via Neural State Space Models and ODEs
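
To make the NODE idea concrete, here is a minimal plain-PyTorch sketch (not the NeuroMANCER API): a neural network parametrizes the vector field dx/dt = f_theta(x), which is integrated with a fixed-step RK4 scheme so the rolled-out trajectory stays differentiable in theta:

import torch
import torch.nn as nn

class NeuralODE(nn.Module):
    def __init__(self, nx, hidden=64):
        super().__init__()
        # learned vector field f_theta: dx/dt = f(x)
        self.f = nn.Sequential(nn.Linear(nx, hidden), nn.GELU(), nn.Linear(hidden, nx))

    def rk4_step(self, x, h):
        # classic fourth-order Runge-Kutta step of size h
        k1 = self.f(x)
        k2 = self.f(x + 0.5 * h * k1)
        k3 = self.f(x + 0.5 * h * k2)
        k4 = self.f(x + h * k3)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def forward(self, x0, nsteps, h=0.01):
        # roll the vector field forward and return the full trajectory
        traj = [x0]
        for _ in range(nsteps):
            traj.append(self.rk4_step(traj[-1], h))
        return torch.stack(traj, dim=1)   # (batch, nsteps + 1, nx)

model = NeuralODE(nx=2)
trajectory = model(torch.randn(16, 2), nsteps=100)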

  • Open In Colab Part 1: Neural Ordinary Differential Equations (NODEs)
  • Open In Colab Part 2: Parameter estimation of ODE system
  • Open In Colab Part 3: Universal Differential Equations (UDEs)
  • Open In Colab Part 4: NODEs with exogenous inputs
  • Open In Colab Part 5: Neural State Space Models (NSSMs) with exogenous inputs
  • Open In Colab Part 6: Data-driven modeling of resistance-capacitance (RC) network ODEs
  • Open In Colab Part 7: Deep Koopman operator
  • Open In Colab Part 8: Control-oriented Deep Koopman operator
  • Open In Colab Part 9: Sparse Identification of Nonlinear Dynamics (SINDy)

Physics-Informed Neural Networks (PINNs)

Neuromancer's symbolic programming design is well suited to formulating and solving PINNs. More information on PINNs
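
The essential mechanism is computing PDE residuals on the network output with automatic differentiation. Here is a minimal plain-PyTorch sketch for the 1D diffusion equation u_t = D u_xx; the collocation points and the value of D are illustrative:

import torch
import torch.nn as nn

# surrogate u(x, t) as a small fully connected network
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
D = 0.1

# random collocation points; columns are x and t
xt = torch.rand(1024, 2, requires_grad=True)
u = net(xt)

# first derivatives u_x and u_t via autograd
grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
u_x, u_t = grads[:, :1], grads[:, 1:]
# second derivative u_xx
u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]

# PDE residual loss; boundary/initial-condition losses are added in practice
pde_residual = u_t - D * u_xx
pde_loss = (pde_residual ** 2).mean()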

  • Open In Colab Part 1: Diffusion Equation
  • Open In Colab Part 2: Burgers' Equation
  • Open In Colab Part 3: Burgers' Equation w/ Parameter Estimation (Inverse Problem)
  • Open In Colab Part 4: Laplace's Equation (steady-state)
  • Open In Colab Part 5: Damped Pendulum (stacked PINN)
  • Open In Colab Part 6: Navier-Stokes equation (lid-driven cavity flow, steady-state, KAN)

Stochastic Differential Equations (SDEs)

Neuromancer has been integrated with TorchSDE to handle stochastic dynamical systems. More information on SDEs
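
For orientation, here is a minimal stand-alone TorchSDE sketch (not NeuroMANCER-specific): a neural SDE with learned drift f and diagonal diffusion g, integrated with torchsde.sdeint:

import torch
import torch.nn as nn
import torchsde

class NeuralSDE(nn.Module):
    noise_type = 'diagonal'
    sde_type = 'ito'

    def __init__(self, ny=2, hidden=32):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(ny, hidden), nn.Tanh(), nn.Linear(hidden, ny))
        self.diffusion = nn.Sequential(nn.Linear(ny, hidden), nn.Tanh(), nn.Linear(hidden, ny))

    def f(self, t, y):   # drift term
        return self.drift(y)

    def g(self, t, y):   # diagonal diffusion term
        return self.diffusion(y)

sde = NeuralSDE()
y0 = torch.zeros(16, 2)                 # (batch, state)
ts = torch.linspace(0.0, 1.0, 50)
ys = torchsde.sdeint(sde, y0, ts)       # (time, batch, state)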

  • Open In Colab LatentSDEs: "System Identification" of Stochastic Processes using Neuromancer x TorchSDE

Documentation and User Guides

The documentation for the library can be found online. There is also an introduction video covering core features of the library.

For more information, including developer resources, please see our Developer and User Guide

Installation

Simply run

pip install neuromancer

For manual installation, please refer to the Installation Instructions

Community Information

We welcome contributions and feedback from the open-source community!

Contributions, Discussions, and Issues

Please read the Community Development Guidelines for further information on contributions, discussions, and issues.

Release notes

See the Release notes documenting new features.

License

NeuroMANCER is released under a BSD license. See the license file for further details.

Publications

Cite as

@article{Neuromancer2023,
  title = {{NeuroMANCER: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations}},
  author = {Drgona, Jan and Tuor, Aaron and Koch, James and Shapiro, Madelyn and Jacob, Bruno and Vrabie, Draguna},
  url = {https://github.com/pnnl/neuromancer},
  year = {2023}
}

Development team

Active core developers: Jan Drgona, Rahul Birmiwal, Bruno Jacob
Notable contributors: Aaron Tuor, Madelyn Shapiro, James Koch, Seth Briney, Bo Tang, Ethan King, Elliot Skomski, Zhao Chen, Christian Møldrup Legaard
Scientific advisors: Draguna Vrabie, Panos Stinis


Acknowledgments

This research was partially supported by the Mathematics for Artificial Reasoning in Science (MARS) and Data Model Convergence (DMC) initiatives via the Laboratory Directed Research and Development (LDRD) investments at Pacific Northwest National Laboratory (PNNL), by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research's “Data-Driven Decision Control for Complex Systems (DnC2S)” project, and through the Energy Efficiency and Renewable Energy, Building Technologies Office under the “Dynamic decarbonization through autonomous physics-centric deep learning and optimization of building operations” and the “Advancing Market-Ready Building Energy Management by Cost-Effective Differentiable Predictive Control” projects. This project was also supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Uncertainty Quantification for Multifidelity Operator Learning (MOLUcQ) project (Project No. 81739). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neuromancer-1.5.2.tar.gz (112.1 MB)

Uploaded Source

Built Distribution

neuromancer-1.5.2-py3-none-any.whl (177.0 kB)

Uploaded Python 3

File details

Details for the file neuromancer-1.5.2.tar.gz.

File metadata

  • Download URL: neuromancer-1.5.2.tar.gz
  • Size: 112.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.4

File hashes

Hashes for neuromancer-1.5.2.tar.gz
Algorithm Hash digest
SHA256 a8163ad92aa09ca2dc5e07baa4c1a957ac1acac1306c2d9db3d6c811b9ef99c5
MD5 8941e96b9507c697b945298e0aab119a
BLAKE2b-256 c95d5eb20f67f8527ae34fbf14eb6840bafc58a8597371a2b6da78260c7d9f6a

See more details on using hashes here.

File details

Details for the file neuromancer-1.5.2-py3-none-any.whl.

File metadata

  • Download URL: neuromancer-1.5.2-py3-none-any.whl
  • Size: 177.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.4

File hashes

Hashes for neuromancer-1.5.2-py3-none-any.whl
Algorithm Hash digest
SHA256 a6efdc0b921dce6fbc15d1095a62244186288e6384dfba15bb027cfae7d946d2
MD5 7e0e20e92fc2927dbc20997b2e76509d
BLAKE2b-256 d75298dfa2b9a197a2de146823ec5d993dab72bea0bfe6b44a4e608034a88385

See more details on using hashes here.
