
Integrate PyTorch models into CasADi graphs.

Project description



Learning 4 CasADi Framework

L4CasADi enables the seamless integration of PyTorch-learned models with CasADi for efficient and potentially hardware-accelerated numerical optimization. The only requirement on the PyTorch model is to be traceable and differentiable.

Two L4CasADi examples: a collision-free minimum-snap trajectory through a NeRF and energy-efficient fish navigation in turbulent flow (interactive Colab notebooks are available for both).

arXiv: Learning for CasADi: Data-driven Models in Numerical Optimization

Talk: YouTube

L4CasADi v2 Breaking Changes

After feedback from the first use cases, L4CasADi v2 is designed with efficiency and simplicity in mind.

This leads to the following breaking changes:

  • L4CasADi v2 can leverage PyTorch's batching capabilities for increased efficiency. When passing batched=True, L4CasADi treats the first input dimension as the batch dimension; first- and second-order derivatives across elements of this dimension are therefore assumed to be sparse-zero. To make use of this, instead of making multiple calls to an L4CasADi function in your CasADi program, batch all inputs together and make a single L4CasADi call. An example of this can be seen when comparing the non-batched NeRF example with the batched NeRF example, which is faster by a factor of 5-10x.
  • L4CasADi v2 will no longer change the shape of an input, as this was a source of confusion. The tensor forwarded to the PyTorch model has exactly the dimensions of the CasADi input variable. You are responsible for making sure that the PyTorch model handles a two-dimensional input matrix! Accordingly, the parameter model_expects_batch_dim has been removed.
  • By default, L4CasADi v2 does not provide the Hessian but the Jacobian of the adjoint. This is sufficient for many optimization problems. However, you can explicitly request the generation of the Hessian by passing generate_jac_jac=True. A minimal sketch of these v2 options follows this list.
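The following sketch illustrates the batched=True and generate_jac_jac=True options described above; the torch.nn.Linear model and the input sizes are illustrative assumptions, not part of the library.

import casadi as cs
import torch
import l4casadi as l4c

# Illustrative PyTorch model (any traceable, differentiable model works).
pytorch_model = torch.nn.Linear(2, 1)

# batched=True: the first input dimension is treated as the batch dimension,
# so derivatives across batch elements are assumed to be sparse-zero.
# generate_jac_jac=True: explicitly request Hessian generation (not generated by default in v2).
l4c_model = l4c.L4CasADi(pytorch_model, batched=True, generate_jac_jac=True)

# One call on a batch of 10 points instead of 10 separate L4CasADi calls.
x = cs.MX.sym('x', 10, 2)
y = l4c_model(x)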

Citing

If you use this framework, please cite the following two papers:

@article{salzmann2023neural,
  title={Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms},
  author={Salzmann, Tim and Kaufmann, Elia and Arrizabalaga, Jon and Pavone, Marco and Scaramuzza, Davide and Ryll, Markus},
  journal={IEEE Robotics and Automation Letters},
  doi={10.1109/LRA.2023.3246839},
  year={2023}
}
@inproceedings{salzmann2024l4casadi,
  title={Learning for CasADi: Data-driven Models in Numerical Optimization},
  author={Salzmann, Tim and Arrizabalaga, Jon and Andersson, Joel and Pavone, Marco and Ryll, Markus},
  booktitle={Learning for Dynamics and Control Conference (L4DC)},
  year={2024}
}

Projects using L4CasADi

  • Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms
    Paper | Code
  • AC4MPC: Actor-Critic Reinforcement Learning for Nonlinear Model Predictive Control
    Paper
  • Reinforcement Learning based MPC with Neural Dynamical Models
    Paper
  • Neural Potential Field for Obstacle-Aware Local Motion Planning
    Paper | Video | Code
  • N-MPC for Deep Neural Network-Based Collision Avoidance exploiting Depth Images
    Paper | Code
  • An Integrated Framework for Autonomous Driving Planning and Tracking based on NNMPC Considering Road Surface Variations
    Paper

If your project is using L4CasADi and you would like to be featured here, please reach out.


Installation

Prerequisites

Whether you install from source or via pip, you will need to meet the following requirements:

  • Working build system: CMake compatible C++ compiler (GCC version 10 or higher).
  • PyTorch (>=2.0) installed in your Python environment.
    python -c "import torch; print(torch.__version__)"

Pip Install (CPU Only)

  • Ensure the CPU-only version of PyTorch is installed
    pip install torch>=2.0 --index-url https://download.pytorch.org/whl/cpu
  • Ensure all build dependencies are installed
    setuptools>=68.1
    scikit-build>=0.17
    cmake>=3.27
    ninja>=1.11
  • Run
    pip install l4casadi --no-build-isolation

From Source (CPU Only)

  • Clone the repository
    git clone https://github.com/Tim-Salzmann/l4casadi.git

  • Install all build dependencies via
    pip install -r requirements_build.txt

  • Build from source
    pip install . --no-build-isolation

The --no-build-isolation flag is required for L4CasADi to find and link against the installed PyTorch.

GPU (CUDA)

CUDA installation requires nvcc, which is part of the CUDA toolkit and can be installed on Linux via sudo apt-get -y install cuda-toolkit-XX-X (where XX-X is your installed CUDA version, e.g. 12-3). Once the CUDA toolkit is installed, nvcc is commonly found at /usr/local/cuda/bin/nvcc.

Make sure nvcc -V can be executed, then run pip install l4casadi --no-build-isolation, or CUDACXX=<PATH_TO_NVCC> pip install . --no-build-isolation to build from source.

If nvcc is not automatically part of your path you can specify the nvcc path for L4CasADi. E.g. CUDACXX=<PATH_TO_NVCC> pip install l4casadi --no-build-isolation.


Quick Start

Defining an L4CasADi model in Python given a pre-defined PyTorch model is as easy as

import l4casadi as l4c

l4c_model = l4c.L4CasADi(pyTorch_model, device='cpu')

where the architecture of the PyTorch model is unrestricted and large models can be accelerated with dedicated hardware.
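As a hedged sketch of how the resulting object is typically embedded in a CasADi expression (the concrete model and the input size of 2 are assumptions for illustration; see the Examples section for the full pattern):

import casadi as cs
import torch
import l4casadi as l4c

pytorch_model = torch.nn.Linear(2, 1)                  # illustrative model
l4c_model = l4c.L4CasADi(pytorch_model, device='cpu')

x_sym = cs.MX.sym('x', 1, 2)                           # only casadi.MX inputs are supported
y_sym = l4c_model(x_sym)                               # PyTorch model embedded in the CasADi graph
f = cs.Function('f', [x_sym], [y_sym])
print(f(cs.DM([[1., 2.]])))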


Online Learning and Updating

L4CasADi supports updating the PyTorch model online in the CasADi graph. To use this feature, pass mutable=True when initializing an L4CasADi object. To update the model, call the update function on the L4CasADi object. You can optionally pass an updated model as a parameter; if no model is passed, the model passed at initialization is assumed to have been updated in place and will be used for the update.
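A minimal sketch of this workflow, assuming an illustrative torch.nn.Linear model:

import torch
import l4casadi as l4c

model = torch.nn.Linear(2, 1)                    # illustrative model
l4c_model = l4c.L4CasADi(model, mutable=True)    # allow online updates

# ... train or otherwise modify `model` in place ...
l4c_model.update()                               # re-exports the model referenced at initialization

# Alternatively, pass an updated model explicitly:
new_model = torch.nn.Linear(2, 1)
l4c_model.update(new_model)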


Naive L4CasADi

While L4CasADi was designed with efficiency in mind by internally leveraging torch's C++ interface, this can still result in overhead, which can be disproportionate for small, simple models. Thus, L4CasADi additionally provides a NaiveL4CasADiModule which directly recreates the PyTorch computational graph using CasADi operations and copies the weights --- leading to a pure C computational graph without context switches to torch. However, this approach is limited to a small predefined subset of PyTorch operations --- only MultiLayerPerceptron models and CPU inference are supported.

The torch framework overhead dominates for networks smaller than three hidden layers with 64 neurons each (or equivalent). For models below this size, we recommend using the NaiveL4CasADiModule. For larger models, the overhead becomes negligible and L4CasADi should be used.

https://github.com/Tim-Salzmann/l4casadi/blob/f7b16fba90f4d3ee53217b560f26b47e6b23e44a/examples/naive/readme.py#L5-L9
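As a rough, hedged sketch of the pattern shown in the linked snippet (the MultiLayerPerceptron constructor arguments below are assumptions, not a verified signature):

import l4casadi as l4c

# Assumed arguments: input size, hidden size, output size, number of hidden layers, activation.
naive_mlp = l4c.naive.MultiLayerPerceptron(2, 128, 1, 2, 'Tanh')
l4c_model = l4c.L4CasADi(naive_mlp)  # pure CasADi computational graph, CPU inference only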


Real-time L4CasADi

Real-time L4CasADi (formerly the approximated approach in ML-CasADi) is the underlying framework powering Real-time Neural-MPC. It replaces complex models with local Taylor approximations. For certain optimization procedures (such as MPC with multiple shooting nodes) this can lead to improved optimization times. However, Real-time L4CasADi comes with many restrictions (Python only, no C(++) code generation, ...) and is therefore not a one-to-one replacement for L4CasADi. Rather, it is a complementary framework for certain special use cases.
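For intuition, a first-order local Taylor approximation around an expansion point x0 has the form f(x) ≈ f(x0) + J_f(x0) (x - x0), where J_f is the Jacobian of the model; the optimizer then works with this cheap local surrogate instead of the full network, which is only valid in a neighborhood of x0 (higher-order variants add the corresponding Taylor terms).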

More information here.

https://github.com/Tim-Salzmann/l4casadi/blob/f7b16fba90f4d3ee53217b560f26b47e6b23e44a/l4casadi/realtime/examples/readme.py#L32-L43


Examples

https://github.com/Tim-Salzmann/l4casadi/blob/f7b16fba90f4d3ee53217b560f26b47e6b23e44a/examples/readme.py#L28-L40

Please note that only casadi.MX symbolic variables are supported as input.

Multi-input multi-output functions can be realized by concatenating the symbolic inputs when passing to the model and splitting them inside the PyTorch function.
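A hedged sketch of this pattern (the two-part model, the split points, and all sizes are illustrative assumptions):

import casadi as cs
import torch
import l4casadi as l4c

class MultiInOut(torch.nn.Module):
    # Two logical inputs and two logical outputs, packed into one tensor in each direction.
    def __init__(self):
        super().__init__()
        self.f = torch.nn.Linear(2, 1)
        self.g = torch.nn.Linear(3, 1)

    def forward(self, z):
        a, b = z[..., :2], z[..., 2:]                       # split the concatenated input
        return torch.cat([self.f(a), self.g(b)], dim=-1)    # concatenate the outputs

l4c_model = l4c.L4CasADi(MultiInOut(), device='cpu')

a_sym = cs.MX.sym('a', 1, 2)
b_sym = cs.MX.sym('b', 1, 3)
y = l4c_model(cs.horzcat(a_sym, b_sym))                     # single concatenated call
y1, y2 = y[0, 0], y[0, 1]                                   # split the symbolic outputs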

To use GPU (CUDA) simply pass device="cuda" to the L4CasADi constructor.

Further examples:


Acados Integration

To use this framework with Acados:

An example of how a PyTorch model can be used as a dynamics model in the Acados framework for Model Predictive Control can be found in examples/acados.py.

To use L4CasADi with Acados, you will have to set model_external_shared_lib_dir and model_external_shared_lib_name in the AcadosOcp.solver_options accordingly:

ocp.solver_options.model_external_shared_lib_dir = l4c_model.shared_lib_dir
ocp.solver_options.model_external_shared_lib_name = l4c_model.name

https://github.com/Tim-Salzmann/l4casadi/blob/f7b16fba90f4d3ee53217b560f26b47e6b23e44a/examples/acados.py#L156-L160


FYIs

Warm Up

Note that PyTorch builds the graph on the first execution. Thus, the first call(s) to the CasADi function will be slow. You can warm up the execution graph by calling the generated CasADi function one or multiple times before using it.
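A minimal sketch of such a warm-up, assuming an illustrative model and input shape:

import casadi as cs
import torch
import l4casadi as l4c

l4c_model = l4c.L4CasADi(torch.nn.Linear(2, 1), device='cpu')  # illustrative model
x = cs.MX.sym('x', 1, 2)
f = cs.Function('f', [x], [l4c_model(x)])

# A few dummy evaluations trigger graph construction before timing-critical use.
for _ in range(3):
    f(cs.DM.zeros(1, 2))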

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

l4casadi-2.0.0.tar.gz (27.2 MB)

Uploaded Source

File details

Details for the file l4casadi-2.0.0.tar.gz.

File metadata

  • Download URL: l4casadi-2.0.0.tar.gz
  • Upload date:
  • Size: 27.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for l4casadi-2.0.0.tar.gz
  • SHA256: 42c94170edd1462479828ae8a6ecec96030bf69ad2750eb2599f25539cfea00f
  • MD5: f119c535bfdea3f8489868f8a06eeb88
  • BLAKE2b-256: 31530aa1e7434bc734bd6a6f8bc690aeaa706cf12c2d91304fa9b113072cc6a0

See more details on using hashes here.
