cuOpt - GPU Optimizer (C++)

Project description

cuOpt - GPU-accelerated Optimization

NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), quadratic programming (QP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale LPs with millions of variables and constraints, and MIPs with hundreds of thousands of variables. cuOpt offers easy integration into existing modeling languages and seamless deployment across hybrid and multi-cloud environments.

The core engine is written in C++ and wrapped with a C API, Python API and Server API.

For the latest version, ensure you are on the main branch.

Latest Documentation

cuOpt Documentation

Supported APIs

cuOpt supports the following APIs:

  • C API support
    • Linear Programming (LP)
    • Mixed Integer Linear Programming (MILP)
    • Quadratic Programming (QP)
  • C++ API support
    • cuOpt is written in C++ and includes a native C++ API; however, it is currently undocumented and expected to change significantly in the future, so use it at your own risk.
  • Python support
    • Routing (TSP, VRP, and PDP)
    • Linear Programming (LP), Mixed Integer Linear Programming (MILP) and Quadratic Programming (QP)
      • Algebraic modeling Python API allows users to easily build constraints and objectives
  • Server support
    • Linear Programming (LP)
    • Mixed Integer Linear Programming (MILP)
    • Routing (TSP, VRP, and PDP)
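All of the LP-capable APIs above consume a problem in the same standard matrix form: minimize c·x subject to row bounds on A·x and variable bounds on x, with the constraint matrix A typically supplied in CSR layout. The sketch below builds a two-variable LP in that shape using plain Python only; it does not call cuOpt, and the dictionary field names are illustrative, not cuOpt's actual API.

```python
# Illustrative only: the standard LP form consumed by LP solver APIs,
# built with plain Python (no cuOpt imports).
# minimize  c.x   subject to  row_lb <= A.x <= row_ub,  var_lb <= x <= var_ub

def to_csr(dense):
    """Convert a dense row-major matrix to CSR (values, col_indices, row_offsets)."""
    values, col_indices, row_offsets = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_indices.append(j)
        row_offsets.append(len(values))
    return values, col_indices, row_offsets

# minimize -x0 - 2*x1  s.t.  x0 + x1 <= 4,  x0 - x1 >= -2,  0 <= x0, x1 <= 10
A = [[1.0, 1.0],
     [1.0, -1.0]]
problem = {
    "objective": [-1.0, -2.0],
    "csr": to_csr(A),                                          # sparse constraint matrix
    "row_bounds": [(float("-inf"), 4.0), (-2.0, float("inf"))],  # per-constraint bounds
    "var_bounds": [(0.0, 10.0), (0.0, 10.0)],                  # per-variable bounds
}
```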

This repo is also hosted as a COIN-OR project.

Latest Release Notes:

RELEASE-NOTES.md

Installation

CUDA/GPU requirements

  • CUDA 12.0+ or CUDA 13.0+
  • NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows)
  • Volta architecture or better (Compute Capability >=7.0)

Python requirements

  • Python >=3.10, <=3.13
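A small, self-contained check of this version constraint (illustrative; not a cuOpt tool):

```python
import sys

# cuOpt wheels support CPython 3.10 through 3.13 (inclusive).
SUPPORTED = ((3, 10), (3, 13))

def python_supported(version_info=sys.version_info):
    """Return True if the interpreter's (major, minor) falls in the supported range."""
    return SUPPORTED[0] <= tuple(version_info[:2]) <= SUPPORTED[1]

if __name__ == "__main__":
    print("supported" if python_supported() else "unsupported")
```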

OS requirements

  • Linux is supported natively; Windows is supported via WSL2
    • x86_64 (64-bit)
    • aarch64 (64-bit)

Note: WSL2 is tested to run cuOpt, but not for building.

More details on system requirements can be found here

Pip

Pip wheels are easy to install and configure. Users whose existing workflows are already built around pip can use it to install cuOpt.

cuOpt can be installed via pip from the NVIDIA Python Package Index. Be sure to select the appropriate cuOpt package depending on the major version of CUDA available in your environment:

For CUDA 12.x:

pip install \
  --extra-index-url=https://pypi.nvidia.com \
  nvidia-cuda-runtime-cu12==12.9.* \
  cuopt-server-cu12==26.02.* cuopt-sh-client==26.02.*

Development wheels are available as nightlies. To install the latest nightly packages, also pass --pre and --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/:

pip install --pre \
  --extra-index-url=https://pypi.nvidia.com \
  --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
  cuopt-server-cu12==26.02.* cuopt-sh-client==26.02.*

For CUDA 13.x:

pip install \
  --extra-index-url=https://pypi.nvidia.com \
  cuopt-server-cu13==26.02.* cuopt-sh-client==26.02.*

Development wheels are available as nightlies. To install the latest nightly packages, also pass --pre and --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/:

pip install --pre \
  --extra-index-url=https://pypi.nvidia.com \
  --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
  cuopt-server-cu13==26.02.* cuopt-sh-client==26.02.*
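The only cuOpt-specific fact in the commands above is the -cu12/-cu13 package-suffix convention; a tiny helper can pick the right specifiers from a CUDA version string (illustrative, not a cuOpt tool):

```python
def cuopt_packages(cuda_version, release="26.02.*"):
    """Map a CUDA version string (e.g. "12.9" or "13.0") to the pip package
    specifiers used in the install commands above. Note that only the server
    package carries a CUDA suffix; cuopt-sh-client is CUDA-agnostic."""
    major = int(cuda_version.split(".")[0])
    if major not in (12, 13):
        raise ValueError(f"cuOpt wheels require CUDA 12.x or 13.x, got {cuda_version}")
    return [f"cuopt-server-cu{major}=={release}", f"cuopt-sh-client=={release}"]
```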

Conda

cuOpt can be installed with conda (via miniforge). All other dependencies are installed automatically when cuopt-server and cuopt-sh-client are installed:

conda install -c rapidsai -c conda-forge -c nvidia cuopt-server=26.02.* cuopt-sh-client=26.02.*

We also provide nightly conda packages built from the HEAD of our latest development branch. Just replace -c rapidsai with -c rapidsai-nightly.

Container

Users can pull the cuOpt container from the NVIDIA container registry.

# For CUDA 12.x
docker pull nvidia/cuopt:latest-cuda12.9-py3.13

# For CUDA 13.x
docker pull nvidia/cuopt:latest-cuda13.0-py3.13

Note: The latest tag is the latest stable release of cuOpt. To use a specific version, use the <version>-cuda12.9-py3.13 or <version>-cuda13.0-py3.13 tag; for example, cuOpt 25.10.0 is available as 25.10.0-cuda12.9-py3.13 or 25.10.0-cuda13.0-py3.13. Please refer to the cuOpt Docker Hub page (https://hub.docker.com/r/nvidia/cuopt/tags) for the list of available tags.

More information about the cuOpt container can be found here.

The cuOpt container is a quick way to get started, whether for testing and research or for plugging cuOpt in as a service in an existing workflow. Note that users are responsible for building security layers around the service to safeguard it from untrusted users.

Build from Source and Test

Please see our guide for building cuOpt from source. This is useful for users who want to add new features, fix bugs, or customize cuOpt for use cases that require changes to the cuOpt source code.

Release Timeline

cuOpt follows the RAPIDS release schedule and is part of the "others" category in the release timeline. The release cycle consists of:

  • Development: Active feature development and bug fixes targeting main
  • Burn Down: Focus shifts to stabilization; new features should target the next release
  • Code Freeze: Only critical bug fixes allowed; PRs require admin approval
  • Release: Final testing, tagging, and official release

For current release timelines and dates, refer to the RAPIDS Maintainers Docs.

For AI Coding Agents

See AGENTS.md for agent-specific guidelines.

Contributing Guide

Review the CONTRIBUTING.md file for information on how to contribute code and issues to the project.

Download files


Source Distribution

libcuopt_cu13-26.2.0.tar.gz (3.7 kB)


File details

Details for the file libcuopt_cu13-26.2.0.tar.gz.

File metadata

  • Download URL: libcuopt_cu13-26.2.0.tar.gz
  • Upload date:
  • Size: 3.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.19

File hashes

Hashes for libcuopt_cu13-26.2.0.tar.gz:

  • SHA256: f3bb2bd75e323f9c781c22d96ce31d51440f3f23bda5d62f613df5ba39e6ac59
  • MD5: 9dc001acd2deb8cf3cf2ad7e1ce5d1a9
  • BLAKE2b-256: 84db660f239502dec7ef614bfc0b00b21f74c97f62a89d780aa787fb30de08b8
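These hashes let you verify a downloaded sdist before installing. A minimal checker using only the standard library (the expected digest is the SHA256 from the table above; the filename assumes the sdist sits in the current directory):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA256 in chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# SHA256 published for libcuopt_cu13-26.2.0.tar.gz
EXPECTED = "f3bb2bd75e323f9c781c22d96ce31d51440f3f23bda5d62f613df5ba39e6ac59"

# Usage, after downloading the sdist:
# ok = sha256_of_file("libcuopt_cu13-26.2.0.tar.gz") == EXPECTED
```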

See more details on using hashes here.
