NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
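For readers new to these problem classes, the toy model below shows the shape of a MILP (a binary knapsack) solved by brute force in plain Python. It is only an illustration of the kind of problem cuOpt accelerates, not of the cuOpt API; at real scale, enumeration like this is infeasible and a solver is required.

```python
from itertools import product

# Tiny binary knapsack: maximize sum(value_i * x_i)
# subject to sum(weight_i * x_i) <= capacity, with x_i in {0, 1}.
# This is a MILP; cuOpt solves such problems at far larger scale on GPUs.
values = [10, 13, 7, 8]
weights = [3, 4, 2, 3]
capacity = 7

best_value, best_x = 0, None
for x in product([0, 1], repeat=len(values)):
    weight = sum(w * xi for w, xi in zip(weights, x))
    if weight <= capacity:
        value = sum(v * xi for v, xi in zip(values, x))
        if value > best_value:
            best_value, best_x = value, x

print(best_value, best_x)  # 23 (1, 1, 0, 0)
```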
The core engine is written in C++ and wrapped with a C API, a Python API, and a Server API.
For the latest stable version, ensure you are on the main branch.
cuOpt supports C, Python, and Server APIs.
This repo is also hosted as a COIN-OR project.
Note: WSL2 is tested to run cuOpt, but not for building.
More details on system requirements can be found here.
Pip wheels are easy to install and configure. Users whose existing workflows are built around pip can use it to install cuOpt.
cuOpt can be installed via pip from the NVIDIA Python Package Index. Be sure to select the appropriate cuOpt package depending on the major version of CUDA available in your environment:
For CUDA 12.x:
pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.10.* cuopt-sh-client==25.10.* nvidia-cuda-runtime-cu12==12.9.*
Development wheels are available as nightlies. To install the latest nightly packages, update --extra-index-url to https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/.
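Because the package name encodes the CUDA major version, setup scripts can make the choice explicit. The helper below is hypothetical and only encodes the CUDA 12.x mapping documented in the install command above:

```python
def cuopt_wheel_names(cuda_version: str, release: str = "25.10.*") -> list:
    """Map a CUDA version string to the matching cuOpt pip package specs.

    Illustrative helper, not part of cuOpt; only the CUDA 12.x case shown
    in the pip command above is encoded here.
    """
    major = cuda_version.split(".")[0]
    if major != "12":
        raise ValueError(f"No cuOpt package mapping encoded here for CUDA {cuda_version}")
    return [
        f"cuopt-server-cu{major}=={release}",
        f"cuopt-sh-client=={release}",
    ]

print(cuopt_wheel_names("12.9"))
```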
cuOpt can be installed with conda (via miniforge):
All other dependencies are installed automatically when cuopt-server and cuopt-sh-client are installed.
conda install -c rapidsai -c conda-forge -c nvidia cuopt-server=25.10.* cuopt-sh-client=25.10.*
We also provide nightly conda packages built from the HEAD of our latest development branch. Just replace -c rapidsai with -c rapidsai-nightly.
Users can pull the cuOpt container from the NVIDIA container registry.
docker pull nvidia/cuopt:latest-cuda12.9-py312
Note: The latest tag is the latest stable release of cuOpt. If you want to use a specific version, use the <version>-cuda12.9-py312 tag. For example, to use cuOpt 25.5.0, use the 25.5.0-cuda12.8-py312 tag. Please refer to the cuOpt Docker Hub page (https://hub.docker.com/r/nvidia/cuopt) for the list of available tags.
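The tag scheme above follows a simple <version>-cuda<X.Y>-py<NNN> template, which can be expressed as a small (purely illustrative) helper:

```python
def cuopt_tag(version: str, cuda: str = "12.9", python: str = "312") -> str:
    """Build a cuOpt container tag following the <version>-cuda<X.Y>-py<NNN> scheme."""
    return f"{version}-cuda{cuda}-py{python}"

print(cuopt_tag("latest"))                # latest-cuda12.9-py312
print(cuopt_tag("25.5.0", cuda="12.8"))   # 25.5.0-cuda12.8-py312
```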
More information about the cuOpt container can be found here.
The cuOpt container is well suited for quick testing and research, and it is also a fast way for users to plug cuOpt into their workflow as a service. In the latter case, however, users must build security layers around the service to safeguard it from untrusted users.
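One minimal building block for such a security layer is a shared-token check performed in a proxy before requests reach the service. The sketch below is a generic illustration, not part of cuOpt: the header name and environment variable are assumed names chosen for the example.

```python
import hmac
import os

# Hypothetical gatekeeper for requests forwarded to a cuOpt service.
# The header name ("X-Auth-Token") and the CUOPT_PROXY_TOKEN environment
# variable are illustrative choices, not part of cuOpt itself.
EXPECTED_TOKEN = os.environ.get("CUOPT_PROXY_TOKEN", "change-me")

def is_authorized(headers: dict) -> bool:
    """Return True only when the request carries the shared token."""
    supplied = headers.get("X-Auth-Token", "")
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)

print(is_authorized({"X-Auth-Token": EXPECTED_TOKEN}))
print(is_authorized({}))
```

A real deployment would pair a check like this with TLS termination and rate limiting in a reverse proxy or API gateway.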
Please see our guide for building cuOpt from source. This is helpful for users who want to add new features, fix bugs, or customize cuOpt for use cases that require changes to the source code.
Review the CONTRIBUTING.md file for information on how to contribute code and issues to the project.
Set the Runtime as GPU in order to run the notebooks.