
ml4co-kit provides convenient dataset generators for combinatorial optimization problems



📚 Introduction

Combinatorial Optimization (CO) is a mathematical optimization area that involves finding the best solution from a large set of discrete possibilities, often under constraints. Widely applied in routing, logistics, hardware design, and biology, CO addresses NP-hard problems critical to computer science and industrial engineering.

ML4CO-Kit aims to provide foundational support for machine learning practice on CO problems. We have organized ML4CO-Kit into five levels:

Organization
  • Task (Level 1): the smallest processing unit; each task represents a single problem instance. The task level covers the definition of CO problems, the evaluation of solutions (including constraint checking), problem visualization, etc.
  • Generator (Level 2): creates task instances of a specific structure or distribution according to the configured parameters.
  • Solver (Level 3): a variety of solvers. Each solver, depending on its scope of application, can solve specific types of task instances and can be combined with optimizers to further improve its solutions.
  • Optimizer (Level 4): further optimizes the initial solution produced by a solver.
  • Wrapper (Level 5): user-friendly wrappers that handle data reading and writing, task storage, and parallelized generation and solving.

Additionally, for higher-level ML4CO services (see ML4CO-Bench-101), we also provide learning base classes (see ml4co_kit/learning) built on the PyTorch Lightning framework, including BaseEnv, BaseModel, and Trainer. The following figure illustrates the relationship between ML4CO-Kit and ML4CO-Bench-101.

Relation

We are still enriching the library, and we welcome any contributions, ideas, and suggestions from the community.

โญ Official Documentation: https://ml4co-kit.readthedocs.io/en/latest/

โญ Source Code: https://github.com/Thinklab-SJTU/ML4CO-Kit

🚀 Installation

You can install the stable release on PyPI:

$ pip install ml4co-kit

or get the latest version by running:

$ pip install -U https://github.com/Thinklab-SJTU/ML4CO-Kit/archive/master.zip  # add --user for a user install (no root)

The following packages are required and will be installed automatically by pip:

Python>=3.8
numpy>=1.24.3
networkx>=2.8.8
tqdm>=4.66.3
cython>=3.0.8
pulp>=2.8.0
scipy>=1.10.1
aiohttp>=3.10.11
requests>=2.32.0
matplotlib>=3.7.0
async_timeout>=4.0.3
pyvrp>=0.6.3
gurobipy>=11.0.3
scikit-learn>=1.3.0
ortools>=9.12.4544
huggingface_hub>=0.32.0
setuptools>=75.0.0
PySCIPOpt>=5.6.0
pybind11>=3.0.1
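
If you want to verify these constraints yourself before running, a small stdlib-only check can be sketched as follows (the subset of packages listed in ``REQUIREMENTS`` is illustrative; extend it with the rest of the table as needed):

```python
from importlib.metadata import version, PackageNotFoundError


def parse(v: str) -> tuple:
    """Parse a dotted version string into a comparable tuple of ints."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


# Illustrative subset of the requirements listed above
REQUIREMENTS = {
    "numpy": "1.24.3",
    "networkx": "2.8.8",
    "tqdm": "4.66.3",
    "scipy": "1.10.1",
}


def check(requirements: dict) -> list:
    """Return (package, problem) pairs for unmet requirements."""
    problems = []
    for pkg, minimum in requirements.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            problems.append((pkg, "not installed"))
            continue
        if parse(installed) < parse(minimum):
            problems.append((pkg, f"{installed} < {minimum}"))
    return problems


if __name__ == "__main__":
    missing = check(REQUIREMENTS)
    if missing:
        print("Unmet requirements:", missing)
```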

To ensure you have access to all functions, you also need to install the pytorch_lightning-related environment. We provide an installation helper, which you can run as follows.

import sys
from packaging import version
from ml4co_kit import EnvInstallHelper


if __name__ == "__main__":
    # Get the Python version
    python_version = sys.version.split()[0]
    
    # Choose a PyTorch version compatible with the Python version
    if version.parse(python_version) < version.parse("3.12"):
        pytorch_version = "2.1.0"
    elif version.parse(python_version) < version.parse("3.13"):
        pytorch_version = "2.4.0"
    else:
        pytorch_version = "2.7.0"
    
    # Install pytorch environment
    env_install_helper = EnvInstallHelper(pytorch_version=pytorch_version)
    env_install_helper.install()

โš ๏ธ 2025-10-14: While testing the NVIDIA GeForce RTX 50-series GPUs, we have encountered the following error. To fix this issue, we recommend that you upgrade your driver to version 12.8 or later and download the corresponding PyTorch build from the official PyTorch website.

XXX with CUDA capability sm_120 is not compatible with the current PyTorch installation. 
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
import os

# download torch==2.8.0+cu128 from pytorch.org
os.system("pip install torch==2.8.0+cu128 --index-url https://download.pytorch.org/whl/cu128")

# download torch-X (scatter, sparse, spline-conv, cluster)
html_link = "https://pytorch-geometric.com/whl/torch-2.8.0+cu128.html"
os.system(f"pip install --no-index torch-scatter -f {html_link}")
os.system(f"pip install --no-index torch-sparse -f {html_link}")
os.system(f"pip install --no-index torch-spline-conv -f {html_link}")
os.system(f"pip install --no-index torch-cluster -f {html_link}")

# wandb (quoted so the shell does not treat ">=" as a redirection)
os.system('pip install "wandb>=0.20.0"')

# pytorch-lightning
os.system("pip install pytorch-lightning==2.5.3")

# torch_geometric
os.system("pip install torch_geometric==2.7.0")

After the environment is installed, run the following in a Python shell to confirm that the PyTorch build supports sm_120.

>>> import torch
>>> print(torch.cuda.get_arch_list())
['sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90', 'sm_100', 'sm_120']

โš ๏ธ 2025-10-21: We find that on macOS, the gurobipy package does not support Python 3.8 or earlier. Therefore, please upgrade your Python to at least 3.9.

โš ๏ธ 2026-03-13: For Python versions 3.9 to 3.11 on macOS, it is necessary to downgrade setuptools (recommended versions: 75.0.0 ~ 80.9.0).

๐Ÿ“ ML4CO-Kit Development status

We present the development progress of ML4CO-Kit across the five levels described above.

Graph: MCl & MCut & MIS & MVC; Portfolio: MaxRetPO & MinVarPO & MOPO; SAT: SATA & SATP

✔: Supported; 📆: Planned for future versions (contributions welcome!).

Task (Level 1)
Task Definition Check Constraint Evaluation Render Special R/O
Routing Tasks
Asymmetric TSP (ATSP) ✔ ✔ ✔ 📆 tsplib
Capacitated Vehicle Routing Problem (CVRP) ✔ ✔ ✔ ✔ vrplib
Orienteering Problem (OP) ✔ ✔ ✔ 📆
Prize Collecting TSP (PCTSP) ✔ ✔ ✔ 📆
Stochastic PCTSP (SPCTSP) ✔ ✔ ✔ 📆
Traveling Salesman Problem (TSP) ✔ ✔ ✔ ✔ tsplib
Graph Tasks
Maximum Clique (MCl) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Maximum Cut (MCut) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Maximum Independent Set (MIS) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Minimum Vertex Cover (MVC) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
QAP Tasks
Graph Matching (GM) ✔ ✔ ✔ 📆
Graph Edit Distance (GED) ✔ ✔ ✔ 📆
Koopmans-Beckmann QAP (KQAP) ✔ ✔ ✔ 📆
SAT Tasks
Satisfiability Prediction (SATP) ✔ ✔ ✔ 📆 cnf
Satisfying Assignment Prediction (SATA) ✔ ✔ ✔ 📆 cnf
Portfolio Tasks
Maximum Return Portfolio Optimization (MaxRetPO) ✔ ✔ ✔ 📆
Minimum Variance Portfolio Optimization (MinVarPO) ✔ ✔ ✔ 📆
Multi-Objective Portfolio Optimization (MOPO) ✔ ✔ ✔ 📆

Generator (Level 2)
Task Distribution Brief Intro. State
Routing Tasks
ATSP Uniform Random distance matrix with triangle inequality ✔
SAT SAT problem transformed to ATSP ✔
HCP Hamiltonian Cycle Problem transformed to ATSP ✔
CVRP Uniform Random coordinates with uniform distribution ✔
Gaussian Random coordinates with Gaussian distribution ✔
OP Uniform Random prizes with uniform distribution ✔
Constant All prizes are constant ✔
Distance Prizes based on distance from depot ✔
PCTSP Uniform Random prizes with uniform distribution ✔
SPCTSP Uniform Random prizes with uniform distribution ✔
TSP Uniform Random coordinates with uniform distribution ✔
Gaussian Random coordinates with Gaussian distribution ✔
Cluster Coordinates clustered around random centers ✔
Graph Tasks
(Graph) ER (structure) Erdos-Renyi random graph ✔
BA (structure) Barabasi-Albert scale-free graph ✔
HK (structure) Holme-Kim small-world graph ✔
WS (structure) Watts-Strogatz small-world graph ✔
RB (structure) RB-Model graph ✔
Uniform (weighted) Weights with Uniform distribution ✔
Gaussian (weighted) Weights with Gaussian distribution ✔
Poisson (weighted) Weights with Poisson distribution ✔
Exponential (weighted) Weights with Exponential distribution ✔
Lognormal (weighted) Weights with Lognormal distribution ✔
Powerlaw (weighted) Weights with Powerlaw distribution ✔
Binomial (weighted) Weights with Binomial distribution ✔
QAP Tasks
GM ISO Isomorphic Graph matching ✔
GM SUB Subgraph Graph matching ✔
SAT Tasks
(SAT) PHASE Near satisfiability phase transition ✔
SR SAT/UNSAT paired generation ✔
CA Community Attachment generator ✔
PS Popularity Similarity generator ✔
K_CLIQUE Reduction-based SAT instance generation ✔
Portfolio Tasks
(Portfolio) GBM Geometric Brownian Motion model ✔
Factor Factor model with k factors and idiosyncratic noise ✔
VAR(1) Vector Autoregressive model of order 1 ✔
MVT Multivariate T distribution model ✔
GARCH GARCH model for volatility clustering ✔
Jump Merton Jump-Diffusion model ✔
Regime Regime-Switching model with multiple states ✔

Solver (Level 3)
Solver Support Task Language Source Ref. / Implementation State
ConcordeSolver TSP C/C++ Concorde PyConcorde ✔
GAEAXSolver TSP C/C++ GA-EAX GA-EAX ✔
GNN4COSolver(Beam) MCl Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
GNN4COSolver(Greedy) ATSP C/C++ ML4CO-Kit ML4CO-Kit ✔
CVRP Python ML4CO-Kit ML4CO-Kit ✔
TSP Cython DIFUSCO DIFUSCO ✔
MCl Python ML4CO-Kit ML4CO-Kit ✔
MCut Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
GNN4COSolver(MCTS) TSP Python Att-GCRN ML4CO-Kit ✔
GpDegreeSolver MCl Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
GurobiSolver ATSP C/C++ Gurobi ML4CO-Kit ✔
CVRP C/C++ Gurobi ML4CO-Kit ✔
OP C/C++ Gurobi ML4CO-Kit ✔
TSP C/C++ Gurobi ML4CO-Kit ✔
MCl C/C++ Gurobi DiffUCO ✔
MCut C/C++ Gurobi DiffUCO ✔
MIS C/C++ Gurobi DiffUCO ✔
MVC C/C++ Gurobi DiffUCO ✔
MaxRetPO C/C++ Gurobi ML4CO-Kit ✔
MinVarPO C/C++ Gurobi ML4CO-Kit ✔
MOPO C/C++ Gurobi ML4CO-Kit ✔
HGSSolver CVRP C/C++ HGS-CVRP HGS-CVRP ✔
ILSSolver PCTSP Python PCTSP PCTSP ✔
SPCTSP Python Attention Attention ✔
InsertionSolver TSP Python GLOP GLOP ✔
ISCOSolver MCl Python ISCO DISCS ✔
MCut Python ISCO DISCS ✔
MIS Python ISCO DISCS ✔
MVC Python ISCO DISCS ✔
FEMSolver MCut Python FEM ML4CO-Kit ✔
KaMISSolver MIS Python KaMIS MIS-Bench ✔
LcDegreeSolver MCl Python ML4CO-Kit ML4CO-Kit ✔
MCut Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
LKHSolver TSP C/C++ LKH ML4CO-Kit ✔
ATSP C/C++ LKH ML4CO-Kit ✔
CVRP C/C++ LKH ML4CO-Kit ✔
NeuroLKHSolver TSP Python NeuroLKH ML4CO-Kit ✔
ORSolver ATSP C/C++ OR-Tools ML4CO-Kit ✔
OP C/C++ OR-Tools ML4CO-Kit ✔
PCTSP C/C++ OR-Tools ML4CO-Kit ✔
TSP C/C++ OR-Tools ML4CO-Kit ✔
MCl C/C++ OR-Tools ML4CO-Kit ✔
MIS C/C++ OR-Tools ML4CO-Kit ✔
MVC C/C++ OR-Tools ML4CO-Kit ✔
PyGMSolver GM Python pygmtools ML4CO-Kit ✔
GED Python pygmtools ML4CO-Kit ✔
KQAP Python pygmtools ML4CO-Kit ✔
PySATSolver SATP Python PySAT ML4CO-Kit ✔
SATA Python PySAT ML4CO-Kit ✔
RLSASolver MCl Python RLSA ML4CO-Kit ✔
MCut Python RLSA ML4CO-Kit ✔
MIS Python RLSA ML4CO-Kit ✔
SCIPSolver MaxRetPO C/C++ PySCIPOpt ML4CO-Kit ✔
MinVarPO C/C++ PySCIPOpt ML4CO-Kit ✔
MOPO C/C++ PySCIPOpt ML4CO-Kit ✔

Optimizer (Level 4)
Optimizer Support Task IMPL Source Ref. / Implementation State
CVRPLSOptimizer CVRP Ctypes HGS-CVRP ML4CO-Kit ✔
ISCOOptimizer MCl Numpy ISCO DISCS ✔
MCut Numpy ISCO DISCS ✔
MIS Numpy ISCO DISCS ✔
MVC Numpy ISCO DISCS ✔
MCTSOptimizer TSP Ctypes Att-GCRN ML4CO-Kit ✔
TwoOptOptimizer ATSP Ctypes ML4CO-Kit ML4CO-Kit ✔
TSP Torch DIFUSCO ML4CO-Kit ✔
TSP Pybind11 GenSCO GenSCO ✔
FastTwoOptOptimizer TSP Pybind11 ML4CO-Kit ML4CO-Kit ✔

Wrapper (Level 5)
Wrapper TXT Other R&W
Routing Tasks
ATSPWrapper "[dists] output [sol]" tsplib
CVRPWrapper "depots [depots] points [points] demands [demands] capacity [capacity] output [sol]" vrplib
OPWrapper "depots [depots] points [points] prizes [prizes] max_length [max_length] output [sol]"
PCTSPWrapper "depots [depots] points [points] penalties [penalties] prizes [prizes] required_prize [required_prize] output [sol]"
SPCTSPWrapper "depots [depots] points [points] penalties [penalties] expected_prizes [expected_prizes] actual_prizes [actual_prizes] required_prize [required_prize] output [sol]"
TSPWrapper "[points] output [sol]" tsplib
Graph Tasks
(Graph)Wrapper "[edge_index] label [sol]" gpickle
(Graph)Wrapper [weighted] "[edge_index] weights [weights] label [sol]" gpickle
QAP Tasks
GMWrapper -- pickle
GEDWrapper -- pickle
KQAPWrapper -- pickle
SAT Tasks
SATPWrapper "[vars_num] vars_num [clauses] output [sol]" cnf
SATAWrapper "[vars_num] vars_num [clauses] output [sol]" cnf
Portfolio Tasks
MaxRetPOWrapper "[returns] cov [cov] max_var [max_var] output [sol]"
MinVarPOWrapper "[returns] cov [cov] required_returns [required_returns] output [sol]"
MOPOWrapper "[returns] cov [cov] var_factor [var_factor] output [sol]"
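
The bracketed TXT formats above describe how each record is flattened onto a single line. As an illustration, here is a minimal stand-alone parser for the assumed TSP layout ``"[points] output [sol]"``; this is a sketch of the format, not the library's own reader, and details such as the index base of the tour may differ:

```python
def parse_tsp_line(line: str):
    """Parse one TSP record of the assumed form '[points] output [sol]'.

    Returns (points, tour): points as a list of (x, y) floats,
    tour as a list of node indices.
    """
    left, right = line.strip().split(" output ")
    coords = [float(tok) for tok in left.split()]     # flattened x y x y ...
    points = list(zip(coords[0::2], coords[1::2]))    # re-pair into (x, y)
    tour = [int(tok) for tok in right.split()]        # node indices of the tour
    return points, tour


# Hypothetical 3-node record for illustration
points, tour = parse_tsp_line("0.1 0.9 0.4 0.2 0.8 0.7 output 0 2 1 0")
# points -> [(0.1, 0.9), (0.4, 0.2), (0.8, 0.7)], tour -> [0, 2, 1, 0]
```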

🔎 How to use ML4CO-Kit

Case-01: How to use ML4CO-Kit to generate a dataset
# We take the TSP as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import TSPWrapper    # The wrapper for TSP, used to manage data and parallel generation.
>>> from ml4co_kit import TSPGenerator  # The generator for TSP, used to generate a single instance.
>>> from ml4co_kit import TSP_TYPE      # The distribution types supported by the generator.
>>> from ml4co_kit import LKHSolver     # We choose LKHSolver to solve TSP instances

# Check which distributions are supported by the TSP types.
>>> for type in TSP_TYPE:
...     print(type)
TSP_TYPE.UNIFORM
TSP_TYPE.GAUSSIAN
TSP_TYPE.CLUSTER

# Set the generator parameters according to the requirements.
>>> tsp_generator = TSPGenerator(
...     distribution_type=TSP_TYPE.GAUSSIAN,   # Generate a TSP instance with a Gaussian distribution
...     precision=np.float32,                  # Floating-point precision: 32-bit
...     nodes_num=50,                          # Number of nodes in TSP instance
...     gaussian_mean_x=0,                     # Mean of Gaussian for x coordinate
...     gaussian_mean_y=0,                     # Mean of Gaussian for y coordinate
...     gaussian_std=1,                        # Standard deviation of Gaussian
... )

# Set the LKH parameters.
>>> tsp_solver = LKHSolver(
...     lkh_scale=1e6,        # Scaling factor to convert floating-point numbers to integers
...     lkh_max_trials=500,   # Maximum number of trials for the LKH algorithm
...     lkh_path="LKH",       # Path to the LKH executable
...     lkh_runs=1,           # Number of runs for the LKH algorithm
...     lkh_seed=1234,        # Random seed for the LKH algorithm
...     lkh_special=False,    # When set to True, disables 2-opt and 3-opt heuristics
... )

# Create the TSP wrapper
>>> tsp_wrapper = TSPWrapper(precision=np.float32)

# Use ``generate_w_to_txt`` to generate a dataset of TSP.
>>> tsp_wrapper.generate_w_to_txt(
...     file_path="tsp_gaussian_16ins.txt",  # Path to the output file where the generated TSP instances will be saved
...     generator=tsp_generator,             # The TSP instance generator to use
...     solver=tsp_solver,                   # The TSP solver to use
...     num_samples=16,                      # Number of TSP instances to generate
...     num_threads=4,                       # Number of CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                        # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     write_per_iters=1,                   # Number of sub-generation steps after which data will be written to the file
...     write_mode="a",                      # Write mode for the output file ("a" for append)
...     show_time=True,                      # Whether to display the time taken for the generation process
... )
Generating TSP: 100%|██████████| 4/4 [00:00<00:00, 12.79it/s]

Case-02: How to use ML4CO-Kit to load problems and solve them
# We take the MIS as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import MISWrapper    # The wrapper for MIS, used to manage data and parallel solving.
>>> from ml4co_kit import KaMISSolver   # We choose KaMISSolver to solve MIS instances

# Set the KaMIS parameters.
>>> mis_solver = KaMISSolver(
...     kamis_time_limit=10.0,          # The maximum solution time for a single problem
...     kamis_weighted_scale=1e5,       # Weight scaling factor, used when nodes have weights.
... )

# Create the MIS wrapper
>>> mis_wrapper = MISWrapper(precision=np.float32)

# Load the problems to be solved.
# You can use the corresponding loading function based on the file type, 
# such as ``from_txt`` for txt file and ``from_pickle`` for pickle file.
>>> mis_wrapper.from_txt(
...     file_path="test_dataset/mis/wrapper/mis_rb-small_uniform-weighted_4ins.txt",
...     ref=True,          # TXT file contains labels. Set ``ref=True`` to set them as reference.
...     overwrite=True,    # Whether to overwrite the data. If not, only update according to the file data.
...     show_time=True     # Whether to display the time taken for the loading process
... )
Loading data from test_dataset/mis/wrapper/mis_rb-small_uniform-weighted_4ins.txt: 4it [00:00, 75.41it/s]

# Use ``solve`` to call the KaMISSolver to perform the solution.
>>> mis_wrapper.solve(
...     solver=mis_solver,                   # The solver to use
...     num_threads=2,                       # Number of CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                        # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,                      # Whether to display the time taken for the solving process
... )
Solving MIS Using kamis: 100%|██████████| 2/2 [00:21<00:00, 10.97s/it]
Using Time: 21.947036743164062

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mis_wrapper.evaluate_w_gap()
>>> print(eval_result)
(14.827162742614746, 15.18349838256836, 2.5054726600646973, 2.5342845916748047)
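
Note that the reported gap is not simply the gap between the two printed averages, which suggests gaps are computed per instance and then averaged. A minimal sketch under that assumption, with purely illustrative numbers (not the dataset above) and a maximization objective:

```python
def gap_stats(sol_values, ref_values):
    """Per-instance optimality gaps (%), averaged across instances.

    Assumes a maximization objective, where sol_i <= ref_i and
    gap_i = (ref_i - sol_i) / ref_i * 100.
    """
    gaps = [(r - s) / r * 100.0 for s, r in zip(sol_values, ref_values)]
    mean = sum(gaps) / len(gaps)
    std = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5
    return mean, std


# Illustrative values only
mean_gap, std_gap = gap_stats([14.0, 16.0], [15.0, 16.0])
# per-instance gaps are [6.67%, 0%] -> mean 3.33%, std 3.33%
```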

Case-03: How to use ML4CO-Kit to visualize the COPs
# We take the CVRP as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import CVRPTask      # CVRP Task. 
>>> from ml4co_kit import CVRPWrapper   # The wrapper for CVRP, used to manage data.

# Case-1: multiple task data are saved in one file, such as ``txt`` or ``pickle``.
>>> cvrp_wrapper = CVRPWrapper()
>>> cvrp_wrapper.from_pickle("test_dataset/cvrp/wrapper/cvrp50_uniform_16ins.pkl")
>>> cvrp_task = cvrp_wrapper.task_list[0]
>>> print(cvrp_task)
CVRPTask(2fb389cdafdb4e79a94572f01edf0b95)

# Case-2: single task data is saved in pickle.
>>> cvrp_task = CVRPTask()
>>> cvrp_task.from_pickle("test_dataset/cvrp/task/cvrp50_uniform_task.pkl")
>>> print(cvrp_task)
CVRPTask(2fb389cdafdb4e79a94572f01edf0b95)

# The loaded solution is usually a reference solution.
# When rendering, it is the ``sol`` attribute that is drawn,
# so assign ``ref_sol`` to ``sol`` first.
>>> cvrp_task.sol = cvrp_task.ref_sol

# Using ``render`` to get the visualization
>>> cvrp_task.render(
...     save_path="./docs/assets/cvrp_solution.png",  # Path to save the rendered image
...     with_sol=True,                                # Whether to draw the solution tour
...     figsize=(10, 10),                             # Size of the image (width and height)
...     node_color="darkblue",                        # Color of the nodes
...     edge_color="darkblue",                        # Color of the edges
...     node_size=50                                  # Size of the nodes
... )

Case-04: A simple ML4CO example
# We take the MCut as an example

# Import the required classes.
>>> import numpy as np                   # Numpy
>>> from ml4co_kit import TASK_TYPE      # The task type.
>>> from ml4co_kit import MCutWrapper    # The wrapper for MCut, used to manage data.
>>> from ml4co_kit import GNN4COSolver   # GNN4COSolver.
>>> from ml4co_kit import RLSAOptimizer  # Using RLSA to perform local search.
>>> from ml4co_kit.extension.gnn4co import GNN4COModel, GNN4COEnv, GNNEncoder, GNN4COGreedyDecoder

# Set the GNN4COModel parameters. ``weight_path``: Pretrain weight path. 
# If it is not available locally, it will be automatically downloaded from Hugging Face.
>>> gnn4mcut_model = GNN4COModel(
...     env=GNN4COEnv(
...         task_type=TASK_TYPE.MCUT,    # Task type: MCut. 
...         wrapper=MCutWrapper(),       # The wrapper for MCut, used to manage data.
...         mode="solve",                # Mode: solving mode.
...         sparse_factor=1,             # Sparse factor: Controls the sparsity of the graph.
...         device="cuda"                # Device: 'cuda' or 'cpu'
...     ),
...     encoder=GNNEncoder(
...         task_type=TASK_TYPE.MCUT,    # Task type: MCut.
...         sparse=True,                 # Graph data should set ``sparse`` to True.
...         block_layers=[2,4,4,2]       # Block layers: the number of layers in each block of the encoder.
...     ),
...     decoder=GNN4COGreedyDecoder(sparse_factor=1),
...     weight_path="weights/gnn4co_mcut_ba-large_sparse.pt"   
... )
gnn4co/gnn4co_mcut_ba-large_sparse.pt: 100%|███████████████| 19.6M/19.6M [00:03<00:00, 6.18MB/s]

# Set the RLSAOptimizer parameters.
>>> mcut_optimizer = RLSAOptimizer(
...     rlsa_kth_dim="both",          # Which dimension to consider for the k-th value calculation.
...     rlsa_tau=0.01,                # The temperature parameter in the Simulated Annealing process.
...     rlsa_d=2,                     # Control the step size of each update.
...     rlsa_k=1000,                  # The number of samples used in the optimization process.
...     rlsa_t=1000,                  # The number of iterations in the optimization process.
...     rlsa_device="cuda",           # Device: 'cuda' or 'cpu'.
...     rlsa_seed=1234                # The random seed for reproducibility.
... )

# Set the GNN4COSolver parameters.
>>> mcut_solver_wo_opt = GNN4COSolver(
...     model=gnn4mcut_model,         # GNN4CO model for MCut
...     device="cuda",                # Device: 'cuda' or 'cpu'.
...     optimizer=None                # The optimizer to perform local search.
... )
>>> mcut_solver_w_opt = GNN4COSolver(
...     model=gnn4mcut_model,         # GNN4CO model for MCut
...     device="cuda",                # Device: 'cuda' or 'cpu'.
...     optimizer=mcut_optimizer      # The optimizer to perform local search.
... )

# Create the MCut wrapper
>>> mcut_wrapper = MCutWrapper(precision=np.float32)

# Load the problems to be solved.
# You can use the corresponding loading function based on the file type, 
# such as ``from_txt`` for txt file and ``from_pickle`` for pickle file.
>>> mcut_wrapper.from_txt(
...     file_path="test_dataset/mcut/wrapper/mcut_ba-large_no-weighted_4ins.txt",
...     ref=True,          # TXT file contains labels. Set ``ref=True`` to set them as reference.
...     overwrite=True,    # Whether to overwrite the data. If not, only update according to the file data.
...     show_time=True     # Whether to display the time taken for the loading process
... )
Loading data from test_dataset/mcut/wrapper/mcut_ba-large_no-weighted_4ins.txt: 4it [00:00, 16.35it/s]

# Using ``solve`` to get the solution (without optimizer)
>>> mcut_wrapper.solve(
...     solver=mcut_solver_wo_opt,    # The solver to use
...     num_threads=1,                # Number of CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                 # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,               # Whether to display the time taken for the solving process
... )
Solving MCut Using greedy: 100%|██████████| 4/4 [00:00<00:00, 12.34it/s]
Using Time: 0.3261079788208008

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mcut_wrapper.evaluate_w_gap()
>>> print(eval_result)
(2647.25, 2726.5, 2.838811523236064, 0.7528157058230817)

# Using ``solve`` to get the solution (with optimizer)
>>> mcut_wrapper.solve(
...     solver=mcut_solver_w_opt,     # The solver to use
...     num_threads=1,                # Number of CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                 # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,               # Whether to display the time taken for the solving process
... )
Solving MCut Using greedy: 100%|██████████| 4/4 [00:02<00:00,  1.46it/s]
Using Time: 2.738525867462158

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mcut_wrapper.evaluate_w_gap()
>>> print(eval_result)
(2693.0, 2726.5, 1.2373146256952277, 0.29320238806274546)

📈 Our Systematic Benchmark Works

We are systematically building a foundational framework for ML4CO with a collection of resources that complement each other in a cohesive manner.

  • Awesome-ML4CO, a curated collection of literature in the ML4CO field, organized to support researchers in accessing both foundational and recent developments.

  • ML4CO-Kit, a general-purpose toolkit that provides implementations of common algorithms used in ML4CO, along with basic training frameworks, traditional solvers and data generation tools. It aims to simplify the implementation of key techniques and offer a solid base for developing machine learning models for COPs.

  • ML4TSPBench: a benchmark that focuses on the TSP for representativeness. It advances a unified, modular pipeline incorporating tens of existing learning and search techniques for transparent ablation, aiming to reassess the role of learning and to discern which parts of existing techniques are genuinely beneficial and which are not. It offers a deep dive into various methodology designs, enabling comparisons and the development of specialized algorithms.

  • ML4CO-Bench-101: a benchmark that categorizes neural combinatorial optimization (NCO) solvers by solving paradigms, model designs, and learning strategies. It evaluates applicability and generalization of different NCO approaches across a broad range of combinatorial optimization problems to uncover universal insights that can be transferred across various domains of ML4CO.

  • PredictiveCO-Benchmark: a benchmark for decision-focused learning (DFL) approaches on predictive combinatorial optimization problems.

✨ Citation

If you find our code helpful in your research, please cite:

@inproceedings{ma2025mlcobench,
    title={ML4CO-Bench-101: Benchmark Machine Learning for Classic Combinatorial Problems on Graphs},
    author={Jiale Ma and Wenzheng Pan and Yang Li and Junchi Yan},
    booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2025},
    url={https://openreview.net/forum?id=ye4ntB1Kzi}
}

@inproceedings{li2025unify,
    title={Unify ML4TSP: Drawing Methodological Principles for TSP and Beyond from Streamlined Design Space of Learning and Search},
    author={Li, Yang and Ma, Jiale and Pan, Wenzheng and Wang, Runzhong and Geng, Haoyu and Yang, Nianzu and Yan, Junchi},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025}
}
