
ml4co-kit provides convenient dataset generators for combinatorial optimization problems

Project description


📚 Introduction

Combinatorial Optimization (CO) is a mathematical optimization area that involves finding the best solution from a large set of discrete possibilities, often under constraints. Widely applied in routing, logistics, hardware design, and biology, CO addresses NP-hard problems critical to computer science and industrial engineering.

ML4CO-Kit aims to provide foundational support for machine learning practice on CO problems. It is organized into five levels:

Organization
  • Task (Level 1): the smallest processing unit; each task represents a single problem instance. This level covers the definition of CO problems, the evaluation of solutions (including constraint checking), problem visualization, and so on.
  • Generator (Level 2): creates task instances with a specific structure or distribution according to the chosen parameters.
  • Solver (Level 3): a variety of solvers. Each solver handles specific types of task instances within its scope of application and can be combined with optimizers to further improve the solutions.
  • Optimizer (Level 4): further optimizes the initial solution produced by the solver.
  • Wrapper (Level 5): user-friendly wrappers that handle data reading and writing, task storage, and parallelized generation and solving.
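
The five levels compose naturally: a Generator produces Tasks, a Solver builds an initial solution, an Optimizer refines it, and a Wrapper batches the whole pipeline. The toy sketch below illustrates only this division of labor in plain Python; the class names and methods here are simplified stand-ins, not the real ml4co_kit API (the actual classes, e.g. TSPGenerator, LKHSolver, TSPWrapper, appear in the usage examples later on this page).

```python
import random

random.seed(0)  # reproducibility of the toy run

class Task:                                   # Level 1: one problem instance
    def __init__(self, dists):
        self.dists = dists                    # distance matrix
        self.sol = None                       # tour as a permutation of nodes
    def evaluate(self):                       # objective: total tour length
        n = len(self.sol)
        return sum(self.dists[self.sol[i]][self.sol[(i + 1) % n]] for i in range(n))

class Generator:                              # Level 2: sample instances from a distribution
    def __init__(self, nodes_num):
        self.nodes_num = nodes_num
    def generate(self):
        n = self.nodes_num
        dists = [[0.0 if i == j else random.random() for j in range(n)] for i in range(n)]
        return Task(dists)

class Solver:                                 # Level 3: produce an initial solution
    def solve(self, task):
        task.sol = list(range(len(task.dists)))   # trivial identity tour
        return task

class Optimizer:                              # Level 4: refine via local search (node swaps)
    def optimize(self, task):
        improved = True
        while improved:
            improved = False
            best = task.evaluate()
            for i in range(len(task.sol)):
                for j in range(i + 1, len(task.sol)):
                    task.sol[i], task.sol[j] = task.sol[j], task.sol[i]
                    if task.evaluate() < best:
                        best, improved = task.evaluate(), True
                    else:                     # revert a non-improving swap
                        task.sol[i], task.sol[j] = task.sol[j], task.sol[i]
        return task

class Wrapper:                                # Level 5: batch generation and solving
    def generate_and_solve(self, generator, solver, optimizer, num_samples):
        tasks = [solver.solve(generator.generate()) for _ in range(num_samples)]
        return [optimizer.optimize(t) for t in tasks]

tasks = Wrapper().generate_and_solve(Generator(6), Solver(), Optimizer(), num_samples=4)
print([round(t.evaluate(), 3) for t in tasks])  # optimized tour lengths
```

In the real library these roles are combined the same way, e.g. a wrapper's ``generate_w_to_txt`` takes a generator and a solver, as shown in Case-01 below.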

Additionally, for higher-level ML4CO services (see ML4CO-Bench-101), we also provide learning base classes (see ml4co_kit/learning) based on the PyTorch Lightning framework, including BaseEnv, BaseModel, and Trainer. The following figure illustrates the relationship between ML4CO-Kit and ML4CO-Bench-101.

Relation

We are still enriching the library and we welcome any contributions/ideas/suggestions from the community.

โญ Official Documentation: https://ml4co-kit.readthedocs.io/en/latest/

โญ Source Code: https://github.com/Thinklab-SJTU/ML4CO-Kit

🚀 Installation

You can install the stable release on PyPI:

$ pip install ml4co-kit

or get the latest version by running:

$ pip install -U https://github.com/Thinklab-SJTU/ML4CO-Kit/archive/master.zip # with --user for user install (no root)

The following packages are required and will be installed automatically by pip:

Python>=3.8
numpy>=1.24.3
networkx>=2.8.8
tqdm>=4.66.3
cython>=3.0.8
pulp>=2.8.0
scipy>=1.10.1
aiohttp>=3.10.11
requests>=2.32.0
matplotlib>=3.7.0
async_timeout>=4.0.3
pyvrp>=0.6.3
gurobipy>=11.0.3
scikit-learn>=1.3.0
ortools>=9.12.4544
huggingface_hub>=0.32.0
setuptools>=75.0.0
PySCIPOpt>=5.6.0

To access all functionality, you also need the environment for pytorch_lightning. We provide an installation helper, which you can run with the following code.

import sys
from packaging import version
from ml4co_kit import EnvInstallHelper


if __name__ == "__main__":
    # Get the Python version
    python_version = sys.version.split()[0]
    
    # Choose a compatible PyTorch version
    if version.parse(python_version) < version.parse("3.12"):
        pytorch_version = "2.1.0"
    elif version.parse(python_version) < version.parse("3.13"):
        pytorch_version = "2.4.0"
    else:
        pytorch_version = "2.7.0"
    
    # Install pytorch environment
    env_install_helper = EnvInstallHelper(pytorch_version=pytorch_version)
    env_install_helper.install()

โš ๏ธ 2025-10-14: While testing the NVIDIA GeForce RTX 50-series GPUs, we have encountered the following error. To fix this issue, we recommend that you upgrade your driver to version 12.8 or later and download the corresponding PyTorch build from the official PyTorch website.

XXX with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.

import os

# Download torch==2.8.0+cu128 from pytorch.org
os.system("pip install torch==2.8.0+cu128 --index-url https://download.pytorch.org/whl/cu128")

# Download the torch-X extensions (scatter, sparse, spline-conv, cluster)
html_link = "https://pytorch-geometric.com/whl/torch-2.8.0+cu128.html"
os.system(f"pip install --no-index torch-scatter -f {html_link}")
os.system(f"pip install --no-index torch-sparse -f {html_link}")
os.system(f"pip install --no-index torch-spline-conv -f {html_link}")
os.system(f"pip install --no-index torch-cluster -f {html_link}")

# wandb (quote the requirement so the shell does not treat ">=" as a redirect)
os.system('pip install "wandb>=0.20.0"')

# pytorch-lightning
os.system("pip install pytorch-lightning==2.5.3")

After the environment is installed, run the following commands to confirm that the PyTorch build supports sm_120.

>>> import torch
>>> print(torch.cuda.get_arch_list())
['sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90', 'sm_100', 'sm_120']

โš ๏ธ 2025-10-21: We find that on macOS, the gurobipy package does not support Python 3.8 or earlier. Therefore, please upgrade your Python to at least 3.9.

๐Ÿ“ ML4CO-Kit Development status

We present the development progress of ML4CO-Kit across the five levels described above.

Graph: MCl & MCut & MIS & MVC; Portfolio: MaxRetPO & MinVarPO & MOPO

✔: Supported; 📆: Planned for future versions (contributions welcome!).

Task (Level 1)
Task Definition Check Constraint Evaluation Render Special R/O
Routing Tasks
Asymmetric TSP (ATSP) ✔ ✔ ✔ 📆 tsplib
Capacitated Vehicle Routing Problem (CVRP) ✔ ✔ ✔ ✔ vrplib
Orienteering Problem (OP) ✔ ✔ ✔ 📆
Prize Collection TSP (PCTSP) ✔ ✔ ✔ 📆
Stochastic PCTSP (SPCTSP) ✔ ✔ ✔ 📆
Traveling Salesman Problem (TSP) ✔ ✔ ✔ ✔ tsplib
Graph Tasks
Maximum Clique (MCl) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Maximum Cut (MCut) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Maximum Independent Set (MIS) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Minimum Vertex Cover (MVC) ✔ ✔ ✔ ✔ gpickle, adj_matrix, networkx, csr
Portfolio Tasks
Maximum Return Portfolio Optimization (MaxRetPO) ✔ ✔ ✔ 📆
Minimum Variance Portfolio Optimization (MinVarPO) ✔ ✔ ✔ 📆
Multi-Objective Portfolio Optimization (MOPO) ✔ ✔ ✔ 📆

Generator (Level 2)
Task Distribution Brief Intro. State
Routing Tasks
ATSP Uniform Random distance matrix with triangle inequality ✔
SAT SAT problem transformed to ATSP ✔
HCP Hamiltonian Cycle Problem transformed to ATSP ✔
CVRP Uniform Random coordinates with uniform distribution ✔
Gaussian Random coordinates with Gaussian distribution ✔
OP Uniform Random prizes with uniform distribution ✔
Constant All prizes are constant ✔
Distance Prizes based on distance from depot ✔
PCTSP Uniform Random prizes with uniform distribution ✔
SPCTSP Uniform Random prizes with uniform distribution ✔
TSP Uniform Random coordinates with uniform distribution ✔
Gaussian Random coordinates with Gaussian distribution ✔
Cluster Coordinates clustered around random centers ✔
Graph Tasks
(Graph) ER (structure) Erdos-Renyi random graph ✔
BA (structure) Barabasi-Albert scale-free graph ✔
HK (structure) Holme-Kim scale-free graph with tunable clustering ✔
WS (structure) Watts-Strogatz small-world graph ✔
RB (structure) RB-Model graph ✔
Uniform (weighted) Weights with Uniform distribution ✔
Gaussian (weighted) Weights with Gaussian distribution ✔
Poisson (weighted) Weights with Poisson distribution ✔
Exponential (weighted) Weights with Exponential distribution ✔
Lognormal (weighted) Weights with Lognormal distribution ✔
Powerlaw (weighted) Weights with Powerlaw distribution ✔
Binomial (weighted) Weights with Binomial distribution ✔
Portfolio Tasks
(Portfolio) GBM Geometric Brownian Motion model ✔
Factor Factor model with k factors and idiosyncratic noise ✔
VAR(1) Vector Autoregressive model of order 1 ✔
MVT Multivariate T distribution model ✔
GARCH GARCH model for volatility clustering ✔
Jump Merton Jump-Diffusion model ✔
Regime Regime-Switching model with multiple states ✔

Solver (Level 3)
Solver Support Task Language Source Ref. / Implementation State
BeamSolver MCl Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
ConcordeSolver TSP C/C++ Concorde PyConcorde ✔
GAEAXSolver TSP C/C++ GA-EAX GA-EAX ✔
GpDegreeSolver MCl Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
GreedySolver ATSP C/C++ ML4CO-Kit ML4CO-Kit ✔
CVRP Python ML4CO-Kit ML4CO-Kit ✔
TSP Cython DIFUSCO DIFUSCO ✔
MCl Python ML4CO-Kit ML4CO-Kit ✔
MCut Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
GurobiSolver ATSP C/C++ Gurobi ML4CO-Kit ✔
CVRP C/C++ Gurobi ML4CO-Kit ✔
OP C/C++ Gurobi ML4CO-Kit ✔
TSP C/C++ Gurobi ML4CO-Kit ✔
MCl C/C++ Gurobi DiffUCO ✔
MCut C/C++ Gurobi DiffUCO ✔
MIS C/C++ Gurobi DiffUCO ✔
MVC C/C++ Gurobi DiffUCO ✔
MaxRetPO C/C++ Gurobi ML4CO-Kit ✔
MinVarPO C/C++ Gurobi ML4CO-Kit ✔
MOPO C/C++ Gurobi ML4CO-Kit ✔
HGSSolver CVRP C/C++ HGS-CVRP HGS-CVRP ✔
ILSSolver PCTSP Python PCTSP PCTSP ✔
SPCTSP Python Attention Attention ✔
InsertionSolver TSP Python GLOP GLOP ✔
KaMISSolver MIS Python KaMIS MIS-Bench ✔
LcDegreeSolver MCl Python ML4CO-Kit ML4CO-Kit ✔
MCut Python ML4CO-Kit ML4CO-Kit ✔
MIS Python ML4CO-Kit ML4CO-Kit ✔
MVC Python ML4CO-Kit ML4CO-Kit ✔
LKHSolver TSP C/C++ LKH ML4CO-Kit ✔
ATSP C/C++ LKH ML4CO-Kit ✔
CVRP C/C++ LKH ML4CO-Kit ✔
MCTSSolver TSP Python Att-GCRN ML4CO-Kit ✔
NeuroLKHSolver TSP Python NeuroLKH ML4CO-Kit ✔
ORSolver ATSP C/C++ OR-Tools ML4CO-Kit ✔
OP C/C++ OR-Tools ML4CO-Kit ✔
PCTSP C/C++ OR-Tools ML4CO-Kit ✔
TSP C/C++ OR-Tools ML4CO-Kit ✔
MCl C/C++ OR-Tools ML4CO-Kit ✔
MIS C/C++ OR-Tools ML4CO-Kit ✔
MVC C/C++ OR-Tools ML4CO-Kit ✔
RLSASolver MCl Python RLSA ML4CO-Kit ✔
MCut Python RLSA ML4CO-Kit ✔
MIS Python RLSA ML4CO-Kit ✔
MVC Python RLSA ML4CO-Kit ✔
SCIPSolver MaxRetPO C/C++ PySCIPOpt ML4CO-Kit ✔
MinVarPO C/C++ PySCIPOpt ML4CO-Kit ✔
MOPO C/C++ PySCIPOpt ML4CO-Kit ✔

Optimizer (Level 4)
Optimizer Support Task Language Source Reference State
CVRPLSOptimizer CVRP C/C++ HGS-CVRP ML4CO-Kit ✔
MCTSOptimizer TSP C/C++ Att-GCRN ML4CO-Kit ✔
RLSAOptimizer MCl Python RLSA ML4CO-Kit ✔
MCut Python RLSA ML4CO-Kit ✔
MIS Python RLSA ML4CO-Kit ✔
MVC Python RLSA ML4CO-Kit ✔
TwoOptOptimizer ATSP C/C++ ML4CO-Kit ML4CO-Kit ✔
TSP Python DIFUSCO ML4CO-Kit ✔

Wrapper (Level 5)
Wrapper TXT Other R&W
Routing Tasks
ATSPWrapper "[dists] output [sol]" tsplib
CVRPWrapper "depots [depots] points [points] demands [demands] capacity [capacity] output [sol]" vrplib
ORWrapper "depots [depots] points [points] prizes [prizes] max_length [max_length] output [sol]"
PCTSPWrapper "depots [depots] points [points] penalties [penalties] prizes [prizes] required_prize [required_prize] output [sol]"
SPCTSPWrapper "depots [depots] points [points] penalties [penalties] expected_prizes [expected_prizes] actual_prizes [actual_prizes] required_prize [required_prize] output [sol]"
TSPWrapper "[points] output [sol]" tsplib
Graph Tasks
(Graph)Wrapper "[edge_index] label [sol]" gpickle
(Graph)Wrapper [weighted] "[edge_index] weights [weights] label [sol]" gpickle
Portfolio Tasks
MaxRetPOWrapper "[returns] cov [cov] max_var [max_var] output [sol]"
MinVarPOWrapper "[returns] cov [cov] required_returns [required_returns] output [sol]"
MOPOWrapper "[returns] cov [cov] var_factor [var_factor] output [sol]"
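
As a rough illustration of the TXT record layouts above, a TSP line stores flattened coordinates, the keyword ``output``, then the tour. The snippet below parses such a line; the sample line is made up, and the exact whitespace and field conventions of real ml4co_kit files may differ.

```python
# Hypothetical TSP record in the "[points] output [sol]" layout from the table above.
line = "0.10 0.20 0.90 0.40 0.50 0.80 output 1 3 2 1"

points_part, sol_part = line.split(" output ")
coords = [float(x) for x in points_part.split()]
points = list(zip(coords[0::2], coords[1::2]))   # (x, y) pairs
tour = [int(t) for t in sol_part.split()]        # node indices of the tour

print(points)   # three (x, y) coordinate pairs
print(tour)     # tour returning to the start node
```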

🔎 How to use ML4CO-Kit

Case-01: How to use ML4CO-Kit to generate a dataset
# We take the TSP as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import TSPWrapper    # The wrapper for TSP, used to manage data and parallel generation.
>>> from ml4co_kit import TSPGenerator  # The generator for TSP, used to generate a single instance.
>>> from ml4co_kit import TSP_TYPE      # The distribution types supported by the generator.
>>> from ml4co_kit import LKHSolver     # We choose LKHSolver to solve TSP instances

# Check which distributions are supported by the TSP types.
>>> for type in TSP_TYPE:
...     print(type)
TSP_TYPE.UNIFORM
TSP_TYPE.GAUSSIAN
TSP_TYPE.CLUSTER

# Set the generator parameters according to the requirements.
>>> tsp_generator = TSPGenerator(
...     distribution_type=TSP_TYPE.GAUSSIAN,   # Generate a TSP instance with a Gaussian distribution
...     precision=np.float32,                  # Floating-point precision: 32-bit
...     nodes_num=50,                          # Number of nodes in TSP instance
...     gaussian_mean_x=0,                     # Mean of Gaussian for x coordinate
...     gaussian_mean_y=0,                     # Mean of Gaussian for y coordinate
...     gaussian_std=1,                        # Standard deviation of Gaussian
... )

# Set the LKH parameters.
>>> tsp_solver = LKHSolver(
...     lkh_scale=1e6,        # Scaling factor to convert floating-point numbers to integers
...     lkh_max_trials=500,   # Maximum number of trials for the LKH algorithm
...     lkh_path="LKH",       # Path to the LKH executable
...     lkh_runs=1,           # Number of runs for the LKH algorithm
...     lkh_seed=1234,        # Random seed for the LKH algorithm
...     lkh_special=False,    # When set to True, disables 2-opt and 3-opt heuristics
... )

# Create the TSP wrapper
>>> tsp_wrapper = TSPWrapper(precision=np.float32)

# Use ``generate_w_to_txt`` to generate a dataset of TSP.
>>> tsp_wrapper.generate_w_to_txt(
...     file_path="tsp_gaussian_16ins.txt",  # Path to the output file where the generated TSP instances will be saved
...     generator=tsp_generator,             # The TSP instance generator to use
...     solver=tsp_solver,                   # The TSP solver to use
...     num_samples=16,                      # Number of TSP instances to generate
...     num_threads=4,                       # CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                        # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     write_per_iters=1,                   # Number of sub-generation steps after which data will be written to the file
...     write_mode="a",                      # Write mode for the output file ("a" for append)
...     show_time=True,                      # Whether to display the time taken for the generation process
... )
Generating TSP: 100%|██████████| 4/4 [00:00<00:00, 12.79it/s]

Case-02: How to use ML4CO-Kit to load problems and solve them
# We take the MIS as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import MISWrapper    # The wrapper for MIS, used to manage data and parallel solving.
>>> from ml4co_kit import KaMISSolver   # We choose KaMISSolver to solve MIS instances

# Set the KaMIS parameters.
>>> mis_solver = KaMISSolver(
...     kamis_time_limit=10.0,          # The maximum solution time for a single problem
...     kamis_weighted_scale=1e5,       # Weight scaling factor, used when nodes have weights.
... )

# Create the MIS wrapper
>>> mis_wrapper = MISWrapper(precision=np.float32)

# Load the problems to be solved.
# You can use the loading function matching the file type,
# such as ``from_txt`` for TXT files and ``from_pickle`` for pickle files.
>>> mis_wrapper.from_txt(
...     file_path="test_dataset/mis/wrapper/mis_rb-small_uniform-weighted_4ins.txt",
...     ref=True,          # TXT file contains labels. Set ``ref=True`` to set them as reference.
...     overwrite=True,    # Whether to overwrite the data. If not, only update according to the file data.
...     show_time=True     # Whether to display the time taken for the loading process
... )
Loading data from test_dataset/mis/wrapper/mis_rb-small_uniform-weighted_4ins.txt: 4it [00:00, 75.41it/s]

# Use ``solve`` to call the KaMISSolver to perform the solution.
>>> mis_wrapper.solve(
...     solver=mis_solver,                   # The solver to use
...     num_threads=2,                       # CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                        # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,                      # Whether to display the time taken for the solving process
... )
Solving MIS Using kamis: 100%|██████████| 2/2 [00:21<00:00, 10.97s/it]
Using Time: 21.947036743164062

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mis_wrapper.evaluate_w_gap()
>>> print(eval_result)
(14.827162742614746, 15.18349838256836, 2.5054726600646973, 2.5342845916748047)
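
For intuition, the shape of this tuple can be reproduced in a few lines: per-instance gaps are computed against the reference values and then aggregated. The numbers below are illustrative, not the actual dataset, and the exact aggregation inside ``evaluate_w_gap`` may differ.

```python
from statistics import mean, pstdev

def gap_percent(sol, ref):
    # MIS is a maximization task, so the gap measures how far
    # the solution falls short of the reference value.
    return (ref - sol) / ref * 100.0

sol_vals = [14.0, 15.5, 14.8, 15.0]   # illustrative solution values
ref_vals = [15.0, 15.5, 15.2, 15.1]   # illustrative reference values
gaps = [gap_percent(s, r) for s, r in zip(sol_vals, ref_vals)]

# (average solution value, average reference value, mean gap %, gap std)
print((mean(sol_vals), mean(ref_vals), mean(gaps), pstdev(gaps)))
```

Note that the mean of the per-instance gaps generally differs from the gap of the two averages.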

Case-03: How to use ML4CO-Kit to visualize the COPs
# We take the CVRP as an example

# Import the required classes.
>>> import numpy as np                  # Numpy
>>> from ml4co_kit import CVRPTask      # CVRP Task. 
>>> from ml4co_kit import CVRPWrapper   # The wrapper for CVRP, used to manage data.

# Case-1: multiple tasks are stored together in one file (``txt``, ``pickle``, etc.) and loaded through the wrapper.
>>> cvrp_wrapper = CVRPWrapper()
>>> cvrp_wrapper.from_pickle("test_dataset/cvrp/wrapper/cvrp50_uniform_16ins.pkl")
>>> cvrp_task = cvrp_wrapper.task_list[0]
>>> print(cvrp_task)
CVRPTask(2fb389cdafdb4e79a94572f01edf0b95)

# Case-2: a single task is stored in a pickle file and loaded through the task itself.
>>> cvrp_task = CVRPTask()
>>> cvrp_task.from_pickle("test_dataset/cvrp/task/cvrp50_uniform_task.pkl")
>>> print(cvrp_task)
CVRPTask(2fb389cdafdb4e79a94572f01edf0b95)

# The loaded solution is stored as a reference solution, while ``render``
# draws ``sol``. Therefore, copy ``ref_sol`` into ``sol`` first.
>>> cvrp_task.sol = cvrp_task.ref_sol

# Using ``render`` to get the visualization
>>> cvrp_task.render(
...     save_path="./docs/assets/cvrp_solution.png",  # Path to save the rendered image
...     with_sol=True,                                # Whether to draw the solution tour
...     figsize=(10, 10),                             # Size of the image (width and height)
...     node_color="darkblue",                        # Color of the nodes
...     edge_color="darkblue",                        # Color of the edges
...     node_size=50                                  # Size of the nodes
... )

Case-04: A simple ML4CO example
# We take the MCut as an example

# Import the required classes.
>>> import numpy as np                   # Numpy
>>> from ml4co_kit import MCutWrapper    # The wrapper for MCut, used to manage data.
>>> from ml4co_kit import GreedySolver   # GreedySolver, based on GNN4CO.
>>> from ml4co_kit import RLSAOptimizer  # Using RLSA to perform local search.
>>> from ml4co_kit.extension.gnn4co import GNN4COModel, GNN4COEnv, GNNEncoder

# Set the GNN4COModel parameters. ``weight_path``: path to the pretrained weights.
# If they are not available locally, they will be downloaded automatically from Hugging Face.
>>> gnn4mcut_model = GNN4COModel(
...     env=GNN4COEnv(
...         task="MCut",              # Task name: MCut.                                 
...         mode="solve",             # Mode: solving mode.
...         sparse_factor=1,          # Sparse factor: Controls the sparsity of the graph.
...         device="cuda"             # Device: 'cuda' or 'cpu'
...     ),
...     encoder=GNNEncoder(
...         task="MCut",              # Task name: MCut.
...         sparse=True,              # Graph data should set ``sparse`` to True.
...         block_layers=[2,4,4,2]    # Block layers: the number of layers in each block of the encoder.
...     ),
...     weight_path="weights/gnn4co_mcut_ba-large_sparse.pt"   
... )
gnn4co/gnn4co_mcut_ba-large_sparse.pt: 100%|███████████████| 19.6M/19.6M [00:03<00:00, 6.18MB/s]

# Set the RLSAOptimizer parameters.
>>> mcut_optimizer = RLSAOptimizer(
...     rlsa_kth_dim="both",          # Which dimension to consider for the k-th value calculation.
...     rlsa_tau=0.01,                # The temperature parameter in the Simulated Annealing process.
...     rlsa_d=2,                     # Control the step size of each update.
...     rlsa_k=1000,                  # The number of samples used in the optimization process.
...     rlsa_t=1000,                  # The number of iterations in the optimization process.
...     rlsa_device="cuda",           # Device: 'cuda' or 'cpu'.
...     rlsa_seed=1234                # The random seed for reproducibility.
... )

# Set the GreedySolver parameters.
>>> mcut_solver_wo_opt = GreedySolver(
...     model=gnn4mcut_model,         # GNN4CO model for MCut
...     device="cuda",                # Device: 'cuda' or 'cpu'.
...     optimizer=None                # The optimizer to perform local search.
... )
>>> mcut_solver_w_opt = GreedySolver(
...     model=gnn4mcut_model,         # GNN4CO model for MCut
...     device="cuda",                # Device: 'cuda' or 'cpu'.
...     optimizer=mcut_optimizer      # The optimizer to perform local search.
... )

# Create the MCut wrapper
>>> mcut_wrapper = MCutWrapper(precision=np.float32)

# Load the problems to be solved.
# You can use the loading function matching the file type,
# such as ``from_txt`` for TXT files and ``from_pickle`` for pickle files.
>>> mcut_wrapper.from_txt(
...     file_path="test_dataset/mcut/wrapper/mcut_ba-large_no-weighted_4ins.txt",
...     ref=True,          # TXT file contains labels. Set ``ref=True`` to set them as reference.
...     overwrite=True,    # Whether to overwrite the data. If not, only update according to the file data.
...     show_time=True     # Whether to display the time taken for the loading process
... )
Loading data from test_dataset/mcut/wrapper/mcut_ba-large_no-weighted_4ins.txt: 4it [00:00, 16.35it/s]

# Using ``solve`` to get the solution (without optimizer)
>>> mcut_wrapper.solve(
...     solver=mcut_solver_wo_opt,    # The solver to use
...     num_threads=1,                # CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                 # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,               # Whether to display the time taken for the solving process
... )
Solving MCut Using greedy: 100%|██████████| 4/4 [00:00<00:00, 12.34it/s]
Using Time: 0.3261079788208008

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mcut_wrapper.evaluate_w_gap()
>>> print(eval_result)
(2647.25, 2726.5, 2.838811523236064, 0.7528157058230817)

# Using ``solve`` to get the solution (with optimizer)
>>> mcut_wrapper.solve(
...     solver=mcut_solver_w_opt,     # The solver to use
...     num_threads=1,                # CPU threads for parallelization (num_threads and batch_size cannot both exceed 1)
...     batch_size=1,                 # Batch size for parallel processing (num_threads and batch_size cannot both exceed 1)
...     show_time=True,               # Whether to display the time taken for the solving process
... )
Solving MCut Using greedy: 100%|██████████| 4/4 [00:02<00:00,  1.46it/s]
Using Time: 2.738525867462158

# Use ``evaluate_w_gap`` to obtain the evaluation results.
# Evaluation Results: average solution value, average reference value, gap (%), gap std.
>>> eval_result = mcut_wrapper.evaluate_w_gap()
>>> print(eval_result)
(2693.0, 2726.5, 1.2373146256952277, 0.29320238806274546)

📈 Our Systematic Benchmark Works

We are systematically building a foundational framework for ML4CO with a collection of resources that complement each other in a cohesive manner.

  • Awesome-ML4CO, a curated collection of literature in the ML4CO field, organized to support researchers in accessing both foundational and recent developments.

  • ML4CO-Kit, a general-purpose toolkit that provides implementations of common algorithms used in ML4CO, along with basic training frameworks, traditional solvers and data generation tools. It aims to simplify the implementation of key techniques and offer a solid base for developing machine learning models for COPs.

  • ML4TSPBench: a benchmark focusing on the TSP for its representativeness. It advances a unified, modular pipeline that incorporates tens of existing techniques in both learning and search for transparent ablation, aiming to reassess the role of learning and to discern which parts of existing techniques are genuinely beneficial and which are not. It offers a deep dive into various methodology designs, enabling comparisons and the development of specialized algorithms.

  • ML4CO-Bench-101: a benchmark that categorizes neural combinatorial optimization (NCO) solvers by solving paradigms, model designs, and learning strategies. It evaluates applicability and generalization of different NCO approaches across a broad range of combinatorial optimization problems to uncover universal insights that can be transferred across various domains of ML4CO.

  • PredictiveCO-Benchmark: a benchmark for decision-focused learning (DFL) approaches on predictive combinatorial optimization problems.

✨ Citation

If you find our code helpful in your research, please cite:

@inproceedings{ma2025mlcobench,
    title={ML4CO-Bench-101: Benchmark Machine Learning for Classic Combinatorial Problems on Graphs},
    author={Jiale Ma and Wenzheng Pan and Yang Li and Junchi Yan},
    booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2025},
    url={https://openreview.net/forum?id=ye4ntB1Kzi}
}

@inproceedings{li2025unify,
  title={Unify ML4TSP: Drawing Methodological Principles for TSP and Beyond from Streamlined Design Space of Learning and Search},
  author={Li, Yang and Ma, Jiale and Pan, Wenzheng and Wang, Runzhong and Geng, Haoyu and Yang, Nianzu and Yan, Junchi},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}

Project details

Download files

Download the file for your platform.

Source Distribution

  • ml4co_kit-0.4.2.tar.gz (4.2 MB)

Built Distributions

  • ml4co_kit-0.4.2-cp313-cp313-manylinux2014_x86_64.whl (9.6 MB, CPython 3.13)
  • ml4co_kit-0.4.2-cp313-cp313-macosx_15_0_universal2.whl (6.9 MB, CPython 3.13, macOS 15.0+ universal2)
  • ml4co_kit-0.4.2-cp312-cp312-manylinux2014_x86_64.whl (9.7 MB, CPython 3.12)
  • ml4co_kit-0.4.2-cp312-cp312-macosx_15_0_universal2.whl (6.9 MB, CPython 3.12, macOS 15.0+ universal2)
  • ml4co_kit-0.4.2-cp311-cp311-manylinux2014_x86_64.whl (9.7 MB, CPython 3.11)
  • ml4co_kit-0.4.2-cp311-cp311-macosx_15_0_universal2.whl (6.9 MB, CPython 3.11, macOS 15.0+ universal2)
  • ml4co_kit-0.4.2-cp310-cp310-manylinux2014_x86_64.whl (9.5 MB, CPython 3.10)
  • ml4co_kit-0.4.2-cp310-cp310-macosx_15_0_universal2.whl (6.8 MB, CPython 3.10, macOS 15.0+ universal2)
  • ml4co_kit-0.4.2-cp39-cp39-manylinux2014_x86_64.whl (9.5 MB, CPython 3.9)
  • ml4co_kit-0.4.2-cp39-cp39-macosx_15_0_universal2.whl (6.8 MB, CPython 3.9, macOS 15.0+ universal2)
  • ml4co_kit-0.4.2-cp38-cp38-manylinux2014_x86_64.whl (9.5 MB, CPython 3.8)

