Moon: A small step of MOO, a big step for humanity
Moon: A Standardized/Flexible Framework for MultiObjective OptimizatioN
"I raise my cup to invite the moon. With my shadow we become three from one." -- Li Bai
Moon is a multiobjective optimization framework that spans single-objective to multiobjective optimization, working towards a better understanding of optimization problems.
Do not release.
Main contributors: Xiaoyuan Zhang, Ji Cheng, Liao Zhang, Weiduo Liao, Xi Lin.
Corresponding author: Prof. Qingfu Zhang.
Advised by: Prof. Yifan Chen, Prof. Zhichao Lu, Prof. Ke Shang, Prof. Tao Qin.
This project has four important parts:

(1) A standardized gradient-based framework.
- Problem class. For more problem details, please also check the Readme_problem.md file.

(1) For synthetic problems:

Problem | Paper | Project/Code |
---|---|---|
ZDT | paper | project |
DTLZ | paper | project |
MAF | paper | project |
WFG | | code |
Real-world problems | | Fi's code |
RE (real-world problems) | paper | code |
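To make the synthetic benchmarks concrete, here is how the ZDT1 objectives are defined; the `zdt1` helper below is an illustrative standalone sketch, not Moon's actual problem API.

```python
import numpy as np

def zdt1(x):
    """Evaluate the two ZDT1 objectives for a decision vector x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])      # g = 1 on the Pareto set (x[1:] = 0)
    f2 = g * (1.0 - np.sqrt(f1 / g))    # Pareto front: f2 = 1 - sqrt(f1)
    return np.array([f1, f2])

# On the Pareto set, x[1:] = 0, so g = 1 and f2 = 1 - sqrt(f1):
# zdt1([0.25, 0, 0, 0]) gives f1 = 0.25, f2 = 0.5.
print(zdt1([0.25, 0.0, 0.0, 0.0]))
```

Any point with nonzero tail variables is dominated, since g > 1 pushes f2 above the front.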
(2) For multitask learning problems:
Problem | Paper | Project/Code |
---|---|---|
MO-MNISTs | PMTL | COSMOS |
Fairness Classification | COSMOS | COSMOS |
Federated Learning | | |
(3) For MORL problems:
Problem | Paper | Project/Code |
---|---|---|
Synthetic (DST, FTS, ...) | Envelope | code |
Robotics (MO-MuJoCo...) | PGMORL | code |
- Gradient-based Solver.

Method | Property | #Obj | Support | Published | Complexity |
---|---|---|---|---|---|
EPO (code) | Exact solutions. | Any | Y | ICML 2020 | $O(m^2 n K)$ |
COSMOS (code) | Approximated exact solutions. | Any | Y | ICDM 2021 | $O(m n K)$ |
MOO-SVGD (code) | A set of diverse Pareto solutions. | Any | Y | NeurIPS 2021 | $O(m^2 n K^2)$ |
MGDA (code) | Arbitrary Pareto solutions; location highly affected by initialization. | Any | Y | NeurIPS 2018 | $O(m^2 n K)$ |
PMTL (code) | Pareto solutions in sectors. | 2 (3 is difficult) | Y | NeurIPS 2019 | $O(m^2 n K^2)$ |
PMGDA | Pareto solutions satisfying any preference. | Any | Y | Under review | $O(m^2 n K)$ |
GradientHV (WangHao code) | A gradient-based HV method. | 2/3 | Y | CEC 2023 | $O(m^2 n K^2)$ |
Aggregation-function based (e.g., Tche, mTche, LS, PBI, ...) | Pareto solutions with aggregation functions. | Any | Y | | |

Here, $m$ is the number of objectives, $K$ is the number of samples, and $n$ is the number of decision variables. For neural-network-based methods, $n$ is the number of parameters and hence very large (>10000); $K$ is also large (e.g., 20-50), while $m$ is small (e.g., 2-4). As a result, an $m^2$ factor is not a big problem, while $n^2$ and $K^2$ factors are.
In terms of running time, MOO-SVGD is the slowest solver; EPO and PMTL are also comparatively expensive.
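For intuition on the $O(m^2 n K)$ terms above: for $m = 2$, the core of MGDA reduces to a closed-form min-norm combination of the two gradients, costing only $O(n)$ per solution. The function below is a standalone sketch of that two-objective case, not Moon's MGDASolver.

```python
import numpy as np

def mgda_direction_2obj(g1, g2):
    """Min-norm point in the convex hull of two gradients (MGDA, m = 2).

    Solves min_{a in [0,1]} ||a*g1 + (1-a)*g2||^2 in closed form; the
    returned d is a common descent direction for both objectives.
    """
    diff = g1 - g2
    denom = np.dot(diff, diff)
    if denom == 0.0:                 # identical gradients: any blend works
        return g1
    a = np.clip(np.dot(g2 - g1, g2) / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2

g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
print(mgda_direction_2obj(g1, g2))   # equal blend: d = [0.5, 0.5]
```

For general $m$, this becomes a quadratic program over the simplex, which is where the $m^2$ factor in the table comes from.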
Current support: GradAggSolver, MGDASolver, EPOSolver, MOOSVGDSolver, GradHVSolver, PMTLSolver.
Important note: the original MOO-SVGD code does not offer an MTL implementation. Our code is the first open-source MTL implementation of MOO-SVGD.
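As a concrete example of an aggregation function (the "Tche" entry used by GradAggSolver-style methods), the weighted Tchebycheff scalarization reduces the objective vector to a single value; the helper name below is illustrative, not Moon's API.

```python
import numpy as np

def tchebycheff(f, lam, z):
    """Weighted Tchebycheff scalarization: max_i lam_i * (f_i - z_i).

    f: objective values, lam: preference weights, z: ideal (utopian) point.
    Minimizing this over x pushes f(x) toward the Pareto front along lam.
    """
    return np.max(np.asarray(lam) * (np.asarray(f) - np.asarray(z)))

f, lam, z = [0.6, 0.2], [0.5, 0.5], [0.0, 0.0]
print(tchebycheff(f, lam, z))   # max(0.5*0.6, 0.5*0.2) = 0.3
```

Unlike linear scalarization (LS), the Tchebycheff function can reach points on non-convex parts of the Pareto front, which is why both appear in the solver table.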
PSL solvers. Current support:
- EPO-based
- Agg-based
- Hypernetwork-based
- ConditionalNet-based
- Simple PSL model
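A "Simple PSL model" from the list above can be sketched as a small network mapping a preference vector on the simplex to a solution vector; this untrained numpy sketch only illustrates the input/output shape of such a model and is not Moon's PSL implementation.

```python
import numpy as np

class SimplePSLModel:
    """Maps an m-dim preference vector to an n-dim solution via one hidden layer."""

    def __init__(self, m, n, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(hidden, m))
        self.W2 = rng.normal(scale=0.5, size=(n, hidden))

    def __call__(self, lam):
        h = np.tanh(self.W1 @ lam)                    # hidden code of the preference
        return 1.0 / (1.0 + np.exp(-(self.W2 @ h)))   # squash into [0, 1]^n

model = SimplePSLModel(m=2, n=5)
x = model(np.array([0.3, 0.7]))   # one preference -> one candidate solution
print(x.shape)                    # (5,)
```

Training such a model amounts to minimizing a scalarization (e.g., Tchebycheff) of the objectives at sampled preferences, so that a single network represents the whole Pareto set.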
MOEA/D. Current support:

- Vanilla MOEA/D
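The vanilla MOEA/D idea, decomposing the problem into $K$ Tchebycheff subproblems that each update within a neighborhood of similar weight vectors, can be sketched as a short loop; `moead_sketch` and its parameters are an illustrative toy for the bi-objective case, not Moon's MOEA/D.

```python
import numpy as np

def moead_sketch(f, n_var, K=10, T=3, iters=50, seed=0):
    """Minimal bi-objective MOEA/D with Tchebycheff decomposition.

    f: function mapping x in [0,1]^n_var to a (2,) objective array.
    K subproblems with uniform weights; each mates within a T-neighborhood.
    """
    rng = np.random.default_rng(seed)
    w = np.linspace(0.01, 0.99, K)
    lam = np.stack([w, 1.0 - w], axis=1)            # K weight vectors
    dist = np.linalg.norm(lam[:, None, :] - lam[None, :, :], axis=-1)
    B = np.argsort(dist, axis=1)[:, :T]             # T nearest neighbors each
    X = rng.random((K, n_var))                      # one solution per subproblem
    F = np.array([f(x) for x in X])
    z = F.min(axis=0)                               # running ideal point
    for _ in range(iters):
        for i in range(K):
            p, q = rng.choice(B[i], size=2, replace=False)
            child = np.clip((X[p] + X[q]) / 2.0 + rng.normal(0.0, 0.1, n_var),
                            0.0, 1.0)
            fc = f(child)
            z = np.minimum(z, fc)
            for j in B[i]:                          # replace neighbors the child improves
                if np.max(lam[j] * (fc - z)) <= np.max(lam[j] * (F[j] - z)):
                    X[j], F[j] = child, fc
    return X, F
```

Each subproblem is a scalar minimization, so the whole loop needs only function values, which is the key contrast with the gradient-based solvers above.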
Will be released soon:

- MOBO
ML-pretrained methods:

- HV Net
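HV Net is a pretrained network that approximates the hypervolume indicator. For reference, the exact quantity it approximates can be computed in the 2-D minimization case by a simple sorted sweep; this standalone function is not HV Net itself.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume of a 2-D minimization front w.r.t. reference point ref."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts <= ref, axis=1)]    # keep points dominating the reference
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]         # sweep by the first objective
    hv, f2_prev = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < f2_prev:                     # skip dominated points
            hv += (ref[0] - f1) * (f2_prev - f2)
            f2_prev = f2
    return hv

front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(hypervolume_2d(front, ref=[1.2, 1.2]))   # 0.24 + 0.35 + 0.10 = 0.69
```

Exact hypervolume is exponential in the number of objectives, which is the motivation for learning a fast approximation such as HV Net.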