pypop7 (A Pure-PYthon library of POPulation-based OPtimization)
PyPop7 is a Pure-PYthon library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems (currently under active development). Its goal is to provide a unified interface and elegant implementations for Derivative-Free Optimization (DFO), particularly population-based optimizers, in order to facilitate both research repeatability and real-world applications.
To alleviate the notorious curse of dimensionality of DFO (which relies on iterative sampling), the main focus of PyPop7 is to cover state-of-the-art implementations for Large-Scale Optimization (LSO), though other versions and variants may also be included here (e.g., for benchmarking, for algorithm mixing, and sometimes even for practical purposes).
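To show the unified interface in action, here is a minimal usage sketch that runs one of the LSO-oriented evolution strategies listed below on a 2000-dimensional Rosenbrock function (the module paths, the dictionary-based `problem`/`options` interface, and the result keys follow our documentation; adjust the exact names if your installed version differs):

```python
import numpy as np

from pypop7.benchmarks.base_functions import rosenbrock  # black-box test function
from pypop7.optimizers.es.lmmaes import LMMAES  # an ES designed for LSO

ndim = 2000  # a large-scale (LSO) setting
problem = {'fitness_function': rosenbrock,  # function to be minimized
           'ndim_problem': ndim,
           'lower_boundary': -5.0 * np.ones((ndim,)),
           'upper_boundary': 5.0 * np.ones((ndim,))}
options = {'max_function_evaluations': 1e5,  # stopping criterion
           'seed_rng': 2022,  # explicit seed for repeatability (see below)
           'sigma': 3.0}  # initial global step-size
results = LMMAES(problem, options).optimize()  # run the optimizer
print(results['best_so_far_y'])  # best fitness found under the budget
```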
A (Still Growing) List of Publicly Available Gradient-Free Optimizers (GFO)
Each optimizer below is classified as one of three kinds:

- *Large-scale*: indicates the specific version for LSO (e.g., dimension >= 1000).
- *Competitive*: indicates the competitive (or de facto) version for relatively low-dimensional problems (though it may also work well under certain LSO circumstances).
- *Baseline*: indicates the baseline version for benchmarking purposes or for theoretical interest.
- Evolution Strategies (ES) [See e.g. Ollivier et al., 2017, JMLR; Hansen et al., 2015; Bäck et al., 2013; Rudolph, 2012; Beyer&Schwefel, 2002; Rechenberg, 1989; Schwefel, 1984]
  - Mixture Model-based Evolution Strategy (MMES) [See He et al., 2021, TEVC]
  - Limited-Memory Matrix Adaptation Evolution Strategy (LMMAES, LM-MA-ES) [See Loshchilov et al., 2019, TEVC]
  - Rank-m Evolution Strategy with Multiple Evolution Paths (RMES, Rm-ES) [See Li&Zhang, 2018, TEVC]
  - Rank-One Evolution Strategy (R1ES, R1-ES) [See Li&Zhang, 2018, TEVC]
  - Linear Covariance Matrix Adaptation (VDCMA, VD-CMA) [See Akimoto et al., 2014, GECCO]
  - Separable Covariance Matrix Adaptation Evolution Strategy (SEPCMAES, sep-CMA-ES) [See Bäck et al., 2013; Ros&Hansen, 2008, PPSN]
  - Diagonal Decoding Covariance Matrix Adaptation (DDCMA, dd-CMA) [See Akimoto&Hansen, 2019, ECJ]
  - Covariance Matrix Self-Adaptation with Repelling Subpopulations (RSCMSA, RS-CMSA) [See Ahrari et al., 2017, ECJ]
  - Matrix Adaptation Evolution Strategy (MAES, (μ/μ_w,λ)-MA-ES) [See Beyer&Sendhoff, 2017, TEVC]
  - Fast Matrix Adaptation Evolution Strategy (FMAES, Fast-(μ/μ_w,λ)-MA-ES) [See Beyer, 2020, GECCO; Loshchilov et al., 2019, TEVC]
  - Self-Adaptation Evolution Strategy (SAES, (μ/μ_I,λ)-σSA-ES) [See e.g. Beyer, 2020, GECCO; Beyer, 2007, Scholarpedia]
  - Cumulative Step-size Adaptation Evolution Strategy (CSAES, (μ/μ,λ)-ES) [See e.g. Hansen et al., 2015; Ostermeier et al., 1994, PPSN]
  - Derandomized Self-Adaptation Evolution Strategy (DSAES, (1,λ)-σSA-ES) [See e.g. Hansen et al., 2015; Ostermeier et al., 1994, ECJ]
  - Schwefel's Self-Adaptation Evolution Strategy (SSAES, (μ/μ,λ)-σSA-ES) [See e.g. Hansen et al., 2015]
  - Rechenberg's (1+1)-Evolution Strategy with 1/5th success rule (RES) [See e.g. Hansen et al., 2015; Kern et al., 2004; Schumer&Steiglitz, 1968, IEEE-TAC]
- Natural Evolution Strategies (NES) [See e.g. Wierstra et al., 2014, JMLR; Yi et al., 2009, ICML; Wierstra et al., 2008, CEC]
  - Rank-One Natural Evolution Strategy (R1NES) [See Sun et al., 2013, GECCO]
  - Separable Natural Evolution Strategy (SNES) [See Schaul et al., 2011, GECCO]
- Estimation of Distribution Algorithms (EDA) [See e.g. Larrañaga&Lozano, 2002; Pelikan et al., 2002; Mühlenbein&Paaß, 1996, PPSN]
- Cross-Entropy Method (CEM) [See e.g. Rubinstein&Kroese, 2004]
- Particle Swarm Optimization (PSO) [See e.g. Kennedy&Eberhart, 1995, ICNN]
- CoOperative co-Evolutionary Algorithms (COEA) [See e.g. Gomez et al., 2008, JMLR; Panait et al., 2008, JMLR]
  - CoOperative SYnapse NeuroEvolution (COSYNE, CoSyNE) [See Gomez et al., 2008, JMLR]
- Simulated Annealing (SA) [See e.g. Kirkpatrick et al., 1983, Science; Hastings, 1970, Biometrika; Metropolis et al., 1953, JCP]
  - Enhanced Simulated Annealing (ESA) [See Siarry et al., 1997, ACM-TOMS]
  - Corana et al.'s Simulated Annealing (CSA) [See Corana et al., 1987, ACM-TOMS]
- Genetic Algorithms (GA) [See e.g. Forrest, 1993, Science; Holland, 1962, JACM]
- Evolutionary Programming (EP) [See e.g. Yao et al., 1999, TEVC]
- Differential Evolution (DE) [See e.g. Storn&Price, 1997, JGO]
- Direct Search (DS) [See e.g. Wright, 1996]
  - Nelder-Mead Simplex Method (NelderMead) [See Nelder&Mead, 1965, Computer Journal]
- Random (Stochastic) Search (RS) [See e.g. Rastrigin, 1986; Brooks, 1958, Operations Research]
  - Pure Random Search (PRS) [See e.g. Bergstra&Bengio, 2012, JMLR]
  - Random Hill Climber (RHC) [See e.g. Schaul et al., 2010, JMLR]
  - Annealed Random Hill Climber (ARHC) [See e.g. Schaul et al., 2010, JMLR]
Design Philosophy
Respect for Beauty (Elegance)

From the problem-solving perspective, we prefer to choose the (empirically) best optimizer for the given black-box problem. For a new problem, however, the (empirically) best optimizer is often unknown in advance (absent prior knowledge). As a rule of thumb, we need to compare an (often small) set of the available/known optimizers and choose the best one according to some performance criteria (see the sketch after this subsection). From the research perspective, however, we prefer beautiful optimizers, while always keeping the "No Free Lunch" theorem in mind. Typically, the beauty of an optimizer comes from features such as novelty (e.g., GA and PSO), competitive performance (on at least one class of problems), theoretical insights (e.g., NES and CMA-ES), and clarity/simplicity (ease of understanding and implementation).

If you find a DFO method that meets the above standard, you are welcome to open an issue or pull request, and we will consider including it in the PyPop7 library. Note that any superficial imitation of the above well-established optimizers ("old wine in a new bottle") will NOT be considered.
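A minimal sketch of the rule of thumb above, reusing the dictionary-based interface from the first example (two ES variants stand in for the candidate set, and the module paths follow our naming scheme; a serious comparison would also use multiple independent runs and statistical testing):

```python
import numpy as np

from pypop7.benchmarks.base_functions import sphere
from pypop7.optimizers.es.lmmaes import LMMAES
from pypop7.optimizers.es.maes import MAES

ndim = 100
problem = {'fitness_function': sphere,
           'ndim_problem': ndim,
           'lower_boundary': -10.0 * np.ones((ndim,)),
           'upper_boundary': 10.0 * np.ones((ndim,))}
options = {'max_function_evaluations': 5e4,  # equal budget for a fair comparison
           'seed_rng': 0,  # same seed for all candidates
           'sigma': 3.0}

candidates = {'MAES': MAES, 'LMMAES': LMMAES}
scores = {name: opt(problem, dict(options)).optimize()['best_so_far_y']
          for name, opt in candidates.items()}  # fresh options dict per run
print(min(scores, key=scores.get))  # empirically best on *this* problem only
```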
Respect for Diversity

Given the universality of black-box optimization (BBO) in science and engineering, different research communities have designed different methods, and their number continues to grow. On the one hand, some of these methods may share more or less similarities; on the other hand, they may also show significant differences (w.r.t. motivations / objectives / implementations / practitioners). We therefore hope to cover such diversity across research communities such as artificial intelligence (particularly machine learning, evolutionary computation, and zeroth-order optimization), mathematical optimization/programming (particularly global optimization), operations research / management science, automatic control, open-source software, and perhaps others.
Respect for Originality

"It is both enjoyable and educational to hear the ideas directly from the creators." (From Hennessy, J.L. and Patterson, D.A., 2019. Computer Architecture: A Quantitative Approach (Sixth Edition). Elsevier.)

For each optimizer considered here, we aim to give its original/representative reference (including its good implementations/improvements). If you find an important reference missing, please do not hesitate to contact us; we will be happy to add it if necessary.
Respect for Repeatability

For randomized search, properly controlling randomness is crucial for repeating numerical experiments. Here we follow the Random Sampling suggestions from NumPy; in other words, you must explicitly set the random seed for each optimizer, as the sketch below demonstrates.
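The snippet below makes this concrete with NumPy's recommended Generator-based seeding (pure NumPy; the `seed_rng` option in the earlier sketches plays the same role inside each optimizer):

```python
import numpy as np

# NumPy's recommended pattern: seed an explicit Generator rather than
# relying on the hidden global state behind np.random.seed.
rng_1 = np.random.default_rng(2022)
rng_2 = np.random.default_rng(2022)

# The same seed yields the identical sample stream, which is exactly what
# makes a randomized-search experiment repeatable across runs.
assert np.allclose(rng_1.standard_normal(5), rng_2.standard_normal(5))
```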
References
- Berahas, A.S., Cao, L., Choromanski, K. and Scheinberg, K., 2022. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, 22(2), pp.507-560.
- Larson, J., Menickelly, M. and Wild, S.M., 2019. Derivative-free optimization methods. Acta Numerica, 28, pp.287-404.
- Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65.
- Hansel, K., Moos, J. and Derstroff, C., 2021. Benchmarking the natural gradient in policy gradient methods and evolution strategies. Reinforcement Learning Algorithms: Analysis and Applications, pp.69-84.
- Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.
- Akimoto, Y., Auger, A. and Hansen, N., 2014. Comparison-based natural gradient optimization in high dimension. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 373-380). ACM.
- Hansen, N. and Auger, A., 2014. Principled design of continuous stochastic search: From theory to practice. In Theory and Principled Methods for the Design of Metaheuristics (pp. 145-180). Springer, Berlin, Heidelberg.
- Akimoto, Y., Nagata, Y., Ono, I. and Kobayashi, S., 2012. Theoretical foundation for CMA-ES from information geometry perspective. Algorithmica, 64(4), pp.698-716.
- Akimoto, Y., 2011. Design of evolutionary computation for continuous optimization. Doctoral Dissertation, Tokyo Institute of Technology.
- Schaul, T., Bayer, J., Wierstra, D., Sun, Y., Felder, M., Sehnke, F., Rückstieß, T. and Schmidhuber, J., 2010. PyBrain. Journal of Machine Learning Research, 11(24), pp.743-746.
- Schaul, T., 2011. Studies in continuous black-box optimization. Doctoral Dissertation, Technische Universität München.
Research Support
This open-source Python library for black-box optimization is supported by the Shenzhen Fundamental Research Program under Grant No. JCYJ20200109141235597 (¥2,000,000), granted to Prof. Yuhui Shi (CSE, SUSTech @ Shenzhen, China), and is actively developed (from 2021 to 2023) by his group members (e.g., Qiqi Duan, Chang Shao, Guochen Zhou, and Youkui Zhang).

We also acknowledge the initial discussions and Python coding efforts (dating back to about 2017) with Hao Tong from the research group of Prof. Yao (CSE, SUSTech @ Shenzhen, China).