PyPop7 (Pure-PYthon library of POPulation-based black-box OPtimization)

PyPop7 is a Pure-PYthon library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems (currently under active development). Its main goal is to provide a unified interface and elegant implementations for Derivative-Free Optimization (DFO), particularly population-based optimizers, in order to facilitate research repeatability as well as real-world applications.
More specifically, to alleviate the notorious curse of dimensionality of DFO (which relies on iterative sampling), the primary focus of PyPop7 is to cover State-Of-The-Art (SOTA) implementations for Large-Scale Optimization (LSO), though many other versions and variants are also included (for benchmarking/mixing purposes, and sometimes even for practical use).
How to Use PyPop7
The following three simple steps are enough to utilize the optimization power of PyPop7:

1. Install pypop7 via pip:

$ pip install pypop7

For simplicity, all required dependencies are automatically installed according to setup.cfg.

2. Define your own objective function for the optimization problem at hand.

3. Run one or more black-box optimizers from pypop7 on the given optimization problem:
import numpy as np  # for numerical computation, which is also the computing engine of pypop7

# 2. Define your own objective function for the optimization problem at hand:
#    the below example is Rosenbrock, the notorious test function in the optimization community
def rosenbrock(x):
    return 100 * np.sum(np.power(x[1:] - np.power(x[:-1], 2), 2)) + np.sum(np.power(x[:-1] - 1, 2))

# define the fitness (cost) function and also its settings
ndim_problem = 1000
problem = {'fitness_function': rosenbrock,  # cost function
           'ndim_problem': ndim_problem,  # dimension
           'lower_boundary': -5 * np.ones((ndim_problem,)),  # search boundary
           'upper_boundary': 5 * np.ones((ndim_problem,))}

# 3. Run one or more black-box optimizers from pypop7 on the given optimization problem:
#    here we choose LM-MA-ES owing to its low complexity and metric-learning ability for LSO
from pypop7.optimizers.es.lmmaes import LMMAES

# define all the necessary algorithm options (which differ among different optimizers)
options = {'fitness_threshold': 1e-10,  # terminate when the best-so-far fitness is lower than this threshold
           'max_runtime': 3600,  # 1 hour (terminate when the actual runtime exceeds it)
           'seed_rng': 0,  # seed for random number generation (must be set explicitly for repeatability)
           'x': 4 * np.ones((ndim_problem,)),  # initial mean of search (mutation/sampling) distribution
           'sigma': 0.3,  # initial global step-size of search distribution
           'verbose_frequency': 500}
lmmaes = LMMAES(problem, options)  # initialize the optimizer
results = lmmaes.optimize()  # run its (time-consuming) search process
print(results)
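To see the same black-box protocol in isolation (pure NumPy, no pypop7 needed), the Rosenbrock function above can be probed with a minimal random-search baseline. The helper `random_search` below is our own illustration, not part of the library's API:

```python
import numpy as np

def rosenbrock(x):
    # classic Rosenbrock function: global minimum 0 at x = (1, ..., 1)
    return 100 * np.sum(np.power(x[1:] - np.power(x[:-1], 2), 2)) + np.sum(np.power(x[:-1] - 1, 2))

def random_search(fitness, ndim, lower, upper, max_evals=2000, seed=0):
    # the simplest black-box baseline: sample uniformly inside the box,
    # keeping only the best-so-far solution and its fitness
    rng = np.random.default_rng(seed)
    best_x, best_y = None, np.inf
    for _ in range(max_evals):
        x = rng.uniform(lower, upper, size=(ndim,))
        y = fitness(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_x, best_y = random_search(rosenbrock, ndim=2, lower=-5.0, upper=5.0)
print(best_y)  # best-so-far fitness after 2000 uniform samples
```

The optimizer only ever sees `(x, fitness(x))` pairs; this is exactly the interface that the `fitness_function` entry of `problem` exposes to any pypop7 optimizer.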
Below, demos on a toy 2-dimensional minimization function visually show the interesting/powerful evolutionary search processes of MAES and LMCMAES:

[Animated demos (images on the project page): MA-ES | LM-CMA-ES]
A (Still Growing) List of Publicly Available Gradient-Free Optimizers (GFO)

Each optimizer below is tagged on the project page with one of three markers:
- a marker indicating the specific version for LSO (e.g., dimension >= 1000);
- a marker indicating the competitive (or de facto) version for relatively low-dimensional problems (though it may also work well under certain LSO circumstances);
- a marker indicating the baseline version for benchmarking purposes or for theoretical interest.
- Evolution Strategies (ES) [See e.g. Ollivier et al., 2017, JMLR; Hansen et al., 2015; Bäck et al., 2013; Rudolph, 2012; Beyer&Schwefel, 2002; Rechenberg, 1989; Schwefel, 1984]
  - Mixture Model-based Evolution Strategy (MMES) [See He et al., 2021, TEVC]
  - Limited-Memory Matrix Adaptation Evolution Strategy (LMMAES, LM-MA-ES) [See Loshchilov et al., 2019, TEVC]
  - Limited Memory Covariance Matrix Adaptation (LMCMA, LM-CMA) [See Loshchilov, 2017, ECJ]
  - Limited Memory Covariance Matrix Adaptation Evolution Strategy (LMCMAES, LM-CMA-ES) [See Loshchilov, 2014, GECCO]
  - Rank-m Evolution Strategy with Multiple Evolution Paths (RMES, Rm-ES) [See Li&Zhang, 2018, TEVC]
  - Rank-One Evolution Strategy (R1ES, R1-ES) [See Li&Zhang, 2018, TEVC]
  - Projection-based Covariance Matrix Adaptation (VKDCMA, VkD-CMA) [See Akimoto&Hansen, 2016, GECCO]
  - Linear Covariance Matrix Adaptation (VDCMA, VD-CMA) [See Akimoto et al., 2014, GECCO]
  - Cholesky-CMA-ES-2016 (CCMAES2016) [See Krause et al., 2016, NeurIPS]
  - (1+1)-Active-Cholesky-CMA-ES-2015 (OPOA2015) [See Krause&Igel, 2015, FOGA]
  - (1+1)-Active-Cholesky-CMA-ES (OPOA) [See Arnold&Hansen, 2010, GECCO]
  - Cholesky-CMA-ES (CCMAES) [See Suttorp et al., 2009, MLJ]
  - (1+1)-Cholesky-CMA-ES-2009 (OPOC2009) [See Suttorp et al., 2009, MLJ]
  - (1+1)-Cholesky-CMA-ES (OPOC) [See Igel et al., 2006, GECCO]
  - Separable Covariance Matrix Adaptation Evolution Strategy (SEPCMAES, sep-CMA-ES) [See Bäck et al., 2013; Ros&Hansen, 2008, PPSN]
  - Main Vector Adaptation Evolution Strategy (MVAES, MVA-ES) [See Poland&Zell, 2001, GECCO]
  - Diagonal Decoding Covariance Matrix Adaptation (DDCMA, dd-CMA) [See Akimoto&Hansen, 2019, ECJ]
  - Covariance Matrix Self-Adaptation with Repelling Subpopulations (RSCMSA, RS-CMSA) [See Ahrari et al., 2017, ECJ]
  - Matrix Adaptation Evolution Strategy (MAES, (μ/μ_w,λ)-MA-ES) [See Beyer&Sendhoff, 2017, TEVC]
  - Fast Matrix Adaptation Evolution Strategy (FMAES, Fast-(μ/μ_w,λ)-MA-ES) [See Beyer, 2020, GECCO; Loshchilov et al., 2019, TEVC]
  - Self-Adaptation Evolution Strategy (SAES, (μ/μ_I, λ)-σSA-ES) [See e.g. Beyer, 2020, GECCO; Beyer, 2007, Scholarpedia]
  - Cumulative Step-size Adaptation Evolution Strategy (CSAES, (μ/μ,λ)-ES) [See e.g. Hansen et al., 2015; Ostermeier et al., 1994, PPSN]
  - Derandomized Self-Adaptation Evolution Strategy (DSAES, (1,λ)-σSA-ES) [See e.g. Hansen et al., 2015; Ostermeier et al., 1994, ECJ]
  - Schwefel's Self-Adaptation Evolution Strategy (SSAES, (μ/μ,λ)-σSA-ES) [See e.g. Hansen et al., 2015]
  - Rechenberg's (1+1)-Evolution Strategy with 1/5th success rule (RES) [See e.g. Hansen et al., 2015; Kern et al., 2004; Schumer&Steiglitz, 1968, IEEE-TAC]
- Natural Evolution Strategies (NES) [See e.g. Wierstra et al., 2014, JMLR; Yi et al., 2009, ICML; Wierstra et al., 2008, CEC]
  - Rank-One Natural Evolution Strategy (R1NES) [See Sun et al., 2013, GECCO]
  - Separable Natural Evolution Strategy (SNES) [See Schaul et al., 2011, GECCO]
- Estimation of Distribution Algorithms (EDA) [See e.g. Larrañaga&Lozano, 2002; Pelikan et al., 2002; Mühlenbein&Paaß, 1996, PPSN]
- Cross-Entropy Method (CEM) [See e.g. Rubinstein&Kroese, 2004]
- Particle Swarm Optimization (PSO) [See e.g. Shi&Eberhart, 1998, CEC; Kennedy&Eberhart, 1995, ICNN]
- CoOperative co-Evolutionary Algorithms (COEA) [See e.g. Gomez et al., 2008, JMLR; Panait et al., 2008, JMLR]
  - CoOperative SYnapse NeuroEvolution (COSYNE, CoSyNE) [See Gomez et al., 2008, JMLR]
- Simulated Annealing (SA) [See e.g. Kirkpatrick et al., 1983, Science; Hastings, 1970, Biometrika; Metropolis et al., 1953, JCP]
  - Enhanced Simulated Annealing (ESA) [See Siarry et al., 1997, ACM-TOMS]
  - Corana et al.'s Simulated Annealing (CSA) [See Corana et al., 1987, ACM-TOMS]
- Genetic Algorithms (GA) [See e.g. Forrest, 1993, Science; Holland, 1962, JACM]
- Evolutionary Programming (EP) [See e.g. Yao et al., 1999, TEVC]
- Differential Evolution (DE) [See e.g. Storn&Price, 1997, JGO]
- Direct Search (DS) [See e.g. Wright, 1996; Hooke&Jeeves, 1961, JACM]
  - Nelder-Mead Simplex Method (NelderMead) [See Nelder&Mead, 1965, Computer Journal]
- Random (Stochastic) Search (RS) [See e.g. Rastrigin, 1986; Brooks, 1958, Operations Research]
  - Pure Random Search (PRS) [See e.g. Bergstra and Bengio, 2012, JMLR]
  - Random Hill Climber (RHC) [See e.g. Schaul et al., 2010, JMLR]
  - Annealed Random Hill Climber (ARHC) [See e.g. Schaul et al., 2010, JMLR]
Design Philosophy

- Respect for Beauty (Elegance)

  From the problem-solving perspective, we empirically prefer to choose the best optimizer for the black-box optimization problem at hand. For a new problem, however, the best optimizer is often unknown in advance (without prior knowledge). As a rule of thumb, we need to compare a (often small) set of available/well-known optimizers and choose the best one according to some predefined performance criteria. From the research perspective, however, we like beautiful optimizers, while always keeping the "No Free Lunch" theorem in mind. Typically, the beauty of an optimizer comes from the following features: novelty (e.g., GA/PSO), competitive performance on at least one class of problems (e.g., BO), theoretical insights (e.g., CMA-ES/NES), clarity/simplicity (e.g., CEM/EDA), and repeatability.

  If you find any DFO that meets the above standard, you are welcome to open issues or pull requests. We will consider including it in the pypop7 library. Note that any superficial imitation of the above well-established optimizers ('Old Wine in a New Bottle') will NOT be considered.

- Respect for Diversity

  Given the universality of black-box optimization (BBO) in science and engineering, different research communities have designed different methods, and their number continues to grow. On the one hand, some of these methods may share more or less similarities. On the other hand, they may also show significant differences (w.r.t. motivations / objectives / implementations / practitioners). Therefore, we hope to cover such diversity across research communities such as artificial intelligence (particularly machine learning (evolutionary computation and zeroth-order optimization)), mathematical optimization/programming (particularly global optimization), operations research / management science, automatic control, open-source software, and perhaps others.

- Respect for Originality

  "It is both enjoyable and educational to hear the ideas directly from the creators." (From Hennessy, J.L. and Patterson, D.A., 2019. Computer architecture: A quantitative approach (Sixth Edition). Elsevier.)

  For each optimizer considered here, we expect to give its original/representative reference (including its good implementations/improvements). If you find some important reference missing, please do NOT hesitate to contact us (we will be happy to add it if necessary).

- Respect for Repeatability

  For randomized search, properly controlling randomness is crucial to repeating numerical experiments. Here we follow the Random Sampling suggestions from NumPy. In other words, you must explicitly set the random seed for each optimizer.
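This seeding discipline can be illustrated with plain NumPy (the generator API below is standard NumPy, not a pypop7 internal): two generators built from the same seed reproduce the same sample stream, so any stochastic search driven by them is repeatable bit-for-bit.

```python
import numpy as np

# two independent generators constructed with the same seed ...
rng_a = np.random.default_rng(2022)
rng_b = np.random.default_rng(2022)

# ... produce exactly the same sample stream
draws_a = rng_a.standard_normal(5)
draws_b = rng_b.standard_normal(5)
print(np.array_equal(draws_a, draws_b))  # → True

# a different seed gives a different (but equally repeatable) stream
rng_c = np.random.default_rng(2023)
print(np.array_equal(draws_a, rng_c.standard_normal(5)))  # → False
```

This is why the `seed_rng` option must be set explicitly: it pins down the entire random stream the optimizer consumes.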
Computational Efficiency

For LSO, computational efficiency is an indispensable performance criterion of DFO in the post-Moore era. To achieve high-performance computation as much as possible, NumPy is heavily used in this library as the base of numerical computation, along with SciPy. Sometimes Numba is also utilized to further reduce the wall-clock time.
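As a small illustration of this point (pure NumPy, independent of pypop7), a vectorized Rosenbrock evaluation gives the same result as a naive per-element Python loop while pushing the arithmetic into compiled NumPy kernels:

```python
import numpy as np

def rosenbrock_loop(x):
    # naive per-element Python loop: O(n) interpreter overhead
    total = 0.0
    for i in range(len(x) - 1):
        total += 100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
    return total

def rosenbrock_vectorized(x):
    # the same computation expressed as whole-array NumPy operations
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

x = np.random.default_rng(0).uniform(-5.0, 5.0, size=(1000,))
print(np.isclose(rosenbrock_loop(x), rosenbrock_vectorized(x)))  # → True
```

For the 1000-dimensional problems targeted by LSO, this kind of whole-array formulation is what keeps per-evaluation cost low; Numba's JIT compilation plays a similar role for loops that cannot easily be vectorized.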
References

- https://sites.google.com/view/benchmarking-network
- https://sites.google.com/view/benchmarking-network/home/activities/ppsn-2022-workshop
- Meunier, L., Rakotoarison, H., Wong, P.K., Roziere, B., Rapin, J., Teytaud, O., Moreau, A. and Doerr, C., 2022. Black-box optimization revisited: Improving algorithm selection wizards through massive benchmarking. IEEE Transactions on Evolutionary Computation, 26(3), pp.490-500.
- Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T. and Brockhoff, D., 2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software, 36(1), pp.114-144.
- Auger, A. and Hansen, N., 2021, July. Benchmarking: State-of-the-art and beyond. In Proceedings of Genetic and Evolutionary Computation Conference Companion (pp. 339-340). ACM.
- Varelas, K., El Hara, O.A., Brockhoff, D., Hansen, N., Nguyen, D.M., Tušar, T. and Auger, A., 2020. Benchmarking large-scale continuous optimizers: The bbob-largescale testbed, a COCO software guide and beyond. Applied Soft Computing, 97, p.106737.
- Moré, J.J. and Wild, S.M., 2009. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1), pp.172-191.
- Whitley, D., Rana, S., Dzubera, J. and Mathias, K.E., 1996. Evaluating evolutionary algorithms. Artificial Intelligence, 85(1-2), pp.245-276.
- Moré, J.J., Garbow, B.S. and Hillstrom, K.E., 1981. Testing unconstrained optimization software. ACM Transactions on Mathematical Software, 7(1), pp.17-41.
- Hutter, F., Kotthoff, L. and Vanschoren, J., 2019. Automated machine learning: Methods, systems, challenges. Springer Nature.
- Berahas, A.S., Cao, L., Choromanski, K. and Scheinberg, K., 2022. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, 22(2), pp.507-560.
- Kochenderfer, M.J. and Wheeler, T.A., 2019. Algorithms for optimization. MIT Press.
- Larson, J., Menickelly, M. and Wild, S.M., 2019. Derivative-free optimization methods. Acta Numerica, 28, pp.287-404.
- Fermi, E., 1952. Numerical solution of a minimum problem. Los Alamos Scientific Lab., Los Alamos, NM.
- Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65.
- Akimoto, Y. and Hansen, N., 2022, July. CMA-ES and advanced adaptation mechanisms. In Proceedings of Annual Conference on Genetic and Evolutionary Computation Companion. ACM.
- Hansel, K., Moos, J. and Derstroff, C., 2021. Benchmarking the natural gradient in policy gradient methods and evolution strategies. Reinforcement Learning Algorithms: Analysis and Applications, pp.69-84.
- He, X., Zheng, Z. and Zhou, Y., 2021. MMES: Mixture model-based evolution strategy for large-scale optimization. IEEE Transactions on Evolutionary Computation, 25(2), pp.320-333.
- Li, Z., Lin, X., Zhang, Q. and Liu, H., 2020. Evolution strategies for continuous optimization: A survey of the state-of-the-art. Swarm and Evolutionary Computation, 56, p.100694.
- Choromanski, K., Pacchiano, A., Parker-Holder, J. and Tang, Y., 2019. From complexity to simplicity: Adaptive ES-active subspaces for blackbox optimization. In Advances in Neural Information Processing Systems.
- Liu, G., Zhao, L., Yang, F., Bian, J., Qin, T., Yu, N. and Liu, T.Y., 2019, July. Trust region evolution strategies. In Proceedings of AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 4352-4359).
- Loshchilov, I., Glasmachers, T. and Beyer, H.G., 2019. Large scale black-box optimization by limited-memory matrix adaptation. IEEE Transactions on Evolutionary Computation, 23(2), pp.353-358.
- Varelas, K., Auger, A., Brockhoff, D., Hansen, N., ElHara, O.A., Semet, Y., Kassab, R. and Barbaresco, F., 2018, September. A comparative study of large-scale variants of CMA-ES. In International Conference on Parallel Problem Solving from Nature (pp. 3-15). Springer, Cham.
- Müller, N. and Glasmachers, T., 2018, September. Challenges in high-dimensional reinforcement learning with evolution strategies. In International Conference on Parallel Problem Solving from Nature (pp. 411-423). Springer, Cham.
- Li, Z. and Zhang, Q., 2018. A simple yet efficient evolution strategy for large-scale black-box optimization. IEEE Transactions on Evolutionary Computation, 22(5), pp.637-646.
- Lehman, J., Chen, J., Clune, J. and Stanley, K.O., 2018, July. ES is more than just a traditional finite-difference approximator. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 450-457). ACM.
- Loshchilov, I., 2017. LM-CMA: An alternative to L-BFGS for large-scale black box optimization. Evolutionary Computation, 25(1), pp.143-171.
- Krause, O., Arbonès, D.R. and Igel, C., 2016. CMA-ES with optimal covariance update and storage complexity. In Advances in Neural Information Processing Systems, 29, pp.370-378.
- Akimoto, Y. and Hansen, N., 2016, July. Projection-based restricted covariance matrix adaptation for high dimension. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 197-204). ACM.
- Krause, O. and Igel, C., 2015, January. A more efficient rank-one covariance matrix update for evolution strategies. In Proceedings of ACM Conference on Foundations of Genetic Algorithms (pp. 129-136). ACM.
- Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.
- Loshchilov, I., 2014, July. A computationally efficient limited memory CMA-ES for large scale optimization. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 397-404). ACM.
- Hansen, N., Atamna, A. and Auger, A., 2014, September. How to assess step-size adaptation mechanisms in randomised search. In International Conference on Parallel Problem Solving from Nature (pp. 60-69). Springer, Cham.
- Akimoto, Y., Auger, A. and Hansen, N., 2014, July. Comparison-based natural gradient optimization in high dimension. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 373-380). ACM.
- Hansen, N. and Auger, A., 2014. Principled design of continuous stochastic search: From theory to practice. In Theory and Principled Methods for the Design of Metaheuristics (pp. 145-180). Springer, Berlin, Heidelberg.
- Bäck, T., Foussette, C. and Krause, P., 2013. Contemporary evolution strategies. Berlin: Springer.
- Rudolph, G., 2012. Evolutionary strategies. In Handbook of Natural Computing (pp. 673-698). Springer, Berlin, Heidelberg.
- Akimoto, Y., Nagata, Y., Ono, I. and Kobayashi, S., 2012. Theoretical foundation for CMA-ES from information geometry perspective. Algorithmica, 64(4), pp.698-716.
- Akimoto, Y., 2011. Design of evolutionary computation for continuous optimization. Doctoral Dissertation, Tokyo Institute of Technology.
- Arnold, D.V. and Hansen, N., 2010, July. Active covariance matrix adaptation for the (1+1)-CMA-ES. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 385-392). ACM.
- Heidrich-Meisner, V. and Igel, C., 2009, June. Hoeffding and Bernstein races for selecting policies in evolutionary direct policy search. In Proceedings of International Conference on Machine Learning (pp. 401-408).
- Suttorp, T., Hansen, N. and Igel, C., 2009. Efficient covariance matrix update for variable metric evolution strategies. Machine Learning, 75(2), pp.167-197.
- Heidrich-Meisner, V. and Igel, C., 2008, September. Evolution strategies for direct policy search. In International Conference on Parallel Problem Solving from Nature (pp. 428-437). Springer, Berlin, Heidelberg.
- Arnold, D.V. and MacLeod, A., 2006, July. Hierarchically organised evolution strategies on the parabolic ridge. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 437-444). ACM.
- Igel, C., Suttorp, T. and Hansen, N., 2006, July. A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies. In Proceedings of Annual Conference on Genetic and Evolutionary Computation (pp. 453-460). ACM.
- Hansen, N., Müller, S.D. and Koumoutsakos, P., 2003. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1), pp.1-18.
- Beyer, H.G. and Schwefel, H.P., 2002. Evolution strategies: A comprehensive introduction. Natural Computing, 1(1), pp.3-52.
- Hansen, N. and Ostermeier, A., 2001. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2), pp.159-195.
- Hansen, N. and Ostermeier, A., 1996, May. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of IEEE International Conference on Evolutionary Computation (pp. 312-317). IEEE.
- Rudolph, G., 1992. On correlated mutations in evolution strategies. In International Conference on Parallel Problem Solving from Nature (pp. 105-114).
- Rechenberg, I., 1989. Evolution strategy: Nature's way of optimization. In Optimization: Methods and Applications, Possibilities and Limitations (pp. 106-126). Springer, Berlin, Heidelberg.
- Schwefel, H.P., 1984. Evolution strategies: A family of non-linear optimization techniques based on imitating some principles of organic evolution. Annals of Operations Research, 1(2), pp.165-167.
- Rechenberg, I., 1984. The evolution strategy. A mathematical model of darwinian evolution. In Synergetics: From Microscopic to Macroscopic Order (pp. 122-132). Springer, Berlin, Heidelberg.
- Eiben, A.E. and Smith, J., 2015. From evolutionary computation to the evolution of things. Nature, 521(7553), pp.476-482.
- Miikkulainen, R. and Forrest, S., 2021. A biological perspective on evolutionary computation. Nature Machine Intelligence, 3(1), pp.9-15.
- Beyer, H.G. and Deb, K., 2001. On self-adaptive features in real-parameter evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 5(3), pp.250-270.
- Wolpert, D.H. and Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), pp.67-82.
- Bäck, T. and Schwefel, H.P., 1993. An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation, 1(1), pp.1-23.
- Schaul, T., Bayer, J., Wierstra, D., Sun, Y., Felder, M., Sehnke, F., Rückstieß, T. and Schmidhuber, J., 2010. PyBrain. Journal of Machine Learning Research, 11(24), pp.743-746.
- Schaul, T., 2011. Studies in continuous black-box optimization. Doctoral Dissertation, Technische Universität München.
- De Boer, P.T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y., 2005. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1), pp.19-67.
- Rubinstein, R.Y. and Kroese, D.P., 2004. The cross-entropy method: A unified approach to combinatorial optimization, Monte-Carlo simulation, and machine learning. New York: Springer.
- Bonyadi, M.R. and Michalewicz, Z., 2017. Particle swarm optimization for single objective continuous space problems: A review. Evolutionary Computation, 25(1), pp.1-54.
- Poli, R., Kennedy, J. and Blackwell, T., 2007. Particle swarm optimization. Swarm Intelligence, 1(1), pp.33-57.
- Eberhart, R.C., Shi, Y. and Kennedy, J., 2001. Swarm intelligence. Elsevier.
- Forrest, S., 1993. Genetic algorithms: Principles of natural selection applied to computation. Science, 261(5123), pp.872-878.
Research Support

This open-source Python library for black-box optimization is supported by the Shenzhen Fundamental Research Program under Grant No. JCYJ20200109141235597 (¥2,000,000 from 2021 to 2023), granted to Prof. Yuhui Shi (CSE, SUSTech @ Shenzhen, China), and is actively developed by three of his group members (Qiqi Duan, Chang Shao, and Guochen Zhou). Zhuowei Wang from the University of Technology Sydney (UTS) also takes part in this library as a core developer (for testing).