
Benchmarks for model-based optimization.

Project description


Evobench is a collection of benchmark problems dedicated to model-based, large-scale optimization.


This package contains the following problems.


Discrete:

  • trap
  • step trap
  • bimodal
  • step bimodal
  • HIFF
  • Ising Spin Glass


Continuous:

  • trap
  • step trap
  • multimodal
  • step multimodal
  • sawtooth


You can create your own benchmark made of other benchmarks.
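For reference, the classic deceptive trap function over a block of k bits, which underlies the trap problems listed above, can be sketched as follows (this is the textbook definition; Evobench's exact scoring may differ):

```python
def trap(block):
    """Deceptive trap for one block of bits.

    Fitness is k when all k bits are ones; otherwise it is k - 1 - u,
    where u is the number of ones. The all-zeros string is a deceptive
    local optimum that pulls hill climbers away from the global one.
    """
    k = len(block)
    u = sum(block)
    return k if u == k else k - 1 - u

# For a 4-bit block: the global optimum [1, 1, 1, 1] scores 4,
# while the deceptive attractor [0, 0, 0, 0] scores 3.
print(trap([1, 1, 1, 1]))  # 4
print(trap([0, 0, 0, 0]))  # 3
print(trap([1, 0, 0, 0]))  # 2
```

This is what makes trap problems hard for algorithms that do not model variable linkage: improving single bits usually decreases fitness.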

Getting started

pip install evobench
import evobench

trap = evobench.discrete.Trap(blocks=[4, 4, 4])

population = trap.initialize_population(population_size=1000)
fitness = trap.evaluate_population(population)

You can also evaluate a single solution.

fitness = trap.evaluate_solution([0])

Every time you evaluate solutions, the FFE (fitness function evaluation) counter is incremented. A solution is not evaluated again if it hasn't changed. You can access the counter through the benchmark instance.
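The bookkeeping described above amounts to counting evaluations and caching results for unchanged solutions. A minimal sketch of that pattern in plain Python (the class and attribute names here are illustrative, not Evobench's actual API):

```python
class CountingBenchmark:
    """Counts fitness function evaluations (FFE) and caches results
    so that an unchanged solution is never evaluated twice."""

    def __init__(self):
        self.ffe = 0       # number of real fitness evaluations so far
        self._cache = {}   # genome -> cached fitness

    def evaluate_solution(self, solution):
        key = tuple(solution)
        if key not in self._cache:
            self.ffe += 1                      # a real evaluation happens
            self._cache[key] = sum(solution)   # placeholder fitness function
        return self._cache[key]

bench = CountingBenchmark()
bench.evaluate_solution([1, 0, 1])
bench.evaluate_solution([1, 0, 1])  # cache hit: FFE is not incremented
print(bench.ffe)  # 1
```

Tracking FFE rather than wall-clock time is the standard way to compare optimizers fairly, since evaluation cost dominates in real problems.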


Ising Spin Glass

To instantiate ISG, you need to pass a specific problem configuration.

from evobench.discrete import IsingSpinGlass

isg = IsingSpinGlass('IsingSpinGlass_pm_16_0')

You can find 5,000 instances in the evobench/discrete/isg/data folder. Instances vary in length and complexity.

Compound Benchmark

Creating your own compound benchmark is really easy. You just need to define your sub-benchmarks and pass them as a list. All other functions work just the same as with a normal Benchmark.

from evobench import CompoundBenchmark
from evobench import continuous, discrete

benchmark = CompoundBenchmark([
        discrete.Trap(blocks=[5, 2, 4]),
        continuous.Trap(blocks=[3, 6, 4]),
])

population = benchmark.initialize_population(population_size=1000)

How to implement your own function

Fully separable

You need to inherit from the Separable class in evobench.separable, then implement the def evaluate_block(self, block: np.ndarray) -> int method. It's best to follow the evobench.discrete.trap implementation.
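As a sketch of what a fully separable problem looks like, here is a standalone OneMax-style example (it does not inherit Evobench's actual Separable base class; the block-splitting below stands in for what the base class would do for you):

```python
import numpy as np

class OneMax:
    """Hypothetical fully separable benchmark: each block is scored
    independently and the block scores are summed."""

    def __init__(self, blocks):
        self.blocks = blocks  # block sizes, e.g. [4, 4, 4]

    def evaluate_block(self, block: np.ndarray) -> int:
        # Fitness of one block: simply the number of ones.
        return int(block.sum())

    def evaluate_solution(self, genome: np.ndarray) -> float:
        # Split the genome into consecutive blocks and sum the scores.
        fitness, start = 0, 0
        for size in self.blocks:
            fitness += self.evaluate_block(genome[start:start + size])
            start += size
        return float(fitness)

problem = OneMax(blocks=[4, 4, 4])
print(problem.evaluate_solution(np.ones(12)))  # 12.0
```

Because the fitness decomposes over blocks, a model-based optimizer that discovers the block structure can solve each block independently.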


Not separable

Inherit from the Benchmark class in evobench.benchmark, then implement the def _evaluate_solution(self, solution: Solution) -> float method.
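A standalone sketch of the _evaluate_solution contract, using LeadingOnes as a non-separable example (the Solution class here is a minimal stand-in for Evobench's own type):

```python
import numpy as np

class Solution:
    """Minimal stand-in for evobench's Solution type."""
    def __init__(self, genome):
        self.genome = np.asarray(genome)

class LeadingOnes:
    """Hypothetical non-separable benchmark: fitness is the length of
    the prefix of consecutive ones. Every gene's contribution depends
    on all genes before it, so the problem cannot be split into
    independent blocks."""

    def _evaluate_solution(self, solution: Solution) -> float:
        count = 0
        for gene in solution.genome:
            if gene != 1:
                break
            count += 1
        return float(count)

benchmark = LeadingOnes()
print(benchmark._evaluate_solution(Solution([1, 1, 0, 1])))  # 2.0
```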

Linkage quality

Linkage quality metrics are located at evobench.linkage.metrics. Available metrics:

  • fill quality

Coming soon

We'll be adding more problems in the near future. If you're looking for any particular problem, please mail us or open an issue. We're also working on further linkage quality metrics; once they're published, we'll incorporate them into this package.


Files for evobench, version 0.4.2:

  • evobench-0.4.2-py3-none-any.whl (12.9 MB) — Wheel, Python 3
  • evobench-0.4.2.tar.gz (17.2 kB) — Source
