pyBlindOpt
pyBlindOpt is a library that implements several derivative-free optimization algorithms (including genetic and evolutionary methods).
Currently, it implements thirteen different algorithms:
- Random Search (RS): A baseline optimization method that iteratively generates candidate solutions from the search space according to a specified probability distribution (usually uniform) and records the best solution found. It serves as a benchmark for comparing the performance of more complex algorithms.
- Hill Climbing (HC): A mathematical optimization technique belonging to the family of local search algorithms. It is an iterative method that starts with an arbitrary solution and attempts to find a better one by making incremental changes to the current solution.
- Simulated Annealing (SA): A probabilistic technique for approximating the global optimum of a given function. It is a metaheuristic designed to escape local optima by allowing "uphill" moves (worse solutions) with a probability that decreases over time (simulating the cooling process of metallurgy).
- Genetic Algorithm (GA): A metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). GA generates high-quality solutions by relying on biologically inspired operators such as mutation, crossover, and selection.
- Differential Evolution (DE): A population-based method that optimizes a problem by iteratively improving a candidate solution with regard to a given measure of quality. It makes few to no assumptions about the problem being optimized and is effective for searching very large spaces of candidate solutions.
- Particle Swarm Optimization (PSO): A computational method that optimizes a problem by iteratively improving a candidate solution (particle) with regard to a given measure of quality. Particles move around the search space according to simple mathematical formulas involving their position and velocity. Each particle's movement is guided by its local best-known position and the global best-known position in the search space.
- Grey Wolf Optimization (GWO): A population-based metaheuristic algorithm that simulates the leadership hierarchy (Alpha, Beta, Delta, and Omega) and hunting mechanism of grey wolves in nature.
- Enhanced Grey Wolf Optimization (EGWO): An advanced variant of the standard GWO that incorporates mechanisms to better balance exploration and exploitation. This modification helps prevent the algorithm from stagnating in local optima, improving convergence speed and solution quality in complex landscapes.
- Artificial Bee Colony (ABC): Simulates the foraging behavior of honey bees. The colony consists of employed bees (who exploit food sources), onlooker bees (who select sources based on quality), and scout bees (who find new random sources).
- Firefly Algorithm (FA): Inspired by the flashing behavior of fireflies. Fireflies are attracted to each other based on brightness (fitness), but the attractiveness decreases with distance, simulating light absorption.
- Harris Hawks Optimization (HHO): Mimics the cooperative hunting behavior of Harris' hawks, featuring distinct exploration and exploitation phases (like soft and hard besieges) controlled by the prey's escaping energy.
- Cuckoo Search (CS): Based on the brood parasitism of cuckoos. It uses Lévy flights for global exploration to generate new eggs and simulates nest abandonment to avoid local optima.
- Honey Badger Algorithm (HBA): Mimics the intelligent foraging behavior of honey badgers, switching between a "digging" phase (using smell intensity) and a "honey" phase (following a guide bird).
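The two simplest methods above, Random Search and Hill Climbing, capture the common pattern shared by all of these algorithms: propose candidates, evaluate the objective, keep the best. The following is a generic, self-contained sketch of that pattern (an illustration only, not the library's actual implementation):

```python
import numpy as np

def sphere(x):
    # Classic convex benchmark; global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def random_search(objective, bounds, n_iter=1000, rng=None):
    # Draw candidates uniformly from the bounds and keep the best one.
    rng = rng or np.random.default_rng()
    best_pos, best_score = None, np.inf
    for _ in range(n_iter):
        candidate = rng.uniform(bounds[:, 0], bounds[:, 1])
        score = objective(candidate)
        if score < best_score:
            best_pos, best_score = candidate, score
    return best_pos, best_score

def hill_climbing(objective, bounds, n_iter=1000, step=0.1, rng=None):
    # Start from a random point and accept only improving perturbations.
    rng = rng or np.random.default_rng()
    best_pos = rng.uniform(bounds[:, 0], bounds[:, 1])
    best_score = objective(best_pos)
    for _ in range(n_iter):
        candidate = np.clip(best_pos + rng.normal(0.0, step, best_pos.shape),
                            bounds[:, 0], bounds[:, 1])
        score = objective(candidate)
        if score < best_score:
            best_pos, best_score = candidate, score
    return best_pos, best_score

bounds = np.array([[-5.0, 5.0]] * 2)
pos, score = hill_climbing(sphere, bounds, rng=np.random.default_rng(0))
```

The population-based methods (GA, DE, PSO, GWO, and the rest) follow the same propose-evaluate-select loop, but maintain many candidates at once and use the population itself to guide where new candidates are generated.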
All algorithms take advantage of the joblib library to parallelize objective function evaluations and cache results for improved performance.
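Because every candidate in a population can be evaluated independently, objective evaluations parallelize naturally. The snippet below shows the general joblib pattern (a sketch of the idea, not pyBlindOpt's internal code):

```python
import numpy as np
from joblib import Parallel, delayed

def rastrigin(x):
    # Highly multimodal benchmark; global minimum 0 at the origin.
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

# Evaluate a whole population in parallel, one job per candidate.
rng = np.random.default_rng(42)
population = rng.uniform(-5.12, 5.12, size=(20, 10))
scores = Parallel(n_jobs=2)(delayed(rastrigin)(ind) for ind in population)
```

Parallelism pays off most when the objective function is expensive (e.g. a simulation); for cheap analytic functions the process overhead can outweigh the gain.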
Note: The code has been optimized to a certain degree but was primarily created for educational purposes. Please consider libraries like pymoo or SciPy if you require a production-grade implementation. Regardless, reported issues will be fixed whenever possible.
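For comparison, SciPy ships a production-grade Differential Evolution under `scipy.optimize.differential_evolution`. A minimal equivalent of minimizing the Sphere function looks like this (parameter values here are illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    # Convex benchmark; global minimum 0 at the origin.
    return float(np.sum(np.asarray(x) ** 2))

# SciPy expects bounds as a sequence of (min, max) pairs.
result = differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 2,
                                seed=42, maxiter=100)
```

`result` is a `scipy.optimize.OptimizeResult`; the best position is in `result.x` and the best score in `result.fun`.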
Installation
The library can be installed directly from GitHub by adding the following line to your requirements.txt file:
git+https://github.com/mariolpantunes/pyBlindOpt@main#egg=pyBlindOpt
Alternatively, you can install a specific version from PyPI:
pyBlindOpt>=0.2.0
Examples
Simple Example
This example demonstrates how to run a basic optimization using Simulated Annealing on the Sphere function.
import numpy as np
import pyBlindOpt
# 1. Define the search space (2 Dimensions, range -5.0 to 5.0)
bounds = np.array([[-5.0, 5.0]] * 2)
# 2. Run the optimization
# Usage: pyBlindOpt.simulated_annealing(objective, bounds, ...)
best_pos, best_score = pyBlindOpt.simulated_annealing(
    objective=pyBlindOpt.functions.sphere,
    bounds=bounds,
    n_iter=100,
    verbose=True
)
print(f"Best Position: {best_pos}")
print(f"Best Score: {best_score}")
Advanced Example
This example demonstrates a complex workflow:
- Initializing a reproducible random number generator (RNG).
- Creating a Hyper-Latin Cube Sampler (HLC) bound to that RNG.
- Generating an initial population using Opposition-Based Learning (OBL) combined with the HLC sampler.
- Optimizing using Grey Wolf Optimization (GWO), ensuring the custom population and RNG are passed through.
import numpy as np
import pyBlindOpt
# 1. Setup reproducible RNG
seed = 42
rng = np.random.default_rng(seed)
# 2. Define Problem
bounds = np.array([[-100.0, 100.0]] * 10) # 10 Dimensions
objective = pyBlindOpt.functions.rastrigin
n_pop = 20
# 3. Create Sampler (Hyper-Latin Cube)
sampler = pyBlindOpt.utils.HLCSampler(rng)
# 4. Generate Initial Population using Opposition-Based Learning
# Passing 'sampler' as the population argument tells OBL how to sample the base set
initial_pop = pyBlindOpt.init.opposition_based(
    objective=objective,
    bounds=bounds,
    population=sampler,  # Use HLC for the random part of OBL
    n_pop=n_pop,
    seed=rng
)
# 5. Run GWO with the custom population and shared RNG
best_pos, best_score = pyBlindOpt.grey_wolf_optimization(
    objective=objective,
    bounds=bounds,
    population=initial_pop,  # Pass the OBL-optimized population
    n_iter=200,
    n_pop=n_pop,
    verbose=True,
    seed=rng  # Pass the same RNG to ensure reproducibility of GWO internals
)
print(f"Best Position: {best_pos}")
print(f"Best Score: {best_score}")
Documentation
This library is documented using Google-style docstrings; the full documentation is available online.
To generate the documentation locally, run the following command:
pdoc --math -d google -o docs src/pyBlindOpt
Authors
- Mário Antunes - mariolpantunes
License
This project is licensed under the MIT License - see the LICENSE file for details.
File details
Details for the file pyblindopt-0.2.4.tar.gz.
File metadata
- Download URL: pyblindopt-0.2.4.tar.gz
- Upload date:
- Size: 52.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dedfcedf2a79144a5c9e18f14c27ac2f20d94cc3446af389db6630da6457ab83 |
| MD5 | c9cf3bafad7ff300d14b908cb6dbb2e0 |
| BLAKE2b-256 | a07625d480f776a70c45af2785f9b2c6e5d64b195aed8c63f7cfd2fa21ef6f50 |
File details
Details for the file pyblindopt-0.2.4-py3-none-any.whl.
File metadata
- Download URL: pyblindopt-0.2.4-py3-none-any.whl
- Upload date:
- Size: 54.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a17bc98a2a0d27b8d9d9d6577cae10de3e8949815cbd134624f128b9ded62fe5 |
| MD5 | 60f8ff7706e5723962b0323aa63214f5 |
| BLAKE2b-256 | 93edcd031ee233ee52fa729fb09c0e328e7a310c9164aa6752822d924b01d0dd |