GRASP-ILS-VND with Path Relinking — direction-agnostic metaheuristic optimizer.
givp — GRASP-ILS-VND with Path Relinking
A direction-agnostic, NumPy-native metaheuristic optimizer for continuous, integer or mixed black-box problems. The library bundles:
- GRASP — Greedy Randomized Adaptive Search Procedure
- ILS — Iterated Local Search
- VND — Variable Neighborhood Descent (with an adaptive variant)
- Path Relinking between elite solutions
- LRU evaluation cache, convergence monitor, optional thread-parallel candidate evaluation, and a wall-clock time budget
The public API mirrors scipy.optimize: pass an objective callable, bounds, and
optional configuration; get back an OptimizeResult with x, fun, nit,
nfev, success, message, direction, and meta.
Table of contents
- Install
- Quick start
- Choosing the optimization sense
- Bounds, integer variables and mixed problems
- Object-oriented API and multi-start
- Configuration cookbook
- Inspecting progress (callback and verbose)
- Public API reference
- Glossary of hyper-parameters
- Adapting to a domain-specific model
- Comparison with other optimizers
- Troubleshooting
- License
Install
From PyPI:
pip install givp
From source (editable):
git clone https://github.com/Arnime/grasp_ils_vnd_pr.git
cd grasp_ils_vnd_pr
pip install -e .[dev]
Requires Python 3.10+ and NumPy.
Quick start
import numpy as np
from givp import grasp_ils_vnd_pr
def sphere(x: np.ndarray) -> float:
return float(np.sum(x ** 2))
result = grasp_ils_vnd_pr(sphere, bounds=[(-5.0, 5.0)] * 10)
print(result.x) # best vector found
print(result.fun) # best objective value
print(result.nfev) # number of evaluations performed
Default behavior:
- Minimization (minimize=True / direction="minimize").
- All variables treated as continuous.
- Default hyper-parameters (GraspIlsVndConfig()).
Choosing the optimization sense
The library is agnostic to whether you want the lowest or the highest
value of func. Two equivalent ways to declare it:
Boolean flag (recommended)
from givp import grasp_ils_vnd_pr
def gain(x):
return float((x ** 2).sum()) # higher is better
result = grasp_ils_vnd_pr(gain, [(-5, 5)] * 10, minimize=False)
assert result.direction == "maximize"
String flag (SciPy/Optuna compatible)
result = grasp_ils_vnd_pr(gain, [(-5, 5)] * 10, direction="maximize")
Both flags are accepted on grasp_ils_vnd_pr, on GraspOptimizer and on
GraspIlsVndConfig. Setting both simultaneously is allowed only when they
agree; conflicting values raise ValueError.
Internal note. The core algorithm always minimizes. When you ask for maximization the public API wraps your objective with a sign flip and restores the sign on
result.fun. This means result.fun is always reported in your original sign; no need to negate it back yourself.
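The sign-flip convention described above can be illustrated with a small stand-alone sketch. The helper as_minimization below is hypothetical and only mirrors the idea; it is not part of givp's API:

```python
import numpy as np

def as_minimization(func, minimize=True):
    """Return an objective the core can always minimize,
    plus the sign needed to restore the user's convention."""
    if minimize:
        return func, 1.0
    return (lambda x: -func(x)), -1.0

def gain(x):
    return float(np.sum(x ** 2))  # higher is better

inner, sign = as_minimization(gain, minimize=False)
x = np.ones(3)
print(inner(x))         # the core sees -3.0 and minimizes it
print(sign * inner(x))  # reported value restores the original sign: 3.0
```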
Bounds, integer variables and mixed problems
bounds is accepted in two equivalent forms:
# SciPy style: list of (low, high) per variable
bounds = [(-5.0, 5.0), (0.0, 10.0), (-1.0, 1.0)]
# (lower, upper) tuple of two equally-sized sequences
bounds = ([-5.0, 0.0, -1.0], [5.0, 10.0, 1.0])
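The two forms carry the same information, so converting between them is a one-liner; for instance, SciPy-style pairs transpose into the (lower, upper) form with zip:

```python
# SciPy-style per-variable pairs
pairs = [(-5.0, 5.0), (0.0, 10.0), (-1.0, 1.0)]

# Transpose into two equally-sized sequences
lower, upper = map(list, zip(*pairs))
print(lower)  # [-5.0, 0.0, -1.0]
print(upper)  # [5.0, 10.0, 1.0]
```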
By default every variable is continuous. To declare a mixed problem (some
continuous variables followed by some integer variables in the decision
vector), use integer_split on the configuration:
from givp import GraspIlsVndConfig, grasp_ils_vnd_pr
n_cont, n_int = 12, 8
bounds = [(-5.0, 5.0)] * n_cont + [(0.0, 4.0)] * n_int
cfg = GraspIlsVndConfig(integer_split=n_cont) # indices >= n_cont are integer
result = grasp_ils_vnd_pr(my_objective, bounds, config=cfg)
Special cases:
| integer_split | Meaning |
|---|---|
| None (public API default: num_vars) | All-continuous problem. |
| 0 | All-integer problem. |
| n_vars | All-continuous problem (explicit). |
| k (0 < k < n) | First k continuous, rest integer. |
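The convention in the table can be sketched as a boolean mask over variable indices. This is an illustration of the semantics, not givp internals:

```python
import numpy as np

def integer_mask(n_vars, integer_split=None):
    """True where a variable is integer, per the split convention:
    indices >= integer_split are integer; None behaves like n_vars."""
    split = n_vars if integer_split is None else integer_split
    return np.arange(n_vars) >= split

print(integer_mask(5, 3))     # [False False False  True  True]
print(integer_mask(5, 0))     # all integer
print(integer_mask(5, None))  # all continuous
```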
Object-oriented API and multi-start
When you want to keep configuration around, run the optimizer multiple times
and track the best result automatically, use GraspOptimizer:
from givp import GraspIlsVndConfig, GraspOptimizer
opt = GraspOptimizer(
func=sphere,
bounds=[(-5.0, 5.0)] * 10,
minimize=True,
config=GraspIlsVndConfig(max_iterations=50, time_limit=30.0),
verbose=True,
)
for _ in range(5):
opt.run()
print("best across 5 restarts:", opt.best_fun)
print("history length:", len(opt.history))
opt.best_x and opt.best_fun always reflect the best result observed across
all run() calls, in the user's original sign.
Configuration cookbook
from givp import GraspIlsVndConfig
# 1) Fast triage (small budget, no warm-up)
cfg_fast = GraspIlsVndConfig(
max_iterations=20,
vnd_iterations=50,
ils_iterations=5,
use_elite_pool=False,
use_convergence_monitor=False,
use_cache=True,
)
# 2) Production-quality run with wall-clock budget
cfg_quality = GraspIlsVndConfig(
max_iterations=200,
vnd_iterations=300,
ils_iterations=15,
elite_size=10,
path_relink_frequency=5,
adaptive_alpha=True,
alpha_min=0.05,
alpha_max=0.20,
time_limit=600.0, # stop after 10 minutes
n_workers=4, # parallelize candidate evaluation
)
# 3) Expensive objective: maximize cache reuse, keep evaluations few
cfg_expensive = GraspIlsVndConfig(
num_candidates_per_step=8,
cache_size=50_000,
use_cache=True,
early_stop_threshold=40, # stop earlier on stagnation
)
# 4) Maximization with hourly-shaped layout (3 plants × 24 hours)
cfg_hydro = GraspIlsVndConfig(
minimize=False,
integer_split=72, # first 72 vars continuous, rest integer
max_iterations=120,
time_limit=300.0,
)
Inspecting progress (callback and verbose)
Both grasp_ils_vnd_pr and GraspOptimizer accept:
- verbose=True — prints per-iteration cost and cache statistics.
- iteration_callback=fn — calls fn(iteration_index, best_cost, best_solution) once per outer GRASP iteration. The callback receives the cost in the internal minimization sign (i.e., already sign-flipped if you asked for maximization). Useful to plot convergence or persist intermediate results.
costs = []
def log_iter(i, cost, sol):
costs.append(cost)
result = grasp_ils_vnd_pr(
sphere,
[(-5, 5)] * 10,
iteration_callback=log_iter,
verbose=True,
)
Public API reference
grasp_ils_vnd_pr(...) -> OptimizeResult
grasp_ils_vnd_pr(
func: Callable[[np.ndarray], float],
bounds: Sequence[tuple[float, float]] | tuple[Sequence[float], Sequence[float]],
*,
num_vars: int | None = None,
minimize: bool | None = None,
direction: str | None = None, # 'minimize' or 'maximize'
config: GraspIlsVndConfig | None = None,
initial_guess: Sequence[float] | None = None,
iteration_callback: Callable[[int, float, np.ndarray], None] | None = None,
verbose: bool = False,
) -> OptimizeResult
class GraspOptimizer
Same constructor signature, exposes .run() -> OptimizeResult and tracks
.best_x, .best_fun, .history.
class GraspIlsVndConfig (dataclass)
All hyper-parameters listed in the glossary.
class OptimizeResult
| Field | Type | Meaning |
|---|---|---|
| x | np.ndarray | Best solution vector. |
| fun | float | Objective value at x, in the user's original sign. |
| nit | int | GRASP outer iterations executed. |
| nfev | int | Number of objective evaluations. |
| success | bool | True when at least one feasible solution was produced. |
| message | str | Human-readable termination reason. |
| direction | str | 'minimize' or 'maximize'. |
| meta | dict | Algorithm-specific extras (cache stats, etc.). |
For backward compatibility the result is iterable: x, fun = result works.
Glossary of hyper-parameters
| Field | Default | Meaning |
|---|---|---|
| max_iterations | 100 | GRASP outer iterations. |
| alpha | 0.12 | Initial RCL randomization (0 = greedy, 1 = uniform). |
| vnd_iterations | 200 | Maximum VND inner iterations. |
| ils_iterations | 10 | Iterated Local Search loops per outer iteration. |
| perturbation_strength | 4 | Magnitude of ILS perturbation (number of variables jolted). |
| use_elite_pool | True | Maintain a diverse pool of elite solutions for path relinking. |
| elite_size | 7 | Maximum number of elite solutions kept. |
| path_relink_frequency | 8 | Every N GRASP iterations, run path relinking on elite pairs. |
| adaptive_alpha | True | If True, alpha varies in [alpha_min, alpha_max] over iterations. |
| alpha_min / alpha_max | 0.08 / 0.18 | Bounds for adaptive alpha. |
| num_candidates_per_step | 20 | Candidates evaluated per construction step. |
| use_cache | True | Memoize evaluations via LRU cache. |
| cache_size | 10000 | LRU cache capacity. |
| early_stop_threshold | 80 | Iterations without improvement before terminating. |
| use_convergence_monitor | True | Enable diversification/restart heuristics. |
| n_workers | 1 | Threads used to evaluate candidates concurrently. |
| time_limit | 0.0 | Wall-clock budget in seconds (0 = unlimited). |
| minimize | None | Boolean direction flag. True = minimize, False = maximize. |
| direction | 'minimize' | String direction flag (alternative form). |
| integer_split | None | Index where integer variables begin in the decision vector. |
Adapting to a domain-specific model
The library knows nothing about your problem. Wrap your domain code so it
exposes a func(x: np.ndarray) -> float and a list of bounds. Penalty terms,
repair operators and constraint handling all live in your project.
Minimal pattern:
def make_objective(model):
def f(x):
try:
return float(model.evaluate(x))
except (ValueError, RuntimeError):
return float("inf") # treat infeasibility as worst possible cost
return f
result = grasp_ils_vnd_pr(make_objective(my_model), bounds=my_bounds)
For an end-to-end example with a mixed continuous/integer hydropower model,
see the SOG2 adapter in the upstream project repository
(grasp_ils_vnd_pr.py).
Comparison with other optimizers
| Library | Sense convention | Discrete vars? | Built-in cache | Built-in time budget |
|---|---|---|---|---|
| scipy.optimize.minimize | Always minimize | No | No | No |
| scipy.optimize.differential_evolution | Always minimize | Via integrality (SciPy >= 1.9) | No | Via callback |
| scipy.optimize.dual_annealing | Always minimize | No | No | maxiter only |
| optuna | Explicit (direction) | Yes | Per-trial only | Yes (timeout) |
| pygad | Always maximize | Yes | No | No |
| givp | Explicit (minimize/direction) | Yes (mixed) | LRU cache | Yes (time_limit) |
Troubleshooting
ValueError: each element of upper must be strictly greater than lower
A bounds entry has low >= high. Even fixed values must use an interval of
strictly positive width ((v - 1e-9, v + 1e-9)) or be removed from the search vector.
ValueError: bounds length (...) does not match num_vars (...)
You passed num_vars explicitly but the bounds disagree. Drop num_vars to
let the library infer it from bounds, or fix the mismatch.
ValueError: 'minimize' and 'direction' disagree: ...
You passed both flags with conflicting values. Use one or the other (or pass
both with matching values).
Optimization converges to inf.
Your objective is raising or returning nan. The wrapper coerces non-finite
values to +inf so they are always comparable, but if every candidate is
infeasible the algorithm has nothing to improve. Lower perturbation_strength,
revisit your bounds, or relax the feasibility logic in func.
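One way to diagnose this is to wrap the objective so it counts how often it yields a non-finite value. The guarded helper below is hypothetical; it only mirrors the coercion the library performs:

```python
import math

def guarded(func):
    """Coerce exceptions and non-finite values to +inf, counting them."""
    stats = {"calls": 0, "infeasible": 0}
    def f(x):
        stats["calls"] += 1
        try:
            val = float(func(x))
        except (ValueError, RuntimeError):
            val = math.inf
        if not math.isfinite(val):
            stats["infeasible"] += 1
            val = math.inf
        return val
    return f, stats

f, stats = guarded(lambda x: math.sqrt(x))  # raises ValueError for x < 0
f(4.0); f(-1.0)
print(stats)  # {'calls': 2, 'infeasible': 1}
```

A high infeasible-to-calls ratio after a run suggests the search rarely lands in the feasible region, so relaxing bounds or feasibility logic will help more than tuning the metaheuristic.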
Run is too slow.
Try use_cache=True, increase cache_size, raise n_workers, lower
num_candidates_per_step, or set a time_limit. For very expensive
objectives, also reduce vnd_iterations and ils_iterations.
Final solution looks too "rough" / integer values look noisy.
Make sure integer_split is set correctly. With the default (None /
num_vars) all variables are treated as continuous and the integer-aware
neighborhoods are skipped.
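A quick sanity check on a result vector: with a mixed layout (n_cont below is an assumed split point), the integer tail should contain whole numbers only:

```python
import numpy as np

n_cont = 2                             # assumed continuous/integer split
x = np.array([0.31, -1.74, 2.0, 4.0])  # example result vector

int_tail = x[n_cont:]
is_clean = np.allclose(int_tail, np.round(int_tail))
print(is_clean)  # True when the integer-aware neighborhoods were active
```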
License
MIT