
Project description

bioopt

Bio-inspired optimization algorithms for machine learning and scientific computing.

Features

  • Swarm Intelligence Algorithms: PSO, ACO (continuous with KDE), ABC, GWO, FA, WOA
  • Pure NumPy + Numba: No scipy dependency, fast JIT-compiled code
  • PyTorch & TensorFlow Integration: Direct model weight optimization without gradients
  • Unified API: All optimizers share the same interface
  • Extensible: Easy to add new algorithms and categories

Installation

pip install bioopt

For PyTorch support:

pip install bioopt[pytorch]

For TensorFlow support:

pip install bioopt[tensorflow]

For development:

pip install bioopt[dev]

Quick Start

Standalone Usage

import numpy as np
from bioopt.swarm import PSO

def sphere(x):
    """Sphere function: f(x) = sum(x_i^2), global minimum at x = 0."""
    return np.sum(x ** 2)

# Create optimizer
pso = PSO(
    n_agents=30,
    bounds=[(-5.0, 5.0)] * 10,  # 10-dimensional problem
    w=0.7298, c1=1.496, c2=1.496,
    seed=42
)

# Run optimization
best_position, best_fitness = pso.optimize(sphere, iterations=100, verbose=True)
print(f"Best fitness: {best_fitness:.6e}")
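The `w`, `c1`, and `c2` values above are the widely used constriction-coefficient settings (w ≈ 0.7298, c1 = c2 ≈ 1.496). To show what they control, here is a minimal self-contained NumPy sketch of the canonical PSO update on the same sphere problem; bioopt's internal implementation may differ in details such as velocity clamping:

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def mini_pso(n_agents=30, n_dims=10, iters=100,
             w=0.7298, c1=1.496, c2=1.496, seed=42):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_agents, n_dims))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pfit = x.copy(), sphere(x)                # personal bests
    gbest = pbest[np.argmin(pfit)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull (toward pbest) + social pull (toward gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -5.0, 5.0)
        fit = sphere(x)
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pfit)].copy()
    return gbest, float(pfit.min())

best_pos, best_fit = mini_pso()
print(f"best fitness: {best_fit:.3e}")   # approaches 0
```

With these settings the swarm contracts steadily toward the optimum, which is why they are a common default.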

All Available Algorithms

from bioopt.swarm import PSO, ACO, ABC, GWO, FA, WOA

bounds = [(-5.0, 5.0)] * 10  # shared 10-dimensional search space

def objective_fn(x):         # any callable to minimize
    return sum(x ** 2)

algorithms = [
    ("PSO", PSO(n_agents=30, bounds=bounds, seed=42)),
    ("ACO", ACO(n_agents=15, bounds=bounds, seed=42)),  # Continuous KDE-based
    ("ABC", ABC(n_agents=30, bounds=bounds, seed=42)),
    ("GWO", GWO(n_agents=30, bounds=bounds, seed=42)),
    ("FA",  FA(n_agents=30, bounds=bounds, seed=42)),
    ("WOA", WOA(n_agents=30, bounds=bounds, seed=42)),
]

for name, opt in algorithms:
    pos, fit = opt.optimize(objective_fn, iterations=100)
    print(f"{name}: best_fitness = {fit:.6e}")

TensorFlow Integration

import tensorflow as tf
from bioopt.swarm import PSO
from bioopt.adapters.tensorflow import TensorFlowAdapter

# Define a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

adapter = TensorFlowAdapter(model)

# Create optimizer with model parameter bounds
pso = PSO(n_agents=20, bounds=adapter.get_bounds(default_low=-1.0, default_high=1.0))

# Define loss function
def loss_fn(outputs, targets):
    return tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(targets, outputs))

# Random demo batch (replace with real data)
sample_inputs = tf.random.normal((64, 784))
sample_targets = tf.random.uniform((64,), maxval=10, dtype=tf.int32)

# Optimize model weights
best_weights, best_loss = adapter.optimize(
    pso, loss_fn,
    inputs=sample_inputs,
    targets=sample_targets,
    iterations=50,
    verbose=True
)

PyTorch Integration

import torch
import torch.nn as nn
from bioopt.swarm import PSO
from bioopt.adapters.pytorch import PyTorchAdapter

# Define a simple model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
    
    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = SimpleNet()
adapter = PyTorchAdapter(model)

# Create optimizer with model parameter bounds
pso = PSO(n_agents=20, bounds=adapter.get_bounds(default_low=-1.0, default_high=1.0))

# Define loss function
def loss_fn(outputs, targets):
    return nn.functional.cross_entropy(outputs, targets)

# Random demo batch (replace with real data)
sample_inputs = torch.randn(64, 784)
sample_targets = torch.randint(0, 10, (64,))

# Optimize model weights (use a small subset for demo)
best_weights, best_loss = adapter.optimize(
    pso, loss_fn,
    inputs=sample_inputs,
    targets=sample_targets,
    iterations=50,
    verbose=True
)
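Both adapters expose a model to the optimizer the same way: every weight tensor is flattened into one long vector that the swarm searches over, then written back into the model for each fitness evaluation. A minimal sketch of that flatten/restore step, with NumPy arrays standing in for framework tensors (bioopt's actual adapter internals are assumptions, not shown here):

```python
import numpy as np

def flatten_params(tensors):
    """Concatenate every weight array into one flat search vector."""
    return np.concatenate([t.ravel() for t in tensors])

def unflatten_params(flat, shapes):
    """Split a flat vector back into arrays of the original shapes."""
    out, offset = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        out.append(flat[offset:offset + n].reshape(shape))
        offset += n
    return out

# Two "layers": a (3, 2) weight matrix and a (2,) bias
weights = [np.ones((3, 2)), np.zeros(2)]
flat = flatten_params(weights)                               # shape (8,)
restored = unflatten_params(flat, [t.shape for t in weights])
```

Because the optimizer only ever sees this flat vector, no gradients are needed, only forward passes to score each candidate.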

Benchmark Functions

from bioopt.utils import BenchmarkFunctions

# Available benchmarks (all for minimization)
fns = {
    "sphere":     BenchmarkFunctions.sphere,      # min at x = 0
    "rosenbrock": BenchmarkFunctions.rosenbrock,  # min at x = 1
    "rastrigin":  BenchmarkFunctions.rastrigin,   # min at x = 0
    "ackley":     BenchmarkFunctions.ackley,      # min at x = 0
    "griewank":   BenchmarkFunctions.griewank,    # min at x = 0
    "schwefel":   BenchmarkFunctions.schwefel,    # min at x_i = 420.9687
    "levy":       BenchmarkFunctions.levy,        # min at x = 1
}
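For reference, two of these benchmarks in their standard textbook form (BenchmarkFunctions' exact implementations may differ in constants or normalization):

```python
import numpy as np

def rastrigin(x):
    """f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)); global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    """Highly multimodal; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    a, b, c = 20.0, 0.2, 2 * np.pi
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)

print(rastrigin(np.zeros(10)))  # 0.0
print(ackley(np.zeros(10)))     # ~0 (up to floating-point error)
```

Both are highly multimodal, which is what makes them useful stress tests for population-based optimizers.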

API Reference

Base Interface

All optimizers inherit from BaseOptimizer and share this interface:

class BaseOptimizer:
    def __init__(self, n_agents: int, bounds, seed: Optional[int] = None):
        ...
    
    def optimize(
        self,
        objective_fn: Callable,  # Function to minimize
        iterations: int = 100,
        verbose: bool = False,
        callback: Optional[Callable] = None,  # (iter, pos, fit)
        **kwargs
    ) -> Tuple[np.ndarray, float]:
        """Run optimization. Returns (best_position, best_fitness)."""
    
    def reset(self) -> None:
        """Reset optimizer state."""
    
    def get_state(self) -> dict:
        """Get checkpoint state."""
    
    def set_state(self, state: dict) -> None:
        """Restore from checkpoint."""

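The get_state/set_state pair makes long runs resumable. The pattern, sketched with a minimal stub that mimics the interface above (the real state dict's keys are implementation-specific and assumed here):

```python
import pickle

class StubOptimizer:
    """Minimal stand-in exposing the BaseOptimizer checkpoint interface."""
    def __init__(self):
        self.iteration = 0
        self.best_fitness = float("inf")

    def get_state(self) -> dict:
        return {"iteration": self.iteration, "best_fitness": self.best_fitness}

    def set_state(self, state: dict) -> None:
        self.iteration = state["iteration"]
        self.best_fitness = state["best_fitness"]

# mid-run: serialize the state dict (to disk in practice; to bytes here)
opt = StubOptimizer()
opt.iteration, opt.best_fitness = 50, 1e-3
blob = pickle.dumps(opt.get_state())

# later: restore into a fresh instance and continue optimizing
resumed = StubOptimizer()
resumed.set_state(pickle.loads(blob))
print(resumed.iteration)   # 50
```
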
Algorithm-Specific Parameters

Algorithm   Key Parameters                       Defaults
PSO         w, c1, c2, max_velocity              w=0.7298, c1=c2=1.496
ACO         archive_size, q, xi                  q=0.01, xi=0.1
ABC         limit                                limit=n_agents * n_dims
GWO         (none)                               -
FA          alpha, beta_0, gamma, alpha_decay    alpha=0.2, beta_0=1.0, gamma=1.0
WOA         (none)                               -
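To make the FA parameters concrete: in the standard Firefly Algorithm, beta_0 is the attractiveness between two fireflies at zero distance, gamma controls how quickly that attraction falls off with their separation r, and alpha scales a random walk that alpha_decay shrinks each iteration. The attractiveness term, assuming the textbook formula (bioopt's update may vary):

```python
import numpy as np

def attractiveness(r, beta_0=1.0, gamma=1.0):
    """Standard FA attractiveness: beta = beta_0 * exp(-gamma * r**2)."""
    return beta_0 * np.exp(-gamma * r ** 2)

print(attractiveness(0.0))         # 1.0 (full pull at zero distance)
print(attractiveness(2.0) < 0.02)  # True (exp(-4) ≈ 0.0183)
```

Larger gamma therefore localizes the search (fireflies only see close neighbors), while gamma → 0 makes every firefly attract every other.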

Project Structure

bioopt/
├── bioopt/
│   ├── __init__.py          # Package exports
│   ├── base.py              # BaseOptimizer class
│   ├── utils.py             # Utilities, benchmarks
│   ├── swarm/               # Swarm intelligence algorithms
│   │   ├── pso.py           # Particle Swarm Optimization
│   │   ├── aco.py           # Ant Colony Optimization (KDE)
│   │   ├── abc.py           # Artificial Bee Colony
│   │   ├── gwo.py           # Grey Wolf Optimizer
│   │   ├── fa.py            # Firefly Algorithm
│   │   └── woa.py           # Whale Optimization Algorithm
│   └── adapters/            # DL framework adapters
│       ├── pytorch.py       # PyTorch integration
│       └── tensorflow.py    # TensorFlow integration
└── tests/
    └── test_all.py          # Algorithm tests

Future Categories

  • bioopt.evolutionary - Genetic Algorithms, Differential Evolution
  • bioopt.physics - Simulated Annealing, Gravitational Search, Black Hole
  • bioopt.plant - Flower Pollination, Invasive Weed Optimization

Contributing

Contributions welcome! Please open issues or PRs.

License

MIT License

Project details


Download files

Download the file for your platform.

Source Distribution

bioopt-0.1.0.tar.gz (22.6 kB)

Uploaded Source

Built Distribution


bioopt-0.1.0-py3-none-any.whl (25.5 kB)

Uploaded Python 3

File details

Details for the file bioopt-0.1.0.tar.gz.

File metadata

  • Download URL: bioopt-0.1.0.tar.gz
  • Upload date:
  • Size: 22.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for bioopt-0.1.0.tar.gz
Algorithm Hash digest
SHA256 d50d364b937033efa5f8d112d2bddd8f7a22764eb3d0fd1f2ed79410c76f9f9d
MD5 4af79e63b4baec778cc4a16370006119
BLAKE2b-256 dc286437afc1be95993f08df9e8c4e81214c6f7e7f721cf4235d02e9873f1013


Provenance

The following attestation bundles were made for bioopt-0.1.0.tar.gz:

Publisher: python-publish.yml on rehanguha/bioopt

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file bioopt-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: bioopt-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 25.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for bioopt-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 f161ab015260c8d17100064c0e5aa25b095ca4c2d24ec79b200f07d4fd771778
MD5 6dcc6695cc1fa4db19555f1298547a65
BLAKE2b-256 5ecdf34c3133e2daf1a9d065158401af3cf5d1daa01819ebb6b1f262bc599002


Provenance

The following attestation bundles were made for bioopt-0.1.0-py3-none-any.whl:

Publisher: python-publish.yml on rehanguha/bioopt

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
