
Nonlinear Optimization Programming (NoNOP)

Project description

NoNOP (Nonlinear Optimization Programming) is a package that contains new machine learning models developed by Kaike Alves to optimize nonlinear functions. The approach employs genetic algorithms (GA) and an expanding-net multilayer perceptron (MLP) model.

Author: Kaike Sa Teles Rocha Alves (PhD)
Email: kaikerochaalves@outlook.com or kaike.alves@estudante.ufjf.br

Description:

NoNOP: A new optimization library for nonlinear functions.

NoNOP (Nonlinear Optimization Programming) is a Python library available on PyPI (https://pypi.org/project/nonop/) that combines a genetic algorithm (GA) with a multilayer perceptron (MLP) to optimize nonlinear functions. The library is designed to handle nonlinear, possibly constrained, objective functions while prioritizing accuracy.

Instructions

To install the library, use the command:

pip install nonop

The library provides the following class:

optimize_function

optimize_function is a class for the optimization of nonlinear functions using meta-heuristic techniques.

To import it, simply type the command:

from nonop.optimization import optimize_function as optim_func

Hyperparameters:

objective_func : callable - The core mathematical model you want to solve. By passing this as an argument, you decouple the 'what' (the math) from the 'how' (the optimization algorithm).

constraints_func : callable, default=None - A function defining the 'boundaries' of your problem. If None, the optimizer assumes an unconstrained search space.

variables_type : str or type, default=float - The data type of the variables being optimized. This determines if the search space is continuous (float) or discrete (int), impacting how mutations and crossovers are calculated.

variables_lower_limit : float or array-like - The minimum allowable value for the variables. It acts as a boundary to ensure the optimizer stays within a feasible or physical range.

variables_upper_limit : float or array-like - The maximum allowable value for the variables. Together with the lower limit, it defines the search space volume.

rho : float, default=0.9 - The coefficient used for moving averages (often in optimizers like RMSProp). It determines how much weight is given to recent gradients versus past history.

lr : float, default=0.01 - The learning rate for the neural network. This scales the step size during gradient descent; too high can cause instability, while too low can result in painfully slow convergence.

lr_decay_rate : float, default=0.95 - The multiplier applied to the learning rate after each epoch. A value of 0.95 means the learning rate shrinks by 5% periodically to help the model settle into a global minimum.
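As a plain-Python illustration (not NoNOP code), the schedule described above multiplies the learning rate by lr_decay_rate after each epoch:

```python
# Hypothetical sketch of the decay schedule described above: the learning
# rate is multiplied by lr_decay_rate once per epoch.
def decayed_lr(lr, decay_rate, epoch):
    """Learning rate after `epoch` decay steps."""
    return lr * decay_rate ** epoch

decayed_lr(0.01, 0.95, 0)   # 0.01   (initial rate)
decayed_lr(0.01, 0.95, 1)   # 0.0095 (5% smaller)
```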

epochs : int, default=10 - The number of complete passes through the training dataset. More epochs allow for more learning but increase the risk of overfitting.

num_generations : int, default=50 - The number of iterations for the Genetic Algorithm. Think of this as the "timeline" of evolution—how many cycles of selection and reproduction will occur.

sol_per_pop : int, default=20 - The population size (number of candidate solutions) in each generation. A larger population increases diversity but requires more computational resources per generation.

maximize : bool, default=True - Determines the direction of optimization. If True, the algorithm seeks the highest possible fitness score; if False, it minimizes a cost function.

n_min_neurons : int, default=1 - The lower bound for the number of neurons in a hidden layer if the architecture is being searched.

n_max_neurons : int, default=128 - The upper bound for the number of neurons in a hidden layer. This prevents the model from becoming too computationally expensive.
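To make the roles of num_generations and sol_per_pop concrete, here is a minimal generic genetic-algorithm loop in plain Python. It is a sketch of the general technique under simple assumptions (one variable, averaging crossover, Gaussian mutation), not NoNOP's internal implementation:

```python
import random

# Minimal GA sketch: num_generations controls how many selection/reproduction
# cycles run; sol_per_pop is the number of candidate solutions per cycle.
def toy_ga(fitness, lower, upper, num_generations=50, sol_per_pop=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lower, upper) for _ in range(sol_per_pop)]
    for _ in range(num_generations):
        pop.sort(key=fitness, reverse=True)      # selection: best first
        parents = pop[: sol_per_pop // 2]        # keep the fitter half
        children = []
        while len(parents) + len(children) < sol_per_pop:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                  # crossover: average two parents
            child += rng.gauss(0, 0.1)           # mutation: small perturbation
            children.append(min(max(child, lower), upper))
        pop = parents + children
    return max(pop, key=fitness)

best = toy_ga(lambda x: -(x - 1.0) ** 2, -5.0, 5.0)  # maximum at x = 1
```

A larger sol_per_pop explores more of the search space per generation; more num_generations gives the population more cycles to converge.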

Example of optimize_function:

from nonop.optimization import optimize_function as optim_func

def objective(x):
    return -(x[0] - 1)**2

model = optim_func(objective_func=objective)
model.optimize_GA()

Example I

import torch
from nonop.optimization import optimize_function as optim_func

def original_objective(x):
    return 2 * x[0] - x[0]**2 + x[1]

def original_constraints(x):
    return (torch.clamp(x[0]**2 + x[1]**2 - 4, min=0)**2 + 
            torch.clamp(x[1] - 1.8, min=0)**2 + 
            torch.clamp(x[0], max=0)**2 + 
            torch.clamp(x[1], max=0)**2)

params = {"objective_func": original_objective,
          "constraints_func": original_constraints,
          "epochs": 1000,
          "rho": 1e9,
          "print_information": False,
          "maximize": True}
model = optim_func(**params)
model.optimize_GA()
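As a sanity check of the constraint function above (plain Python, with max/min standing in for torch.clamp), the penalty is zero at a feasible point and strictly positive when a constraint is violated:

```python
# Plain-Python check of the penalty logic above: max(v, 0) stands in for
# torch.clamp(v, min=0), and min(v, 0) for torch.clamp(v, max=0).
def penalty(x):
    return (max(x[0]**2 + x[1]**2 - 4, 0)**2
            + max(x[1] - 1.8, 0)**2
            + min(x[0], 0)**2
            + min(x[1], 0)**2)

feasible = (1.0, 1.0)    # inside the disk, both coordinates nonnegative
infeasible = (3.0, 3.0)  # violates x0^2 + x1^2 <= 4 and x1 <= 1.8

penalty(feasible)    # 0.0 — no constraint violated
penalty(infeasible)  # > 0 — the quadratic penalty activates
```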

Some comments:

  • The function "original_objective(x)" expresses the equation of the objective function.
  • The function "original_constraints(x)" expresses the constraints of the nonlinear program as a sum of penalty terms. The expression torch.clamp(x[0]**2 + x[1]**2 - 4, min=0)**2 encodes $x_{0}^{2} + x_{1}^{2} \leq 4$. Whenever the inequality is <=, use min=0; when the inequality is >=, use max=0. Squaring the clamped term yields a smooth quadratic penalty, which is well suited to mathematical optimization problems.
  • The default is to maximize; to minimize instead, set the hyperparameter maximize=False.
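The two encodings in the second comment can be sketched in plain Python, where max(g, 0) corresponds to torch.clamp(g, min=0) and min(g, 0) to torch.clamp(g, max=0):

```python
# Penalty encodings for the two inequality directions described above.
def penalty_leq(g):  # encodes g(x) <= 0: positive g is a violation
    return max(g, 0) ** 2

def penalty_geq(g):  # encodes g(x) >= 0: negative g is a violation
    return min(g, 0) ** 2

penalty_leq(-1.0)  # 0.0  — constraint satisfied, no penalty
penalty_leq(2.0)   # 4.0  — violation penalized quadratically
penalty_geq(0.5)   # 0.0
penalty_geq(-0.5)  # 0.25
```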

Example II

See below a more complete example:

import torch
from nonop.optimization import optimize_function as optim_func

def original_objective(x):
    return (x[0] - 2)**2 + (x[1] - 2)**2

def original_constraints(x):
    return (torch.clamp(x[0]**2 + x[1]**2 - 4, min=0)**2 + 
            torch.clamp(x[0]**2 + x[1]**2 - 1, max=0)**2)

params = {"objective_func":original_objective, 
        "constraints_func":original_constraints, 
        "variables_lower_limit":[-3,-3], 
        "variables_upper_limit":[3,3], 
        "epochs":10000, 
        "print_information":False, 
        "maximize":False,
        "num_generations":10, 
        "num_parents_mating":5,
        "sol_per_pop":10,
        "n_min_neurons":50, 
        "n_max_neurons":300, 
        "n_hidden_layers":5, 
        "max_patience":5}
model = optim_func(**params)
model.optimize_GA()
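As a quick analytical check of Example II (plain Python, not NoNOP output): the feasible region is the ring 1 <= x0^2 + x1^2 <= 4, and the point in it closest to (2, 2) lies on the outer circle at (sqrt(2), sqrt(2)), where the penalty vanishes:

```python
import math

def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

def penalty(x):
    # max/min stand in for torch.clamp(..., min=0) and torch.clamp(..., max=0)
    r2 = x[0] ** 2 + x[1] ** 2
    return max(r2 - 4, 0) ** 2 + min(r2 - 1, 0) ** 2

x_star = (math.sqrt(2), math.sqrt(2))
penalty(x_star)    # 0.0 — on the outer circle, feasible
objective(x_star)  # 12 - 8*sqrt(2), approximately 0.686
```

A solver that reports a best solution near (1.414, 1.414) with objective near 0.686 is therefore behaving as expected on this problem.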

Extra information

Code of Conduct:

NoNOP is a library developed by Kaike Alves. Please read the Code of Conduct for guidance.

Call for Contributions:

The project welcomes your expertise and enthusiasm!

Small improvements or fixes are always appreciated. If you are considering larger contributions to the source code, please get in touch by email first.

Project details

Download files

Source Distribution: nonop-0.0.1.tar.gz (24.3 kB)

Built Distribution: nonop-0.0.1-py3-none-any.whl (25.4 kB)

File details

nonop-0.0.1.tar.gz

  • Size: 24.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

Hashes for nonop-0.0.1.tar.gz

  • SHA256: 0db2f344d1d870fcbd38c78ee93e6fc10142831c6c3c3477dcf0227cb04a6ea0
  • MD5: ac7ddccfb6a1205dc53c895bd05385c6
  • BLAKE2b-256: b9048e550cedf3356dbd04f55d30c60dab1b47264ad09aabe174a1c212b64420

nonop-0.0.1-py3-none-any.whl

  • Size: 25.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

Hashes for nonop-0.0.1-py3-none-any.whl

  • SHA256: 75fe90c4bbc8f47d008f6d6fb95c93d2f8654ea029a1c7c0329f86c32a3d8c05
  • MD5: 2cb66ef9befe1c3c0f4dde1a8fd4dd7c
  • BLAKE2b-256: a30c8a0e37c89db7c1e2e39ded716852a5ed3ca4b5bb4de7822210e7086b9f6b
