
A collection of optimization algorithms

Project description

lab_optimizer, latest version 1.3.1

an optimization algorithms package

  • Provides :

    1. global_optimizer, including algorithms for finding a global minimum
    2. local_optimizer, including algorithms for finding a local minimum
    3. mloop_optimizer, a general, integrated API to M-LOOP's functions
    4. torch_optimizer (only for torch functions), a general, integrated API to torch's optimizer family
    5. powerful visualization tools
    6. physics constants and units conversion
    7. lab_optimizer examples
    8. flexible custom opt algorithm API and visualization API
  • to install this package, run

pip install lab_optimizer

Quick Start

  • basic setup :
## general form
opt = XXX_optimize(func,paras_init,bounds,args)
x_opt = opt.optimization() # x_opt is the optimization result
opt.visualization()
"""
Args
--------
fun : callable
    The objective function to be minimized.

        ``fun(x, *args) -> dict : {'cost':float, 'uncer':float, 'bad':bool}``
        
    where ``cost`` is the value to minimize, ``uncer`` is its uncertainty,
    and ``bad`` flags whether this evaluation should be treated as bad (``bad = True``).

    If you set val_only = True, you can set ``bad`` and ``uncer`` to anything, since
    they will not be used; the default of val_only is True.

    ``x`` is a 1-D array with shape (n,) and ``args``
    is a tuple of the fixed parameters needed to completely
    specify the function.

paras_init : ndarray, shape (n,)
    Initial guess. Array of real elements of size (n,),
    where ``n`` is the number of independent variables.

args : tuple, optional
    Extra arguments passed to the objective function which will not
    change during optimization
    
bounds : sequence or `Bounds`, optional
    Bounds on variables for Nelder-Mead, L-BFGS-B, TNC, SLSQP, Powell,
    trust-constr, COBYLA, and COBYQA methods. To specify
    the bounds:

        Sequence of ``(min, max)`` pairs for each element in `x`. None is used to specify no bound.

kwArgs
---------
ave_dict : dict
    - ave : Bool
        whether to use averaging
    - ave_times : int
        number of averaging runs
    - ave_wait
        wait time during each averaging run
    - ave_opt
        averaging operation code, default is "ave"
        - "ave" : follow cost_dict
        - "std" : use for val_only functions; uncer will be calculated automatically

    default is {False, X, X, X}
    if you set ave == True, then the default is {True, 3, 0.01, "ave"}

extra_dict : dict
    extra parameters passed to the scipy.optimize.minimize family, such as ``jac``, ``hess``, ...

method : str 
    optimization algorithm to use

delay : float 
    delay of each iteration, default is 0.1s

max_run : int 
    maximum number of optimization runs

msg : Bool
    whether to output messages in every iteration, default is True
    
log : Bool
    whether to generate a log file in folder labopt_logs
    
logfile : str
    log file name, default is "optimization__ + <timestamp>__ + <method>__.txt"
    has lower priority than a logfile inherited via ``opt_inherit``
    
opt_inherit : opt_class 
    inherit ``optimization results``, ``parameters`` and ``logs``
    default is None (do not inherit)
"""

## get example code
from lab_optimizer import examples
examples(opcode = "direct_opt") # direct_opt, inherit_opt, log
  • usage examples :

    • without opt_inherit
    from lab_optimizer import global_optimize
    opt = global_optimize(func,paras_init,bounds,args)
    x_opt = opt.optimization()
    opt.visualization()
    
    • with opt_inherit (cascading multiple optimizers)
    from lab_optimizer import global_optimize
    opt1 = global_optimize(func,paras_init,bounds,args,log = "inherit")
    x_opt1 = opt1.optimization()
    # x_opt1 = opt1.x_optimize ## you can also get the result this way
    opt2 = global_optimize(func,x_opt1,bounds,args,opt_inherit = opt1) # paras_init will be automatically set to x_opt1 
    opt2.optimization()
    opt2.visualization()
    
    • Generic functional interface

    to use another optimization algorithm, you only need to change the opt_class and adjust its arguments slightly

    from lab_optimizer import global_optimize, local_optimize
    opt1 = global_optimize(func,paras_init,bounds,args,log = "inherit")
    x_opt1 = opt1.optimization()
    opt2 = local_optimize(func,x_opt1,bounds,args,opt_inherit = opt1) # just change opt_class from global_opt to local_opt
    opt2.optimization()
    opt2.visualization()
    
  • units module

from lab_optimizer import units
"""
then you can easily use physics constants like :

- planck constant : units.h_const
- speed of light in vacuum : units.c0_const

and do units conversion :  

- freq = 100*units.THz = 1e5*units.GHz = 1e8*units.MHz = ... # freq = 1e14 Hz
- m = 100*units.kg = 1e5*units.g # m = 100[kg]

the units module uses SI base units (kg, m, s)
"""

Documentation

  • local_optimize :

    local_optimize aims at finding a local minimum of a function; the submodule is built on scipy.optimize.minimize and includes all of its supported algorithms (see the sketch after this list for selecting a method) :

    • Nelder-Mead (default)
    • L-BFGS-B
    • Powell
    • ...
  • global_optimize :

    global_optimize aims at finding a global minimum of a function; the submodule is built on scipy.optimize and scikit-opt and includes several powerful algorithms

  • mloop_optimize :

    mloop_optimize inherits all functions of M-LOOP :

    • gaussian_process (default)
    • neural_net
    • differential_evolution
    • Nelder-Mead
    • Random
  • torch_optimize :

    torch_optimize aims at optimizing explicit functions (functions that can be expressed explicitly in your code, rather than coming from experimental results), using the torch-based family of gradient optimization algorithms

    • 'ASGD' (default)
    • 'SGD'
    • 'RMSprop'
    • 'LBFGS' (requires extra parameters in extra_dict)
    • 'Rprop'
    • 'Adadelta'
    • 'Adagrade' (cpu_only)
    • 'Adam'
    • 'NAdam'
    • 'RAdam'
    • 'AdamW'
    • 'Adamax'
  • other built-in functions :

    • local_time : get local time since the epoch, returning (time.time() + time_zone*3600.0)
    • read_log : get flist and x_vec, which record the cost and parameters during the optimization process
    • log_visual : view optimization results from a log
    • opt_random_seed : set random seeds for the numpy and torch modules to ensure reproducibility of experiments
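
As a minimal sketch of selecting an algorithm (referenced from the local_optimize notes above), the method keyword picks the algorithm within an optimizer family; func, paras_init, bounds and args are assumed to be defined as in the Quick Start:

from lab_optimizer import local_optimize

## choose an algorithm from the scipy.optimize.minimize family via ``method``;
## max_run, delay and msg are the kwargs documented in the Quick Start
opt = local_optimize(func, paras_init, bounds, args,
                     method = "L-BFGS-B", max_run = 50, delay = 0.1, msg = True)
x_opt = opt.optimization()
opt.visualization()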

ReleaseNotes

  • 1.1.x

    add advanced visualization tools

  • 1.2.x

    add functions to handle optimization exceptions

    add optimizer extensions, providing a general interface for custom-defined opt algorithms

    add an optimizer test function library

    add an opt algorithms library

  • 1.3.x (in development)

    parallelize figure plotting, which can save considerable time for large-data visualization - finished

    add an agent model for expensive function optimization jobs - todo
