Hardware-accelerated nonlinear optimization in Taichi

Project description

Taichi BFGS (TIBFGS)

This package implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) nonlinear optimization algorithm in Taichi, enabling massively parallel solution of non-convex optimization problems from many initial conditions at once.

Installation

pip install tibfgs

Use

To use tibfgs, define an objective function as a ti.func, for example the Ackley function:

import taichi as ti

@ti.func
def ackley(x: ti.math.vec2) -> ti.f32:
    return (
        -20 * ti.exp(-0.2 * ti.sqrt(0.5 * x.norm_sqr()))
        - ti.exp(
            0.5 * ti.cos(2 * ti.math.pi * x.x) + 0.5 * ti.cos(2 * ti.math.pi * x.y)
        )
        + ti.math.e
        + 20
    )
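As a quick sanity check of the formula (a sketch, not part of the tibfgs API), the same Ackley function can be written with plain NumPy and evaluated at the origin, where its global minimum value is 0:

```python
import numpy as np

def ackley_np(x):
    # Same Ackley formula as the ti.func above, written with NumPy
    x = np.asarray(x, dtype=np.float64)
    return (
        -20 * np.exp(-0.2 * np.sqrt(0.5 * np.sum(x**2)))
        - np.exp(0.5 * np.cos(2 * np.pi * x[0]) + 0.5 * np.cos(2 * np.pi * x[1]))
        + np.e
        + 20
    )

# At the global minimum (0, 0) the -20·exp(0) and +20 terms cancel,
# as do the -exp(1) and +e terms, so the value is ≈ 0
print(ackley_np([0.0, 0.0]))
```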

Initialize \(n\) initial conditions as an \(n \times m\) Numpy array (where \(m\) is the dimension of the problem, in this case \(m=2\)):

import numpy as np

n_particles = int(1e6)
x0 = 4 * np.random.rand(n_particles, 2) - 2

Run the BFGS optimizer in parallel on the GPU with:

df = tibfgs.minimize(
    ackley, x0, gtol=1e-3, eps=1e-5, discard_failures=False
)

This call returns a Polars DataFrame of solutions, with df.head() similar to:

shape: (5, 10)
┌───────┬──────────┬───────────────┬───────────────┬───┬───────┬───────┬────────────────────────┬────────────────────────┐
│ index ┆ fun      ┆ xk            ┆ x0            ┆ … ┆ feval ┆ geval ┆ gradient               ┆ hessian_inverse        │
│ ---   ┆ ---      ┆ ---           ┆ ---           ┆   ┆ ---   ┆ ---   ┆ ---                    ┆ ---                    │
│ u32   ┆ f32      ┆ array[f32, 2] ┆ array[f32, 2] ┆   ┆ i32   ┆ i32   ┆ array[f32, 2]          ┆ array[f32, (2, 2)]     │
╞═══════╪══════════╪═══════════════╪═══════════════╪═══╪═══════╪═══════╪════════════════════════╪════════════════════════╡
│ 0     ┆ 5.381865 ┆ [-0.982348,   ┆ [-0.968771,   ┆ … ┆ 39    ┆ 13    ┆ [0.190735, 0.190735]   ┆ [[0.617119,            │
│       ┆          ┆ 1.964832]     ┆ 1.981125]     ┆   ┆       ┆       ┆                        ┆ -0.479458], [-0.47…    │
│ 1     ┆ 0.000011 ┆ [-5.7371e-7,  ┆ [1.487202,    ┆ … ┆ 147   ┆ 49    ┆ [1.716614, 0.38147]    ┆ [[0.000008, 0.000002], │
│       ┆          ┆ -0.000005]    ┆ -1.334406]    ┆   ┆       ┆       ┆                        ┆ [0.0000…               │
│ 2     ┆ 3.574453 ┆ [-0.968429,   ┆ [-0.806571,   ┆ … ┆ 36    ┆ 12    ┆ [0.0, 0.0]             ┆ [[0.016302, -0.00451], │
│       ┆          ┆ -0.968395]    ┆ -0.820057]    ┆   ┆       ┆       ┆                        ┆ [-0.004…               │
│ 3     ┆ 4.884083 ┆ [0.000768,    ┆ [-0.342866,   ┆ … ┆ 33    ┆ 11    ┆ [0.0, 0.0]             ┆ [[0.020406, 0.000647], │
│       ┆          ┆ -1.959207]    ┆ -1.756427]    ┆   ┆       ┆       ┆                        ┆ [0.0006…               │
│ 4     ┆ 0.000011 ┆ [0.000003,    ┆ [1.534462,    ┆ … ┆ 111   ┆ 37    ┆ [2.861023, 2.861023]   ┆ [[0.000299,            │
│       ┆          ┆ 0.000003]     ┆ 0.979372]     ┆   ┆       ┆       ┆                        ┆ -0.000029], [-0.00…    │
└───────┴──────────┴───────────────┴───────────────┴───┴───────┴───────┴────────────────────────┴────────────────────────┘

The full schema of this DataFrame is:

┌─────────────────┬──────────────────────────────┐
│ Column          ┆ Type                         │
│ ---             ┆ ---                          │
│ str             ┆ object                       │
╞═════════════════╪══════════════════════════════╡
│ index           ┆ UInt32                       │
│ fun             ┆ Float32                      │
│ xk              ┆ Array(Float32, shape=(2,))   │
│ x0              ┆ Array(Float32, shape=(2,))   │
│ message         ┆ String                       │
│ iterations      ┆ UInt16                       │
│ feval           ┆ Int32                        │
│ geval           ┆ Int32                        │
│ gradient        ┆ Array(Float32, shape=(2,))   │
│ hessian_inverse ┆ Array(Float32, shape=(2, 2)) │
└─────────────────┴──────────────────────────────┘

Where:

  • index is the particle initial condition index in x0

  • fun is the objective function value at the converged solution

  • xk is the converged solution

  • x0 is the provided initial guess

  • message describes the convergence state

  • iterations is the number of state update iterations used to reach convergence

  • feval is the number of objective function evaluations (including evaluations during gradient evaluation) over all iterations

  • geval is the number of gradient evaluations across all iterations

  • gradient is the gradient at the converged xk

  • hessian_inverse is the approximate inverse Hessian maintained internally by BFGS, evaluated at the converged xk

Download files

Download the file for your platform.

Source Distribution

tibfgs-0.0.4.tar.gz (12.5 kB view details)

Uploaded Source

File details

Details for the file tibfgs-0.0.4.tar.gz.

File metadata

  • Download URL: tibfgs-0.0.4.tar.gz
  • Upload date:
  • Size: 12.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for tibfgs-0.0.4.tar.gz:

  • SHA256: 7bb85a0f139b1463b25fa0c71c5fda1f575fde00cb60eb060f75fe5ef7845f6f
  • MD5: 26f9444364fbf15fb322b1acc8078f1c
  • BLAKE2b-256: 16f2ba7e04b863df4bc84db532cac6e845466ff3082dc7f5af4bd96160edae3a

