
El0ps: An Exact L0-Problem Solver


el0ps is a Python package providing utilities to handle L0-regularized optimization problems expressed as

$$\textstyle\min_{\mathbf{x} \in \mathbb{R}^{n}} f(\mathbf{Ax}) + \lambda||\mathbf{x}||_0 + h(\mathbf{x})$$

appearing in several applications. These problems minimize a trade-off between a data-fidelity function $f$, composed with a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$, and the L0-norm, which counts the number of non-zero entries in its argument and thereby promotes sparse solutions. The additional penalty function $h$ can be used to enforce other desirable properties on the solutions and is involved in the construction of efficient solution methods.
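To make the objective concrete, here is a minimal NumPy sketch that evaluates it for a least-squares data-fidelity $f(\mathbf{w}) = \tfrac{1}{2}||\mathbf{y} - \mathbf{w}||_2^2$ and an L2-norm penalty $h(\mathbf{x}) = \beta||\mathbf{x}||_2^2$ (the built-in instances used in the examples below); this is an illustrative evaluation only, not el0ps' internal code.

```python
import numpy as np

def l0_objective(x, A, y, lmbd, beta):
    """Evaluate f(Ax) + lmbd * ||x||_0 + h(x) with a least-squares
    datafit f(w) = 0.5 * ||y - w||_2^2 and an L2-norm penalty
    h(x) = beta * ||x||_2^2."""
    datafit = 0.5 * np.sum((y - A @ x) ** 2)
    l0_term = lmbd * np.count_nonzero(x)  # the L0-norm counts non-zeros
    penalty = beta * np.sum(x ** 2)
    return datafit + l0_term + penalty

# A 2-sparse candidate with a perfect fit: only the L0 and L2 terms remain
A = np.eye(3)
y = np.array([1.0, 2.0, 0.0])
x = np.array([1.0, 2.0, 0.0])
print(l0_objective(x, A, y, lmbd=0.5, beta=0.1))  # 0.0 + 0.5*2 + 0.1*5 = 1.5
```

The L0 term makes this objective non-convex and combinatorial, which is why a dedicated Branch-and-Bound solver is needed rather than a generic convex-optimization routine.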

The package includes

  • A flexible framework with built-in problem instances and the possibility to define custom ones,
  • A state-of-the-art solver based on a specialized Branch-and-Bound algorithm,
  • A scikit-learn compatible interface providing linear model estimators based on L0-regularized optimization problems.

Check out the documentation for a starting tour of the package.

Installation

el0ps is available on PyPI and its latest version can be installed as follows:

pip install el0ps

Quick start

el0ps addresses L0-regularized optimization problems of the form introduced above.

Creating and solving problem instances

An instance of an L0-regularized problem can be created and solved in a few lines of code. The following example illustrates how to use the built-in utilities provided by el0ps to instantiate and solve a problem.

from sklearn.datasets import make_regression
from el0ps.datafit import Leastsquares
from el0ps.penalty import L2norm
from el0ps.solver import BnbSolver

# Generate sparse regression data using sklearn
A, y = make_regression(n_samples=30, n_features=50, n_informative=5)

# Instantiate a least-squares loss f(w) = 0.5 * ||y - w||_2^2
datafit = Leastsquares(y)

# Instantiate an L2-norm penalty h(x) = beta * ||x||_2^2
penalty = L2norm(beta=0.1)

# Set the L0-regularization weight
lmbd = 10.0

# Solve the corresponding problem with el0ps' solver
solver = BnbSolver()
result = solver.solve(datafit, penalty, A, lmbd)

# Display the result
print(result)
>>> Result
>>>   Status     : optimal
>>>   Solve time : 0.045835 seconds
>>>   Iter count : 583
>>>   Objective  : 707.432177
>>>   Non-zeros  : 5

Various options can be passed to the BnbSolver class to tune its behavior. The problem solution can be recovered from the result.x attribute. Several other statistics on the solution process are also available in the result object.
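In practice, once the solution vector has been read from result.x, its support can be extracted by treating near-zero entries as exact zeros. A minimal sketch, assuming the solution is a NumPy array (here simulated, since it depends on the solved instance):

```python
import numpy as np

# Simulated solution vector, as it might be read from result.x
x = np.array([0.0, 1.7, 0.0, -2.3, 1e-12])

# Entries below a small tolerance are treated as exact zeros
support = np.flatnonzero(np.abs(x) > 1e-8)
print(support)       # indices of the non-zero coefficients: [1 3]
print(support.size)  # number of non-zeros: 2
```

The tolerance (here 1e-8) guards against tiny numerical residues being counted as non-zeros.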

Fitting regularization paths

el0ps also provides a convenient pipeline to fit regularization paths, that is, to solve an L0-regularized problem over a grid of values of the parameter $\lambda$. Fitting a path with lmbd_num values of this parameter, logarithmically spaced from some lmbd_max down to some lmbd_min, can be done as follows.

from el0ps.path import Path

path = Path(lmbd_max=1e-0, lmbd_min=1e-2, lmbd_num=20)
data = path.fit(solver, datafit, penalty, A)

Once the path is fitted, you can recover various statistics from the data variable, such as the number of non-zeros in the solution, the datafit value, or the solve time. Various other options can be passed to the Path object. One option of interest is lmbd_scaled, which is False by default. When setting lmbd_scaled=True, the values of the parameter $\lambda$ are scaled so that the first solution constructed in the path, obtained when lmbd=lmbd_max, corresponds to the all-zero vector.
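Such a path amounts to solving the problem over a logarithmically spaced grid of $\lambda$ values. The grid construction can be reproduced with NumPy (a sketch of the idea, not el0ps' exact implementation):

```python
import numpy as np

lmbd_max, lmbd_min, lmbd_num = 1e-0, 1e-2, 20

# 20 values of lambda, logarithmically spaced from lmbd_max down to lmbd_min
lmbds = np.logspace(np.log10(lmbd_max), np.log10(lmbd_min), lmbd_num)
print(lmbds[0], lmbds[-1])  # endpoints: 1.0 and 0.01
```

Sweeping from large to small $\lambda$ is the usual convention: large values yield very sparse (eventually all-zero) solutions, and each solution can warm-start the next, smaller value.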

Scikit-Learn estimators

el0ps also provides scikit-learn compatible estimators based on the L0-regularized problem above. They can be used like any other estimator in a scikit-learn pipeline, as follows.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from el0ps.estimator import L0L2Regressor

# Generate sparse regression data
A, y = make_regression(n_informative=5, n_samples=100, n_features=200)

# Split training and testing sets
A_train, A_test, y_train, y_test = train_test_split(A, y)

# Initialize a regressor with L0L2-norm regularization
estimator = L0L2Regressor(lmbd=0.1, beta=1.)

# Fit and score the estimator manually ...
estimator.fit(A_train, y_train)
estimator.score(A_test, y_test)

# ... or in a pipeline
pipeline = Pipeline([('estimator', estimator)])
pipeline.fit(A_train, y_train)
pipeline.score(A_test, y_test)
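Because the estimators follow the scikit-learn API, their hyperparameters can in principle also be tuned with standard tools such as GridSearchCV. The pattern is sketched below with scikit-learn's Ridge as a stand-in estimator so the snippet is self-contained; in practice you would substitute L0L2Regressor and search over its lmbd and beta parameters instead.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic regression data with a sparse ground truth
A, y = make_regression(n_samples=100, n_features=20, n_informative=5, random_state=0)

# Pipeline with a single estimator step (Ridge used as a stand-in here)
pipeline = Pipeline([("estimator", Ridge())])

# Grid over the estimator's regularization weight, addressed with the
# usual "<step>__<param>" naming convention of scikit-learn pipelines
search = GridSearchCV(pipeline, {"estimator__alpha": [0.01, 0.1, 1.0]}, cv=3)
search.fit(A, y)
print(search.best_params_)
```

The "<step>__<param>" convention is what lets the same grid-search machinery reach the parameters of any pipeline step, whichever estimator it wraps.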

As with datafit and penalty functions, you can also build your own estimators.

Contribute

el0ps is still in its early stages of development. Feel free to contribute by reporting any bug on the issue page or by opening a pull request. Any feedback or contribution is welcome. Check out the Contribution page for more information.

Cite

el0ps is distributed under the AGPL-v3 license. Please cite the package as follows:

@inproceedings{guyard2024el0ps,
    title        = {A New Branch-and-Bound Pruning Framework for L0-Regularized Problems},
    author       = {Guyard, Th{\'e}o and Herzet, C{\'e}dric and Elvira, Cl{\'e}ment and Arslan, Ayse-Nur},
    booktitle    = {International Conference on Machine Learning (ICML)},
    year         = {2024},
    organization = {PMLR},
}
