Fast group lasso regularised linear models in a scikit-learn-style API.
Project description
The group lasso [1] regulariser is a well-known method to achieve structured sparsity in machine learning and statistics. The idea is to create non-overlapping groups of covariates and recover regression weights in which only a sparse set of these covariate groups have non-zero components.
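Concretely, with the covariates partitioned into groups g ∈ G, the standard formulation from [1] minimises a penalised least-squares loss (a sketch of the usual convention; the group weights vary between implementations):

    \min_{\beta} \; \frac{1}{2}\,\lVert y - X\beta \rVert_2^2
        + \lambda \sum_{g \in \mathcal{G}} \sqrt{d_g}\, \lVert \beta_g \rVert_2

Here β_g is the block of coefficients belonging to group g and d_g is the size of that group. Because the ℓ2 norm of each block is penalised without being squared, the solution sets whole blocks exactly to zero.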
There are several reasons why this might be a good idea. Say, for example, that we have a set of sensors, each of which generates five measurements. We don’t want to maintain an unnecessary number of sensors. If we use ordinary LASSO regression, we will get sparse regression weights. However, these sparse weights might not correspond to a sparse set of sensors, since each sensor generates five measurements. If we instead use group LASSO with the measurements grouped by the sensor that generated them, we will recover a sparse set of sensors.
An extension of the group lasso regulariser is the sparse group lasso regulariser [2], which imposes both group-wise sparsity and coefficient-wise sparsity. This is done by combining the group lasso penalty with the traditional lasso penalty. In this library, I have implemented an efficient sparse group lasso solver that is fully scikit-learn API compliant.
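As a minimal usage sketch for the sensor scenario above (the GroupLasso estimator name and the groups, group_reg and l1_reg parameters are taken from my reading of the current documentation and may differ between versions; the scikit-learn fit/predict interface is the part the text above guarantees):

    import numpy as np
    from group_lasso import GroupLasso

    rng = np.random.default_rng(0)

    # Ten sensors with five measurements each -> 50 columns.
    # groups[i] is the index of the sensor that produced column i.
    groups = np.repeat(np.arange(10), 5)
    X = rng.standard_normal((1000, 50))

    # Only the first three sensors actually influence the target.
    coef = np.zeros((50, 1))
    coef[:15] = rng.standard_normal((15, 1))
    y = X @ coef + 0.1 * rng.standard_normal((1000, 1))

    # group_reg drives group-wise sparsity, l1_reg coefficient-wise sparsity.
    gl = GroupLasso(groups=groups, group_reg=0.05, l1_reg=0.01, n_iter=1000)
    gl.fit(X, y)
    print(gl.predict(X[:5]))

Since the penalty treats each sensor’s five coefficients as one block, whole sensors are switched on or off together.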
About this project
This project is developed by Yngve Mardal Moe and released under an MIT licence. I am still working out a few things, so changes might come rapidly.
Installation guide
Group lasso requires Python 3.5+, numpy and scikit-learn. To install group-lasso via pip, simply run the command:
pip install group-lasso
Alternatively, you can manually pull this repository and run the setup.py file:
git clone https://github.com/yngvem/group-lasso.git
cd group-lasso
python setup.py install
Documentation
You can read the full documentation on readthedocs.
Examples
There are several examples that show usage of the library here.
Further work
Fully test with sparse arrays and make examples
Make it easier to work with categorical data
Poisson regression
Implementation details
The problem is solved using the FISTA optimiser [3] with a gradient-based adaptive restarting scheme [4]. No line search is currently implemented, but I hope to look at that later.
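To make the scheme concrete, here is a self-contained sketch of restarted FISTA applied to a plain lasso objective. It illustrates the method of [3] and [4] rather than reproducing the library’s internal solver, and it uses the exact Lipschitz constant in place of a line search:

    import numpy as np

    def soft_threshold(z, reg):
        """Proximal operator of the L1 penalty."""
        return np.sign(z) * np.maximum(np.abs(z) - reg, 0)

    def fista_lasso(X, y, reg, n_iter=1000):
        """Minimise 0.5*||Xw - y||^2 + reg*||w||_1 with restarted FISTA."""
        L = np.linalg.norm(X, ord=2) ** 2  # Lipschitz constant of the gradient
        w = np.zeros(X.shape[1])
        momentum = w.copy()  # the extrapolated point y_k
        t = 1.0
        for _ in range(n_iter):
            grad = X.T @ (X @ momentum - y)
            w_next = soft_threshold(momentum - grad / L, reg / L)
            # Gradient-based adaptive restart [4]: drop the momentum whenever
            # the update direction opposes the last proximal-gradient step.
            if (momentum - w_next) @ (w_next - w) > 0:
                t = 1.0
                momentum = w_next
            else:
                t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
                momentum = w_next + ((t - 1) / t_next) * (w_next - w)
                t = t_next
            w = w_next
        return w

The restart check is what tames the oscillations that Nesterov acceleration would otherwise introduce (see the remark on non-monotone loss below).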
Although fast, the FISTA optimiser does not achieve as low loss values as the significantly slower second-order interior-point methods. This might, at first glance, seem like a problem. However, it does recover the sparsity pattern of the data, which can be used to train a new model on the selected subset of the features, as in the sketch below.
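Continuing the usage sketch from the introduction, such a refit could look as follows (the sparsity_mask_ attribute name is an assumption based on my reading of the documentation; any attribute exposing the selected features would serve):

    from sklearn.linear_model import Ridge

    # Keep only the columns whose coefficients were not shrunk to zero,
    # then fit a lightly regularised model on that reduced feature set.
    mask = gl.sparsity_mask_  # assumed attribute name; check the docs
    refit = Ridge(alpha=1e-3).fit(X[:, mask], y)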
Also, even though the FISTA optimiser is not meant for stochastic optimisation, in my experience it does not suffer a large drop in performance when the mini-batches are large enough. I have therefore implemented mini-batch optimisation using FISTA, and have thus been able to fit models on data with ~500 columns and 10 000 000 rows on my moderately priced laptop.
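As a sketch of how this looks in practice (the subsampling_scheme argument name is an assumption based on the current documentation; check there for the exact semantics):

    # Hypothetical: use roughly 1% of the rows for each FISTA step.
    gl_big = GroupLasso(groups=groups, group_reg=0.05, subsampling_scheme=0.01)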
Finally, we note that since FISTA uses Nesterov acceleration, it is not a descent algorithm. We can therefore not expect the loss to decrease monotonically.
References
[1] Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49-67.
[2] Simon, N., Friedman, J., Hastie, T. and Tibshirani, R. (2013). A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2), 231-245.
[3] Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183-202.
[4] O'Donoghue, B. and Candès, E. (2015). Adaptive restart for accelerated gradient schemes. Foundations of Computational Mathematics, 15(3), 715-732.
Hashes for group_lasso-1.5.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       a20ad4807834a4438a8829a36e0f355c7633e347aa73502dae8a22fc6e75e977
MD5          d3dc35910675795d95510b906c1dab65
BLAKE2b-256  6312ca38bf6ce7e97ce1b07652efdcec5e69caa0cce8f738afd66268c186fb3b