
Himalaya: Multiple-target linear models


Himalaya [1] implements machine learning linear models in Python, focusing on computational efficiency for large numbers of targets.

Use himalaya if you need a library that:

  • estimates linear models on large numbers of targets,

  • runs on CPU and GPU hardware,

  • provides estimators compatible with scikit-learn’s API.

Himalaya is stable (with particular care for backward compatibility) and open for public use (give it a star!).

Example

import numpy as np
n_samples, n_features, n_targets = 10, 5, 4
np.random.seed(0)
X = np.random.randn(n_samples, n_features)
Y = np.random.randn(n_samples, n_targets)

from himalaya.ridge import RidgeCV
model = RidgeCV(alphas=[1, 10, 100])
model.fit(X, Y)
print(model.best_alphas_)  # [ 10. 100.  10. 100.]
  • The model RidgeCV uses the same API as scikit-learn estimators, with methods such as fit, predict, score, etc. (see the sketch after this list).

  • The model is able to efficiently fit a large number of targets (routinely used with 100k targets).

  • The model selects the best hyperparameter alpha for each target independently.
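
As a minimal sketch continuing the example above (the shapes in the comments follow from n_samples=10 and n_targets=4; the exact values depend on the random data):

# Continuing the example above with the fitted RidgeCV model.
Y_pred = model.predict(X)        # predictions for all targets at once
print(Y_pred.shape)              # (10, 4): n_samples x n_targets
print(model.best_alphas_.shape)  # (4,): one alpha selected per target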

More examples

More examples of how to use himalaya can be found in the gallery of examples.

Tutorials using himalaya for fMRI

Himalaya was designed primarily for functional magnetic resonance imaging (fMRI) encoding models. In-depth tutorials on using himalaya for fMRI encoding models can be found at gallantlab/voxelwise_tutorials.

Models

Himalaya implements the following models:

  • Ridge, RidgeCV

  • KernelRidge, KernelRidgeCV

  • GroupRidgeCV, MultipleKernelRidgeCV, WeightedKernelRidge

  • SparseGroupLassoCV

See the model descriptions in the documentation website.
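
As a quick illustration of the shared API (a sketch only, assuming KernelRidgeCV accepts the same alphas grid as RidgeCV), a kernel ridge model can be fit in the same way as the RidgeCV example above:

from himalaya.kernel_ridge import KernelRidgeCV

# Sketch: same X, Y as in the example above; the alphas grid is illustrative.
kernel_model = KernelRidgeCV(alphas=[1, 10, 100])
kernel_model.fit(X, Y)
print(kernel_model.best_alphas_)  # one alpha selected per target

Kernel ridge is generally advantageous when the number of features exceeds the number of samples, since the kernel formulation scales with the number of samples instead of the number of features.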

Himalaya backends

Himalaya can be used seamlessly with different backends. The available backends are numpy (default), cupy, torch, and torch_cuda. To change the backend, call:

from himalaya.backend import set_backend
backend = set_backend("torch")

and pass torch arrays as inputs to the himalaya solvers. For convenience, estimators implementing scikit-learn’s API can cast arrays to the correct input type.
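
For example, a minimal sketch of this convenience, reusing the X and Y numpy arrays from the earlier example (the casting is assumed to happen inside the estimator, so no explicit conversion is needed):

from himalaya.backend import set_backend
from himalaya.ridge import RidgeCV

backend = set_backend("torch")  # all himalaya solvers now use torch

# The scikit-learn-style estimator casts the numpy inputs to the
# backend's array type internally.
model = RidgeCV(alphas=[1, 10, 100])
model.fit(X, Y)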

GPU acceleration

To run himalaya on a graphics processing unit (GPU), you can use either the cupy or the torch_cuda backend:

from himalaya.backend import set_backend
backend = set_backend("cupy")  # or "torch_cuda"

data = backend.asarray(data)
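
A minimal end-to-end sketch, assuming a CUDA-capable GPU and the corresponding backend installed (the model choice is illustrative):

from himalaya.backend import set_backend
from himalaya.kernel_ridge import KernelRidgeCV

backend = set_backend("torch_cuda")  # or "cupy"

# Move the data to the GPU explicitly, then fit as usual.
X_gpu = backend.asarray(X)
Y_gpu = backend.asarray(Y)
model = KernelRidgeCV(alphas=[1, 10, 100])
model.fit(X_gpu, Y_gpu)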

Installation

Dependencies

  • Python 3

  • Numpy

  • Scikit-learn

Optional (GPU backends):

  • PyTorch (1.9+ preferred)

  • Cupy

Standard installation

You may install the latest version of himalaya using the package manager pip, which will automatically download himalaya from the Python Package Index (PyPI):

pip install himalaya

Installation from source

To install himalaya from the latest source (main branch), you may call:

pip install git+https://github.com/gallantlab/himalaya.git

Developers can also install himalaya in editable mode via:

git clone https://github.com/gallantlab/himalaya
cd himalaya
pip install --editable .

Cite this package

If you use himalaya in your work, please give it a star and cite the himalaya publication [1].
