Preconditioned ICA for Real Data
This repository hosts the code of the Preconditioned ICA for Real Data (Picard) and Picard-O algorithms.
See the documentation.
Picard is an algorithm for maximum likelihood independent component analysis. It solves the same problem as Infomax, faster, by using a preconditioned L-BFGS strategy that results in very fast convergence.
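As background on what "maximum likelihood ICA" optimizes, here is a minimal NumPy sketch (not Picard's internal code) of the relative gradient that such solvers drive to zero, assuming a Laplace source model for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)
N, T = 3, 1000
X = rng.laplace(size=(N, T))
W = np.eye(N)  # current unmixing estimate
Y = np.dot(W, X)

# Score function for a Laplace source model: psi(y) = sign(y)
psi = np.sign(Y)

# Relative gradient of the negative log-likelihood:
# G = E[psi(Y) Y^T] - I ; it vanishes at a maximum-likelihood solution.
G = np.dot(psi, Y.T) / T - np.eye(N)
```

Here the sources are already separated, so `G` is close to zero; Picard's contribution is a Hessian-approximation preconditioner that makes the L-BFGS iterations on this objective converge quickly.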
Picard-O adapts that strategy to solve the same problem under a whiteness constraint on the recovered signals. It solves the same problem as FastICA, but faster.
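The whiteness constraint means the recovered signals have identity empirical covariance. A minimal NumPy sketch of PCA whitening (for illustration only; this is not Picard-O's internal code):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(3, 1000)

# PCA whitening: decorrelate the rows and scale them to unit variance
X = X - X.mean(axis=1, keepdims=True)
cov = np.dot(X, X.T) / X.shape[1]
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = np.dot(eigvecs / np.sqrt(eigvals), eigvecs.T)
Z = np.dot(whitener, X)

# The whitened signals have identity empirical covariance
print(np.allclose(np.dot(Z, Z.T) / Z.shape[1], np.eye(3)))  # True
```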
To install the package, the simplest way is to use pip to get the latest release:
$ pip install python-picard
or to get the latest version of the code:
$ pip install git+https://github.com/pierreablin/picard.git#egg=picard
To get started, you can build a synthetic signals matrix:
>>> import numpy as np
>>> N, T = 3, 1000
>>> S = np.random.laplace(size=(N, T))
>>> A = np.random.randn(N, N)
>>> X = np.dot(A, S)
And then feed Picard with it:
>>> from picard import picard
>>> Y, W = picard(X)
>>> from picard import picardo
>>> Y, W = picardo(X)
Both picard and picardo return the estimated sources, Y, and the estimated unmixing matrix, W.
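One way to check the quality of an estimated unmixing matrix is to inspect the gain matrix W·A, which should be close to a scaled permutation matrix when separation succeeds. A self-contained NumPy sketch (it uses the exact inverse of A as a stand-in for the W returned by picard, so it runs without the package installed):

```python
import numpy as np

rng = np.random.RandomState(0)
N, T = 3, 1000
S = rng.laplace(size=(N, T))
A = rng.randn(N, N)
X = np.dot(A, S)

# Stand-in for the unmixing matrix returned by picard(X);
# a real run would return W close to a scaled permutation of inv(A).
W = np.linalg.inv(A)

# Gain matrix: perfect separation makes it a scaled permutation matrix.
G = np.dot(W, A)

# Each row should have exactly one dominant entry.
dominant = np.abs(G).argmax(axis=1)
print(sorted(dominant))  # [0, 1, 2]
```

With a real picard output, the rows of G may be permuted and rescaled relative to the identity, which is the usual ICA indeterminacy.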
These are the dependencies to use Picard:
- numpy (>=1.8)
- matplotlib (>=1.3)
- numexpr (>= 2.0)
These are the dependencies to run the EEG example:
- mne (>=0.14)
- scikit-learn (>=0.18)
- scipy (>=0.19)
If you use this code in your project, please cite:
Pierre Ablin, Jean-Francois Cardoso, and Alexandre Gramfort. "Faster independent component analysis by preconditioning with Hessian approximations." ArXiv preprint, June 2017.