MATE

  • MATE stands for Manycore-processor-Accelerated Transfer Entropy computation.

Installation

  • :snake: Anaconda is recommended for using and developing MATE.
  • :penguin: MATE is tested on Linux distributions, which are recommended for using and developing it.

Install from GitHub repository

First, clone the latest version of this repository.

git clone https://github.com/cxinsys/mate

Now, install MATE as a module.

cd mate
pip install -e .
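
To verify the installation, a quick import check can be run; this only imports the package (no version attribute or other API is assumed).

python -c "import mate"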

  • The default backend framework of the 'MATE' class is PyTorch.
  • [recommended] To use the PyTorch Lightning framework, use the separate 'MATELightning' class (see the MATELightning class section below).

Install optional frameworks

MATE supports several optional backend frameworks, such as CuPy and JAX.
To use an optional framework, you need to install it manually.


Install CuPy from Conda-Forge with the CUDA version supported by your driver

conda install -c conda-forge cupy cuda-version=xx.x  # check your CUDA version

Install JAX with CUDA 12 support

pip install -U "jax[cuda12]"

Install TensorFlow with CUDA support

python3 -m pip install tensorflow[and-cuda]
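
After installing any of these optional frameworks, a quick way to confirm that a GPU is visible is a small Python check like the one below; it is a minimal sketch that only uses each framework's standard device-query calls and skips frameworks that are not installed.

# Sanity check for the optional GPU backends.
try:
    import cupy as cp
    print("CuPy devices:", cp.cuda.runtime.getDeviceCount())
except ImportError:
    print("CuPy not installed")
try:
    import jax
    print("JAX devices:", jax.devices())
except ImportError:
    print("JAX not installed")
try:
    import tensorflow as tf
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
except ImportError:
    print("TensorFlow not installed")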

Tutorial

MATE class

Create MATE instance

import mate

worker = mate.MATE()

Run MATE

parameters

  • arr: NumPy array for transfer entropy calculation, required
  • pairs: NumPy array of node pairs to compute, optional, default: all possible pairs of nodes in arr
  • device: optional, default: 'cpu'
  • device_ids: optional, default: [0] (cpu), [list of all GPU devices] (gpu)
  • procs_per_device: number of processes to create per device when using non-'cpu' devices, optional, default: 1
  • batch_size: required
  • kp: kernel percentile, optional, default: 0.5
  • dt: history length, optional, default: 1

result_matrix = worker.run(arr=arr,
                           pairs=pairs,
                           device=device,
                           device_ids=device_ids,
                           procs_per_device=procs_per_device,
                           batch_size=batch_size,
                           kp=kp,
                           dt=dt,
                           )
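
For example, a minimal end-to-end run might look like the following; the array shape, device choice, and batch size are illustrative assumptions rather than values prescribed by MATE.

import numpy as np
import mate

# Toy expression matrix: 10 nodes (rows) observed over 100 time points (columns).
arr = np.random.rand(10, 100)

worker = mate.MATE()

# pairs is omitted, so all possible node pairs are computed (per the defaults above).
result_matrix = worker.run(arr=arr,
                           device='cpu',
                           batch_size=32,
                           kp=0.5,
                           dt=1)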

MATELightning class

Create MATELightning instance

parameters

  • arr: NumPy array for transfer entropy calculation, required
  • pairs: NumPy array of node pairs to compute, optional, default: all possible pairs of nodes in arr
  • kp: kernel percentile, optional, default: 0.5
  • len_time: total length of the expression array, optional, default: column length of the array
  • dt: history length of the expression array, optional, default: 1

import mate

worker = mate.MATELightning(arr=arr,
                            pairs=pairs,
                            kp=kp,
                            len_time=len_time,
                            dt=dt)

Run MATELightning

parameters

MATELightning's run function takes the same parameter values as PyTorch's DataLoader and PyTorch Lightning's Trainer. For additional parameter options, see their documentation.

  • device: required, e.g. 'gpu', 'cpu', or 'tpu'
  • devices: required, int or [list of device IDs]
  • batch_size: required
  • num_workers: optional, default: 0

result_matrix = worker.run(device=device,
                           devices=devices,
                           batch_size=batch_size,
                           num_workers=num_workers)
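
For example, a single-GPU run might look like the following; the device, devices, and batch size values here are illustrative assumptions, not prescribed values.

result_matrix = worker.run(device='gpu',
                           devices=[0],
                           batch_size=32,
                           num_workers=0)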

TODO

  • add 'jax' backend module
  • implement 'pytorch lightning' backend module
  • add 'tensorflow' backend module

