Welcome to idact!
Idact, or Interactive Data Analysis Convenience Tools, is a Python 3.5+ library that takes care of several tedious aspects of working with big data on an HPC cluster.
Who is it for?
Data scientists and big data enthusiasts who:
- Perform computations in Jupyter Notebook, using libraries such as NumPy, pandas, Matplotlib, or Bokeh.
- Have access to an HPC cluster with Slurm as the job scheduler.
- Would like to parallelize their computations across many nodes using Dask.distributed, a library for distributed computing.
- May find that it takes too much manual effort to deploy Jupyter Notebook and Dask on the cluster each time they need it.
Requirements
Python 3.5+.
Client
- Operating System: Windows or Linux
- Recommended: Jupyter Notebook or JupyterLab
Cluster
- Operating System: Linux
- Job Scheduler: Slurm Workload Manager
- SSH access to a login (head) node.
- Dask.distributed
- Jupyter Notebook or JupyterLab
Installation
python -m pip install idact
Code samples
Accessing a cluster
A cluster can be accessed with a public/private key pair via SSH.
from idact import *
cluster = add_cluster(name="my-cluster",
                      user="user",
                      host="localhost",
                      port=2222,
                      auth=AuthMethod.PUBLIC_KEY,
                      key="~/.ssh/id_rsa",
                      install_key=False)
node = cluster.get_access_node()
node.connect()
Allocating and deallocating nodes
Nodes are allocated as a Slurm job. Afterwards, they can be used for deployments.
import bitmath
nodes = cluster.allocate_nodes(nodes=8,
                               cores=12,
                               memory_per_node=bitmath.GiB(120),
                               walltime=Walltime(hours=1, minutes=30),
                               native_args={
                                   '--partition': 'debug',
                                   '--account': 'data-analysis-group'
                               })

try:
    nodes.wait(timeout=120.0)
except TimeoutError:
    nodes.cancel()
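The wait/cancel pattern above is worth wrapping in a small helper, so a timed-out allocation is always cleaned up. The `wait_or_cancel` function below is an illustrative sketch, not part of the idact API; it only assumes that the object returned by `cluster.allocate_nodes(...)` raises `TimeoutError` from `wait()`, as shown above.

```python
def wait_or_cancel(nodes, timeout=120.0):
    """Wait for a Slurm allocation; cancel it if the wait times out.

    Returns True if the allocation became available within `timeout`
    seconds, False if it was cancelled instead.
    """
    try:
        nodes.wait(timeout=timeout)
        return True
    except TimeoutError:
        # Release the pending Slurm job so it does not linger in the queue.
        nodes.cancel()
        return False
```

With this helper, the caller can decide what to do on failure (retry with a longer walltime, pick another partition, or give up) instead of handling the exception inline each time.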
Deploying Jupyter Notebook
Jupyter Notebook is deployed on a cluster node, and made accessible through an SSH tunnel.
nb = nodes[0].deploy_notebook()
nb.open_in_browser()
Deploying Dask.distributed
The Dask.distributed scheduler and workers are deployed on cluster nodes, and their dashboards are made available through SSH tunnels.
dd = deploy_dask(nodes[1:])
client = dd.get_client()
client.submit(...)
dd.diagnostics.open_all()
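As a concrete illustration of what can be passed to `client.submit`, any picklable function works. The `square` function below is an example of my own, not part of idact; the deployment calls are shown as comments because they require a live cluster.

```python
def square(x):
    # A plain picklable function; Dask serializes it and
    # executes it on one of the deployed workers.
    return x * x

# With a live deployment (assuming `dd` from the snippet above):
#     client = dd.get_client()
#     future = client.submit(square, 4)
#     future.result()
```

`client.submit` returns a future immediately; calling `result()` blocks until a worker has computed the value.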
Managing cluster config
Local and remote cluster configuration can be saved, loaded, and copied to and from the cluster.
save_environment()
load_environment()
push_environment()
pull_environment()
Managing deployments
Deployment objects can be serialized and shared between running program instances, local or remote.
push_deployment(nodes)
push_deployment(nb)
push_deployment(dd)
pull_deployments()
Documentation
The documentation contains a detailed API description, tutorial notebooks, and other helpful information.
Source code
The source code is available on GitHub.
License
MIT License.
This library was developed under the supervision of Leszek Grzanka, PhD as a final project of the BEng in Computer Science program at the Faculty of Computer Science, Electronics and Telecommunications at AGH University of Science and Technology, Krakow.