Documentation is available at Read the Docs
HOD is a set of scripts to start services, for example a Hadoop cluster, from within another resource management system (e.g. Torque/PBS). As such, it lets traditional users of HPC systems experiment with Hadoop, or use it as a production setup if no dedicated setup is available.
Hadoop is not the only software supported. HOD can also create HBase databases, IPython notebooks, and set up a Spark environment.
There are two main benefits:
- Users can run jobs on a traditional batch cluster. This is useful for small to medium Hadoop jobs where the framework is needed but a dedicated 'big data' cluster isn't. At this scale, the performance benefits of a parallel file system outweigh those of the 'shared nothing' architecture of an HDFS-style file system.
- Users from different groups can run whichever version of Hadoop they like. This removes the need for painful upgrades to running YARN clusters, and the hope that all users' jobs remain backwards compatible.
Hadoop used to ship its own HOD (Hadoop On Demand), but it was not maintained and only supported Hadoop without tuning. The HOD code shipped with the Hadoop 1.0.0 release was buggy, to say the least. An attempt was made to make it work on the UGent HPC infrastructure, and although a working Hadoop cluster was realised, extending its functionality was a nightmare. At that point (April 2012), hanythingondemand was started, with the goals of being more maintainable and supporting more tuning and functionality out of the box; for example, HBase support was a minimum requirement. Hence, Hadoop On Demand became 'Hanything On Demand'. Apart from the acronym 'HOD', nothing of Hadoop On Demand was reused.
More on the history of Hadoop On Demand can be found in section 2 of this paper on YARN (PDF).
How does it work?
hanythingondemand works by launching an MPI job which uses the reserved nodes as a cluster-in-a-cluster. These nodes then have the various Hadoop services started on them. Users can launch a job at startup (batch mode) or log in to the worker nodes (using the `hod connect` command) and interact with their services there.
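The cluster-in-a-cluster idea can be pictured with a short mpi4py sketch (mpi4py is one of the dependencies listed below): each MPI rank decides which services to start on its node. The service names and the rank-to-role split here are illustrative assumptions, not HOD's actual internals.

```python
# Sketch only: how an MPI job could map ranks to Hadoop services.
# The role split (rank 0 = master services) is an assumption for illustration.
try:
    from mpi4py import MPI  # listed as a HOD dependency
    HAVE_MPI = True
except ImportError:
    HAVE_MPI = False

def services_for_rank(rank):
    """Return the (hypothetical) Hadoop services a given MPI rank should start."""
    if rank == 0:
        # head node: master services
        return ["namenode", "resourcemanager"]
    # worker nodes: slave services
    return ["datanode", "nodemanager"]

if __name__ == "__main__" and HAVE_MPI:
    rank = MPI.COMM_WORLD.Get_rank()
    for svc in services_for_rank(rank):
        print("rank %d would start: %s" % (rank, svc))
```

Each rank runs the same script; only its rank number determines its role, which is what lets one MPI job spread a set of cooperating services over the reserved nodes.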
The rest of the requirements can be installed using EasyBuild:
- Python and various libraries.
  - e.g. on Fedora: `yum install -y mpi4py-mpich2`
  - If you build mpi4py yourself, you will probably need to set the `$MPICC` environment variable.
- `vsc-base` - Used for command line parsing.
- `vsc-mympirun` - Used for setting up the MPI job.
- `pbs_python` - Used for interacting with the PBS (aka Torque) server.
- Oracle JDK or OpenJDK - both installable with EasyBuild
- Hadoop binaries
  - e.g. the Cloudera distribution versions (used to test HOD)
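Before submitting a job, it can be handy to verify that the Python-side dependencies are importable. A minimal helper along these lines works; the module names here are assumptions based on the package names above (`pbs` is the module shipped by pbs_python, `vsc.utils` comes from vsc-base).

```python
# Sketch: check whether HOD's Python dependencies can be imported.
# Module names are assumptions based on the package names listed above.
import importlib

def check_deps(modules):
    """Return a dict mapping each module name to True if it imports cleanly."""
    status = {}
    for name in modules:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

if __name__ == "__main__":
    for mod, ok in check_deps(["mpi4py", "vsc.utils", "pbs"]).items():
        print("%s: %s" % (mod, "OK" if ok else "MISSING"))
```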
Example use cases:
Creating an HOD cluster:
```
# Submit a job to start a Hadoop cluster on 16 nodes
$ hod create --dist Hadoop-2.3.0-cdh5.0 -n16 --label my-cluster

# Connect to your new cluster.
$ hod connect my-cluster

# Then, in your session, you can run your Hadoop jobs:
$ hadoop jar somejob.jar SomeClass arg1 arg2
```
‘Set it and forget it’ batch jobs:
```
# Run a batch job on 1 node:
$ hod batch --dist Hadoop-2.3.0-cdh5.0 --label my-cluster --script=my-script.sh
```