A simple Linux command-line utility that submits a job to one of multiple GPU servers
Project description
ΣΣJob
ΣΣJob, or SumsJob (Simple Utility for Multiple-Servers Job Submission), is a simple Linux command-line utility that submits a job to one of multiple servers, each with a limited number of GPUs. ΣΣJob offers, for a handful of GPU servers, key functions similar to those the Slurm Workload Manager provides for supercomputers and computer clusters:
- report the state of GPUs on all servers,
- submit a job to a server for execution in noninteractive mode, i.e., the job runs in the background on the server,
- submit a job to a server for execution in interactive mode, as if the job were running on your local machine,
- display all running jobs,
- cancel running jobs.
Motivation
Assume you have a few GPU servers: server1, server2, ... When you need to run code from your computer, you will

1. Select one server and log in:
   $ ssh LAN (you may need to log in to a local area network first)
   $ ssh server1
2. Check the GPU status. If no GPU is free, go back to step 1:
   $ nvidia-smi
   or
   $ gpustat
3. Copy the code from your computer to the server:
   $ scp -r codes server1:~/project/codes
4. Run the code on the server:
   $ cd ~/project/codes
   $ CUDA_VISIBLE_DEVICES=0 python main.py
5. Transfer the results back:
   $ scp server1:~/project/codes/results.dat .

These steps are tedious. ΣΣJob automates all of them.
Features
- Simple to use
- Two modes: noninteractive and interactive
  - Noninteractive mode: the job runs in the background on the server
    - You can turn off your local machine
  - Interactive mode: as if the job were running on your local machine
    - The program's output is displayed in the terminal of your local machine in real time
    - Kill the job with Ctrl-C
Commands
- sinfo: Report the state of GPUs on all servers.
- srun: Submit a job to GPU servers for execution.
- sacct: Display all running jobs ordered by the start time.
- scancel: Cancel a running job.
$ sinfo
Report the state of GPUs on all servers. For example,
$ sinfo
chitu Fri Dec 31 20:05:24 2021 470.74
[0] NVIDIA GeForce RTX 3080 | 27'C, 0 % | 2190 / 10018 MB | shuaim:python3/3589(2190M)
[1] NVIDIA GeForce RTX 3080 | 53'C, 7 % | 2159 / 10014 MB | lu:python/241697(2159M)
dilu Fri Dec 31 20:05:26 2021 470.74
[0] NVIDIA GeForce RTX 3080 Ti | 65'C, 73 % | 1672 / 12045 MB | chenxiwu:python/352456(1672M)
[1] NVIDIA GeForce RTX 3080 Ti | 54'C, 83 % | 1610 / 12053 MB | chenxiwu:python/352111(1610M)
Available GPU: chitu [0]
$ srun jobfile [jobname]
Submit a job to the GPU servers for execution. ΣΣJob automatically performs the following steps:
- Find a GPU with low utilization and sufficient memory (the criterion is set in the configuration file).
  - If no GPU is currently available, it waits for some time (-p PERIOD_RETRY) and then tries again, until reaching the maximum number of retries (-n NUM_RETRY).
  - You can also specify the server and GPU with -s SERVER and --gpuid GPUID.
- Copy the code to the server.
- Run the job on it in noninteractive mode (default) or interactive mode (with -i).
- Save the output in a log file.
- In interactive mode, when the code finishes, transfer the result files and the log file back.
jobfile: File to be run
jobname: Job name, and also the folder name of the job. If not provided, a random number will be used.
Options:
- -h, --help: Show this help message and exit
- -i, --interact: Run the job in interactive mode
- -s SERVER, --server SERVER: Server host name
- --gpuid GPUID: GPU ID to be used; -1 to use CPU only
- -n NUM_RETRY, --num_retry NUM_RETRY: Number of times to retry the submission (default: 1000)
- -p PERIOD_RETRY, --period_retry PERIOD_RETRY: Waiting time in seconds between two retries after a failed attempt (default: 600)
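For example, to submit main.py as a job named exp1 to GPU 0 of the server chitu, or to run it in interactive mode, the invocation might look like the following (the file name and job name are only illustrative, and exact option placement may vary):
$ srun main.py exp1 -s chitu --gpuid 0
$ srun -i main.py exp1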
$ sacct
Display all running jobs ordered by the start time. For example,
$ sacct
Server JobName Start
-------- ---------------- ----------------------
chitu job1 12/31/2021 07:41:08 PM
chitu job2 12/31/2021 08:14:54 PM
dilu job3 12/31/2021 08:15:23 PM
$ scancel jobname
Cancel a running job.
jobname: Job name.
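For example, to cancel the job named job1 from the sacct listing above:
$ scancel job1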
Installation
ΣΣJob requires Python 3.7 or later. Install it with pip:
$ pip install sumsjob
You also need to do the following:
- Make sure you can ssh to each server, ideally without typing a password, by using SSH keys (see the setup sketch below).
- Install gpustat on each server.
- Create a configuration file at ~/.sumsjob/config.py. Use config.py as a template, and modify the values to match your configuration.
- Make sure ~/.local/bin is in your $PATH.
Then run sinfo to check that everything works.
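For example, a minimal first-time setup might look like the following. This is only a sketch: the server name server1 and the location of the template config.py are assumptions, so adapt them to your environment.
$ ssh-keygen -t ed25519              # create an SSH key if you do not already have one
$ ssh-copy-id server1                # enable passwordless login (repeat for each server)
$ ssh server1 pip install gpustat    # install gpustat on each server
$ mkdir -p ~/.sumsjob
$ cp config.py ~/.sumsjob/config.py  # copy the template, then edit the values
$ sinfo                              # verify that everything works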
License
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file sumsjob-0.7.2.tar.gz.
File metadata
- Download URL: sumsjob-0.7.2.tar.gz
- Upload date:
- Size: 21.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | e5171221b35e19704c2bd9872ccc8930be0ab1ae63978d511b2c2cfd4d8f70e8
MD5 | 475a9150475319343100d8d76bb99273
BLAKE2b-256 | 47c63f04f4e0db388e6e24e9effa1d0d389c20c52848f4d8b8a0607b4c7df8b5
File details
Details for the file SumsJob-0.7.2-py3-none-any.whl.
File metadata
- Download URL: SumsJob-0.7.2-py3-none-any.whl
- Upload date:
- Size: 22.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | cbc04c7fe5eed1d141bf7e48ce7dcff679680a7cf164639957080719e6b13be4
MD5 | 34736530ee84b2abc13cd023eae69c07
BLAKE2b-256 | 9d0ea6b4a78a95cbb48a89130dc33f993e8b93d8ee2832f89876dac012288a3e