Submission of MD runs to HPC with PBS
I’ve started developing this Python program to generate the appropriate files for long classical Molecular Dynamics (MD) runs on the Imperial College HPC facility, using the GPU version of the AMBER MD engine.
The objective is to automate the process, so you can chain several jobs and get the results of each one delivered directly to your machine: no more manually editing submission scripts, copying restart files back and forth, etc. All that is needed is to specify the settings of your simulation in a JSON configuration file and then chain the PBS jobs using dependencies on each other.
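The chaining relies on standard PBS job dependencies (`qsub -W depend=afterok:<job-id>`). As a sketch, the submission line for the second segment of a chained run would look like the following; the job ID and script names are hypothetical, and the command is printed rather than executed since `qsub` only exists on the cluster:

```shell
# ID of the first segment, as printed by `qsub segment1.pbs` (hypothetical value)
PREV_JOB="1234567.cx1"

# Submission line for the next segment: afterok holds it until the
# previous job has exited successfully. Echoed here for illustration.
echo qsub -W depend=afterok:"$PREV_JOB" segment2.pbs
# → qsub -W depend=afterok:1234567.cx1 segment2.pbs
```

The tool builds and submits these dependency chains for you from the JSON settings.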
This may be useful for other people as well; the approach should be fairly general and easy to adapt to other HPC facilities.
Before you start
You need to set up passwordless SSH from your local machine to the HPC. To test that it works properly, you should be able to scp a file from the HPC to your local machine without being prompted for your password. Like so:
$ scp username@HPC-hostname:/home/username/test_file.txt .
test_file.txt                                 100%    0     0.0KB/s   00:00
You should also check that rsync is available on your HPC cluster, since it is used to transfer the files (it should be available in most Linux distributions).
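A quick way to check (run this on the HPC login node) is the snippet below; the messages are illustrative, only the `command -v rsync` test matters:

```shell
# Check whether rsync is on the PATH; the tool relies on it to move
# result files between the cluster and your machine.
if command -v rsync >/dev/null 2>&1; then
    echo "rsync found"
else
    echo "rsync missing: file transfers will fail"
fi
```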
Create an example input file using the `jobsubmitter example` command.
- Free software: MIT license
- Documentation: https://JobSubmitter.readthedocs.io.
- First release on PyPI.
| Filename, size | File type | Python version |
|---|---|---|
| mdrun-0.1.5-py2.py3-none-any.whl (13.2 kB) | Wheel | 3.5 |
| mdrun-0.1.5.tar.gz (8.1 kB) | Source | None |