
Run python code on remote servers

Project description


🕹 LabML Remote

[Demo: output of labml_remote job-list]

labml_remote is a very simple tool that lets you set up Python and run Python programs on remote computers. It's mainly intended for deep learning model training. It connects to remote computers via SSH to run commands and synchronises content using rsync.

labml_remote comes with an easy-to-use command-line interface. You can also use the API to launch custom distributed training sessions. Here is an example.

Install with pip

pip install labml_remote

Initialize and add server details

Go to your project folder.

cd [PROJECT]

Initialize for remote execution

labml_remote init

labml_remote init asks for your SSH credentials and creates two files: [PROJECT]/.remote/configs.yaml and [PROJECT]/.remote/exclude.txt.

[PROJECT]/.remote/configs.yaml keeps the remote configurations for the project. Here's a sample .remote/configs.yaml:

name: sample
servers:
  primary:
    hostname: 3.19.32.53
    private_key: ./.remote/private_key
  secondary:
    hostname: 3.19.32.54

Each of the servers can have the following attributes:

hostname: [IP address or hostname of the server]
private_key: [Location of the private key file; leave blank if not necessary]
username: [Username to SSH with; defaults to 'ubuntu']
password: [Password to connect with; leave blank if not necessary]
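
For instance, a configuration that uses all four attributes could look like the following; the addresses, key path and credentials are placeholders:

name: sample
servers:
  primary:
    hostname: 3.19.32.53
    private_key: ./.remote/private_key
    username: ubuntu
  secondary:
    hostname: 3.19.32.54
    username: ubuntu
    password: [PASSWORD]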

.remote/exclude.txt is like .gitignore - it specifies the files and folders that you don't need to sync up with the remote server. The exclude list generated by labml_remote init covers things like .git, .remote, logs and __pycache__. Edit this if there are other things you don't want synced to your remote computers.
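
Based on the defaults listed above, the generated file is roughly a plain list of rsync exclude patterns, one per line:

.git
.remote
logs
__pycache__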

See our sample project for a more complex example.

💻 CLI

Get the command line interface help with,

labml_remote --help

Use the flag --help with any command to get the help for that command.

Prepare the servers

labml_remote prepare

This will install Conda on the servers, rsync your project content and install the Python packages based on your requirements.txt or Pipfile.
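
For example, with a hypothetical requirements.txt like the one below in your project root, prepare would install these packages into the project's environment on every server (the package list is purely illustrative):

torch
numpy
tqdm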

Run a command

labml_remote run --cmd 'python my_script.py'

This will execute the command on the server and stream its output.

Start a job

labml_remote job-run --cmd 'python my_script.py' --tag my-job
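
Tags identify jobs, so you can, for instance, start several runs with different hyper-parameters and manage them separately (the script's --lr flag here is illustrative):

labml_remote job-run --cmd 'python my_script.py --lr 0.01' --tag lr-0.01
labml_remote job-run --cmd 'python my_script.py --lr 0.1' --tag lr-0.1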

List jobs

labml_remote job-list --rsync

The --rsync flag syncs job information from the server to your local computer before listing.

Tail a job output

labml_remote job-tail --tag my-job

This will keep tailing the output of the job.

Kill jobs

labml_remote job-kill --tag my-job

Launch a PyTorch distributed training session

labml_remote helper-torch-launch --cmd 'train.py' --nproc-per-node 2 --env GLOO_SOCKET_IFNAME enp1s0

Here train.py is the training script. Since we are using computers with 2 GPUs each and want two processes per computer, --nproc-per-node is 2. --env GLOO_SOCKET_IFNAME enp1s0 sets the environment variable GLOO_SOCKET_IFNAME to enp1s0. You can specify multiple environment variables by repeating --env.
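
As a minimal sketch, a train.py for this setup would initialize the process group from the environment. This assumes helper-torch-launch exports the standard torch.distributed variables (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE), as torch.distributed.launch-style helpers typically do:

import torch.distributed as dist

def main():
    # init_method defaults to 'env://', so PyTorch reads MASTER_ADDR,
    # MASTER_PORT, RANK and WORLD_SIZE from environment variables, which we
    # assume the launcher exports. GLOO_SOCKET_IFNAME was set via --env above.
    dist.init_process_group(backend='gloo')
    print(f'rank {dist.get_rank()} of {dist.get_world_size()} started')
    # ... build the model, wrap it with DistributedDataParallel, train ...
    dist.destroy_process_group()

if __name__ == '__main__':
    main()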

How it works

It sets up Miniconda if it is not already installed and creates a new environment for the project. It then creates a folder named after the project inside the home folder and synchronises the contents of your local folder with the remote computer. Since it syncs using rsync, subsequent synchronisations only need to send the changes. It installs packages from requirements.txt, or with pipenv if a Pipfile is found; when a Pipfile is present it also uses pipenv to run your commands. The outputs of commands are streamed back to the local computer, while the outputs of jobs are redirected to files on the server and synchronised back to the local computer using rsync.
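
In shell terms, the setup is roughly equivalent to the following sketch; the exact paths, flags and conda invocation are assumptions for illustration:

# Sync the project, honouring the exclude list
rsync -az --exclude-from=.remote/exclude.txt ./ ubuntu@3.19.32.53:~/sample/
# Create a project environment and install dependencies into it
ssh ubuntu@3.19.32.53 'conda create -y -n sample python'
ssh ubuntu@3.19.32.53 'cd ~/sample && conda run -n sample pip install -r requirements.txt'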

What it doesn't do

This won't install things like drivers or CUDA. So if you need them, you should pick an instance image that comes with them pre-installed. For example, on AWS pick a Deep Learning AMI if you want to use an instance with GPUs.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

labml_remote-0.1.1.tar.gz (42.9 kB)


Built Distribution

labml_remote-0.1.1-py3-none-any.whl (58.9 kB)


File details

Details for the file labml_remote-0.1.1.tar.gz.

File metadata

  • Download URL: labml_remote-0.1.1.tar.gz
  • Upload date:
  • Size: 42.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0.post20191030 requests-toolbelt/0.9.1 tqdm/4.43.0 CPython/3.7.5

File hashes

Hashes for labml_remote-0.1.1.tar.gz

Algorithm    Hash digest
SHA256       c7a12a500eb3faa7722bcc039e1a3413e7f0e2fe905c7753ebd9298fb036d391
MD5          d30616127b655ce1c6a57fd9ad84c9f1
BLAKE2b-256  0055bc90efaac24472f720c2756af13a56956d1864746133c62ba306eab18e21


File details

Details for the file labml_remote-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: labml_remote-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 58.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0.post20191030 requests-toolbelt/0.9.1 tqdm/4.43.0 CPython/3.7.5

File hashes

Hashes for labml_remote-0.1.1-py3-none-any.whl

Algorithm    Hash digest
SHA256       487b88d0e2cc1ba89a84195333d77cb11a9e917904452740fe318b6c4c601487
MD5          a2c07adb9e1c89478630bc66266552b0
BLAKE2b-256  acb2e5abf5c0ae6b77ed8d0f7eb8c51018b9747517e547fccdc5b0268391e963

