Runhouse: A multiplayer cloud compute and data environment
🏃‍♀️ Runhouse 🏠
🚨 Caution: This is an Alpha 🚨
Runhouse is heavily under development. We are sharing it with a few select people to collect feedback, and expect to iterate on the APIs considerably before reaching beta (version 0.1.0).
👵 Welcome Home!
PyTorch lets you send a model or tensor .to(device), so why can't you do my_fn.to('a_gcp_a100') or my_table.to('parquet_in_s3')?
Runhouse allows just that: send code and data to any of your compute or
data infra (with your own cloud creds), all in Python, and continue to use them
eagerly exactly as they were.
Runhouse is for ML Researchers, Engineers, and Data Scientists who are tired of:
- 🚜 manually shuttling code and data around between their local machine, remote instances, and cloud storage,
- 📤📥 constantly spinning up and down boxes,
- 🐜 debugging over ssh and notebook tunnels,
- 🧑‍🔧 translating their code into a pipeline DSL just to use multiple hardware types,
- 🪦 debugging in an orchestrator,
- 👩‍✈️ missing out on fancy LLM IDE features,
- 🕵️ and struggling to find their teammates' code and data artifacts.
Take a look at this code (adapted from our first tutorial):
import runhouse as rh
from diffusers import StableDiffusionPipeline
import torch

def sd_generate(prompt, num_images=1, steps=100, guidance_scale=7.5,
                model_id='stabilityai/stable-diffusion-2-base'):
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16').to('cuda')
    return pipe([prompt] * num_images, num_inference_steps=steps, guidance_scale=guidance_scale).images

if __name__ == "__main__":
    gpu = rh.cluster(name='rh-v100', instance_type='V100:1', provider='gcp')
    generate_gpu = rh.send(fn=sd_generate).to(gpu, reqs=['./', 'torch==1.12.0', 'diffusers'])
    images = generate_gpu('A digital illustration of a woman running on the roof of a house.', num_images=2, steps=50)
    [image.show() for image in images]
There's no magic YAML, DSL, code serialization, or "submitting for execution." We're just spinning up the cluster for you (or using an existing cluster), syncing over your code, starting a gRPC connection, and running your code on the cluster.
Runhouse does things for you that you'd spend time doing yourself, in as obvious a way as possible.
And because it's not stateless, we can pin the model to GPU memory, and get ~1.5s/image inference before any compilation.
On the data side, we can do things like:
# Send a folder up to a cluster (rsync)
rh.folder(url=input_images_dir).to(fs=gpu, url='dreambooth/instance_images')
# Stream a table in from anywhere (S3, GCS, local, etc)
preprocessed_table = rh.table(name="preprocessed-tokenized-dataset")
for batch in preprocessed_table.stream(batch_size=batch_size):
    ...
# Send a model checkpoint up to blob storage
trained_model = rh.blob(data=pickle.dumps(model))
trained_model.to('s3', url='runhouse/my_bucket').save(name='yelp_fine_tuned_bert')
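Conceptually, streaming a table in batches behaves like a generator yielding fixed-size slices, so the consumer never has to materialize the whole table in memory. Here is a plain-Python sketch of that pattern (illustrative only, not Runhouse's implementation):

```python
def stream_batches(rows, batch_size):
    # Yield successive fixed-size slices of `rows`; the final batch may be
    # smaller if the row count isn't a multiple of batch_size.
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

table = list(range(10))
batches = list(stream_batches(table, batch_size=4))
# batches -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```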
These APIs work from anywhere with a Python interpreter and an internet connection, so notebooks, scripts, pipeline DSLs, etc. are all fair game. We currently support AWS, GCP, Azure, and Lambda Labs credentials through SkyPilot, as well as BYO cluster (just drop in an IP address and SSH key).
🐣 Getting Started
tl;dr:
pip install runhouse
# Or "runhouse[aws]", "runhouse[gcp]", "runhouse[azure]", "runhouse[all]"
sky check
# Optionally, for portability (e.g. Colab):
runhouse login
🔌 Installation
⚠️ On Apple M1 or M2 machines ⚠️, you will need to install grpcio with conda before you install Runhouse (more specifically, before you install Ray). If you already have Ray installed, you can skip this. See here for how to install grpc properly on Apple silicon. You can verify the installation worked by running ray.init() in a Python interpreter. If you're having trouble with this, let us know.
Runhouse can be installed with:
pip install runhouse
Depending on which cloud providers you plan to use, you can also install the following additional dependencies (to install the right versions of tools like boto, gsutil, etc.):
pip install "runhouse[aws]"
pip install "runhouse[gcp]"
pip install "runhouse[azure]"
# Or
pip install "runhouse[all]"
As this is an alpha, we push feature updates every few weeks as new microversions.
✈️ Verifying your Cloud Setup with SkyPilot
Runhouse supports both BYO cluster, where you interact with existing compute via their IP address and SSH key, and autoscaled clusters, where we spin up and down cloud instances in your own cloud account for you. If you only plan to use BYO clusters, you can disregard the following.
Runhouse uses SkyPilot for much of the heavy lifting with launching and terminating cloud instances. We love it and you should throw them a Github star ⭐️.
To verify that your cloud credentials are set up correctly for autoscaling, run
sky check
in your command line. This will confirm which cloud providers are ready to use, and will give detailed instructions if any setup is incomplete. SkyPilot also provides an excellent suite of CLI commands for basic instance management operations. There are a few that you'll be reaching for frequently when using Runhouse with autoscaling that you should familiarize yourself with, here.
🔒 Creating a Runhouse Account for Secrets and Portability
Using Runhouse with only the OSS Python package is perfectly fine. However,
you can unlock some unique portability features by creating an (always free)
account on api.run.house and saving your secrets and/or
resource metadata there. For example, you can open a Google Colab, call runhouse login, and all of your secrets or resources will be ready to use there with no additional setup.
Think of the OSS-package-only experience as akin to Microsoft Office, while creating an account makes your cloud resources shareable and accessible from anywhere, like Google Docs. You can see examples of this portability in the Runhouse Tutorials.
To create an account, visit api.run.house, or simply call runhouse login from the command line (or rh.login() from Python).
Note: These portability features only ever store light metadata about your resources (e.g. my_folder_name -> [provider, bucket, path]) on our API servers. All the actual data and compute stays inside your own cloud account and never hits our servers. The Secrets service stores your secrets in Hashicorp Vault (an industry standard for secrets management), and our secrets APIs simply call Vault's APIs. We never store secrets on our API servers. We plan to add support for BYO secrets management shortly. Let us know if you need it and which system you use.
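To make the "light metadata" point concrete, here is a hypothetical sketch of the kind of record that might live server-side (the field names are illustrative, not Runhouse's actual schema): the record locates a resource in your cloud account but contains none of its contents.

```python
# Hypothetical resource-metadata record: enough to *locate* a folder,
# while the folder's actual contents never leave your own cloud account.
resource_metadata = {
    "name": "my_folder_name",
    "resource_type": "folder",
    "provider": "s3",
    "bucket": "my-company-bucket",
    "path": "datasets/my_folder_name",
}

# Note what is absent by design: no file contents, no data payloads.
stored_fields = set(resource_metadata)
```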
👨‍🏫 Tutorials / API Walkthrough / Docs
These can be found here. We're planning a docs sprint in late February, but for now, our tutorials are structured to provide a comprehensive walkthrough of the APIs.
🙋‍♂️ Getting Help
Please join our Discord server here to message us, email us (first name at run.house), or create an issue.
👷‍♀️ Contributing
We welcome contributions! Please contact us if you're interested.