
A system for parallel and distributed Python that unifies the ML ecosystem.

Project description


Ray is a fast and simple framework for building and running distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • RaySGD: Distributed Training Wrappers

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

NOTE: We are deprecating Python 2 support soon.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))
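
Remote functions return futures immediately, and ray.get blocks until the results are available. As a small extension of the snippet above (the add function is an illustrative addition, not part of the original example), futures can also be passed straight into other remote functions, and Ray resolves them before the downstream task runs:

@ray.remote
def add(a, b):
    # By the time this runs, `a` and `b` are plain values: Ray has
    # already fetched the results of the upstream f.remote() tasks.
    return a + b

# f(2) = 4 and f(3) = 9 run in parallel; add() consumes both results.
total = add.remote(f.remote(2), f.remote(3))
print(ray.get(total))  # 13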

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for _ in range(4)]
for c in counters:
    c.increment.remote()
futures = [c.read.remote() for c in counters]
print(ray.get(futures))
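
Calls to methods of the same actor execute serially in submission order, so later calls observe earlier state updates. A minimal single-actor sketch (an addition for illustration, reusing the Counter class above):

counter = Counter.remote()
for _ in range(5):
    counter.increment.remote()  # queued on the actor, runs in order

# read() executes only after the five increments above have completed.
print(ray.get(counter.read.remote()))  # 5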

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, download an example cluster configuration file and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.
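
Once a cluster is up, a driver script can attach to it rather than starting a fresh single-node instance. A short sketch (an illustrative addition; address="auto" connects to a running cluster from the head node):

import ray

# Connect to the existing cluster instead of starting a new one.
ray.init(address="auto")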

Tune Quick Start


Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

pip install ray[tune] torch torchvision filelock

This example runs a parallel grid search to train a Convolutional Neural Network using PyTorch.

import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import (
    get_data_loaders, ConvNet, train, test)


def train_mnist(config):
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.track.log(mean_accuracy=acc)


analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
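
The returned object is a pandas DataFrame with one row per trial, so standard pandas operations apply. A hypothetical inspection (the "mean_accuracy" column name is assumed from the metric logged above):

# Best accuracy reached across the three learning-rate trials.
print(df["mean_accuracy"].max())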

If TensorBoard is installed, you can automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start


RLlib is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.

pip install tensorflow  # or tensorflow-gpu
pip install ray[rllib]  # also recommended: ray[debug]

import gym
from gym.spaces import Discrete, Box
from ray import tune

class SimpleCorridor(gym.Env):
    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5}})

More Information

Getting Involved

This version: 0.8.4

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • ray-0.8.4-cp38-cp38-manylinux1_x86_64.whl (20.2 MB): CPython 3.8, manylinux1 x86-64

  • ray-0.8.4-cp38-cp38-macosx_10_13_x86_64.whl (48.4 MB): CPython 3.8, macOS 10.13+ x86-64

  • ray-0.8.4-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB): CPython 3.7m, manylinux1 x86-64

  • ray-0.8.4-cp37-cp37m-macosx_10_13_intel.whl (48.5 MB): CPython 3.7m, macOS 10.13+ Intel

  • ray-0.8.4-cp36-cp36m-manylinux1_x86_64.whl (20.2 MB): CPython 3.6m, manylinux1 x86-64

  • ray-0.8.4-cp36-cp36m-macosx_10_13_intel.whl (48.5 MB): CPython 3.6m, macOS 10.13+ Intel

  • ray-0.8.4-cp35-cp35m-manylinux1_x86_64.whl (20.2 MB): CPython 3.5m, manylinux1 x86-64

  • ray-0.8.4-cp35-cp35m-macosx_10_13_intel.whl (48.5 MB): CPython 3.5m, macOS 10.13+ Intel
