Ray provides a simple, universal API for building distributed applications.


Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • Train: Distributed Deep Learning (beta)

  • Datasets: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving

  • Workflows: Fast, Durable Application Flows (alpha)

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. Check out the full list of Ray distributed libraries here.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for _ in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))  # [1, 1, 1, 1]

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df
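Conceptually, `tune.grid_search` enumerates every listed value while `tune.choice` samples one value per trial, so the config above produces three trials (one per `alpha`). A minimal pure-Python sketch of that expansion (no Ray required; `expand_trials` is an illustrative helper, not a Tune API):

```python
import itertools
import random

# Illustrative only: how a search space with grid and choice axes expands
# into concrete trial configs. Grid axes are enumerated exhaustively;
# choice axes are sampled independently for each trial.
def expand_trials(grid_axes, choice_axes, seed=0):
    rng = random.Random(seed)
    trials = []
    for combo in itertools.product(*grid_axes.values()):
        trial = dict(zip(grid_axes, combo))
        trial.update({k: rng.choice(v) for k, v in choice_axes.items()})
        trials.append(trial)
    return trials

trials = expand_trials(
    grid_axes={"alpha": [0.001, 0.01, 0.1]},
    choice_axes={"beta": [1, 2, 3]},
)
print(len(trials))  # 3: one trial per grid value of alpha
```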

If TensorBoard is installed, automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

RLlib is an industry-grade library for reinforcement learning (RL), built on top of Ray. It offers high scalability and unified APIs for a variety of industry and research applications.

To run this example, you will need to install the following:

$ pip install "ray[rllib]" tensorflow  # or torch

import gym
from ray.rllib.agents.ppo import PPOTrainer


# Define your problem using python and openAI's gym API:
class SimpleCorridor(gym.Env):
    """Corridor in which an agent must learn to move right to reach the exit.

    ---------------------
    | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
    ---------------------

    Possible actions to choose from are: 0=left; 1=right
    Observations are floats indicating the current field index, e.g. 0.0 for
    the starting position, 1.0 for the field next to the starting position, etc.
    Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
    """

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = gym.spaces.Discrete(2)  # left and right
        self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        """Resets the episode and returns the initial observation of the new one.
        """
        self.cur_pos = 0
        # Return initial observation.
        return [self.cur_pos]

    def step(self, action):
        """Takes a single step in the episode given `action`.

        Returns:
            New observation, reward, done-flag, info-dict (empty).
        """
        # Walk left.
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        # Walk right.
        elif action == 1:
            self.cur_pos += 1
        # Set `done` flag when end of corridor (goal) reached.
        done = self.cur_pos >= self.end_pos
        # +1.0 when the goal is reached, otherwise -0.1.
        reward = 1.0 if done else -0.1
        return [self.cur_pos], reward, done, {}


# Create an RLlib Trainer instance.
trainer = PPOTrainer(
    config={
        # Env class to use (here: our gym.Env sub-class from above).
        "env": SimpleCorridor,
        # Config dict to be passed to our custom env's constructor.
        "env_config": {
            # Use corridor with 20 fields (including S and G).
            "corridor_length": 20
        },
        # Parallelize environment rollouts.
        "num_workers": 3,
    })

# Train for n iterations and report results (mean episode rewards).
# Since we have to move right at least 20 times in the env before `done`
# is set and each move gives us -0.1 reward (except the final move: +1.0),
# we can expect an optimal episode reward of -0.1*19 + 1.0 = -0.9
for i in range(5):
    results = trainer.train()
    print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")
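The reward arithmetic can be sanity-checked with a plain-Python walk-through of the corridor under an always-move-right policy; note that with the `step` logic above, `done` only triggers once `cur_pos` reaches `end_pos`, i.e. after 20 right moves (no gym or RLlib needed for this check):

```python
# Plain-Python check of the optimal episode reward for the corridor env
# above, using an always-move-right policy.
def rollout_always_right(corridor_length):
    cur_pos, total_reward, done = 0, 0.0, False
    while not done:
        cur_pos += 1                       # action 1: walk right
        done = cur_pos >= corridor_length  # same termination check as the env
        total_reward += 1.0 if done else -0.1
    return total_reward

print(round(rollout_always_right(20), 2))  # -0.9: 19 moves at -0.1, final move +1.0
```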

After training, you may want to perform action computations (inference) in your environment. Here is a minimal example of how to do this. Also check out our more detailed examples here (in particular for normal models, LSTMs, and attention nets).

# Perform inference (action computations) based on given env observations.
# Note that we are using a slightly different env here (len 10 instead of 20),
# however, this should still work as the agent has (hopefully) learned
# to "just always walk right!"
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs = env.reset()
done = False
total_reward = 0.0
# Play one episode.
while not done:
    # Compute a single action, given the current observation
    # from the environment.
    action = trainer.compute_single_action(obs)
    # Apply the computed action in the environment.
    obs, reward, done, info = env.step(action)
    # Sum up rewards for reporting purposes.
    total_reward += reward
# Report results.
print(f"Played 1 episode; total-reward={total_reward}")

Ray Serve Quick Start

https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or TensorFlow and Keras to scikit-learn models or arbitrary business logic.

  • Python First: Configure your model serving declaratively in pure Python, without needing YAMLs or JSON configs.

  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.

  • Composition Native: Create “model pipelines” by composing multiple models together to drive a single prediction.

  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.
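The composition idea can be pictured with ordinary Python functions standing in for deployments (an illustrative sketch only; the function names are made up and this is not Serve's composition API):

```python
# Illustrative "model pipeline": two toy models composed behind a shared
# preprocessing step to drive a single prediction.
def preprocess(features):
    return [x / 10.0 for x in features]  # scale inputs

def model_a(features):
    return sum(features)                 # toy score

def model_b(features):
    return max(features)                 # toy score

def pipeline(features):
    scaled = preprocess(features)
    # Ensemble the two model outputs into one prediction.
    return (model_a(scaled) + model_b(scaled)) / 2

print(round(pipeline([1.0, 2.0, 3.0]), 2))  # 0.45: mean of sum=0.6 and max=0.3
```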

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example trains and serves a scikit-learn gradient boosting classifier.

import pickle
import requests

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

from ray import serve

serve.start()

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    async def __call__(self, request):
        payload = (await request.json())["vector"]
        print(f"Received http request with data {payload}")

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
BoostingModel.deploy(model)

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }


Getting Involved

Platform        | Purpose                                                      | Estimated Response Time | Support Level
--------------- | ------------------------------------------------------------ | ----------------------- | -------------
Discourse Forum | For discussions about development and questions about usage. | < 1 day                 | Community
GitHub Issues   | For reporting bugs and filing feature requests.              | < 2 days                | Ray OSS Team
Slack           | For collaborating with other Ray users.                      | < 2 days                | Community
StackOverflow   | For asking questions about how to use Ray.                   | 3-5 days                | Community
Meetup Group    | For learning about Ray projects and best practices.          | Monthly                 | Ray DevRel
Twitter         | For staying up-to-date on new features.                      | Daily                   | Ray DevRel
