Ray provides a simple, universal API for building distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • Train: Distributed Deep Learning (beta)

  • Datasets: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving

  • Workflows: Fast, Durable Application Flows (alpha)

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. Check out the full list of Ray distributed libraries here.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))
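For reference, the parallel snippet above computes the same values as this plain serial loop (a minimal sketch, no Ray cluster needed); each f.remote(i) runs f(i) as a separate task, and ray.get collects the results in order.

```python
# Serial equivalent of the Ray snippet above: same function, same inputs,
# same output order, just without task parallelism.
def f(x):
    return x * x

print([f(i) for i in range(4)])  # [0, 1, 4, 9]
```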

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for _ in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))
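As a point of reference, here is the same logic as single-process Python (a sketch without Ray's parallelism): each counter keeps its own independent state, so after one increment apiece every read returns 1, just as the Ray version above does with each actor running in its own process.

```python
# Single-process sketch of the actor example: each instance holds its own
# state, mirroring how each Ray actor maintains state in its own process.
class LocalCounter:
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [LocalCounter() for _ in range(4)]
for c in counters:
    c.increment()
print([c.read() for c in counters])  # [1, 1, 1, 1]
```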

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df

If TensorBoard is installed, automatically visualize all trial results:

tensorboard --logdir ~/ray_results
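To build intuition for what the grid search above optimizes, this Ray-free sketch evaluates the objective at the final step (step=9) for each alpha in the grid, with beta fixed at 1; the largest alpha yields the lowest loss.

```python
# Evaluate the final-step objective for each alpha in the grid (beta=1)
# to see which configuration minimizes the reported loss.
def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100) ** (-1) + beta * 0.1

final_scores = {alpha: objective(9, alpha, 1) for alpha in [0.001, 0.01, 0.1]}
best_alpha = min(final_scores, key=final_scores.get)
print(best_alpha)  # 0.1
```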

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

RLlib is an industry-grade library for reinforcement learning (RL), built on top of Ray. It offers high scalability and unified APIs for a variety of industry and research applications.

To run this example, you will need to install the following:

$ pip install "ray[rllib]" tensorflow  # or torch

import gym
from ray.rllib.agents.ppo import PPOTrainer


# Define your problem using python and openAI's gym API:
class SimpleCorridor(gym.Env):
    """Corridor in which an agent must learn to move right to reach the exit.

    ---------------------
    | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
    ---------------------

    Possible actions to choose from are: 0=left; 1=right
    Observations are floats indicating the current field index, e.g. 0.0 for
    the starting position, 1.0 for the field next to the starting position, etc.
    Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
    """

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = gym.spaces.Discrete(2)  # left and right
        self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        """Resets the episode and returns the initial observation of the new one.
        """
        self.cur_pos = 0
        # Return initial observation.
        return [self.cur_pos]

    def step(self, action):
        """Takes a single step in the episode given `action`

        Returns:
            New observation, reward, done-flag, info-dict (empty).
        """
        # Walk left.
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        # Walk right.
        elif action == 1:
            self.cur_pos += 1
        # Set `done` flag when end of corridor (goal) reached.
        done = self.cur_pos >= self.end_pos
        # +1.0 when goal reached, otherwise -0.1.
        reward = 1.0 if done else -0.1
        return [self.cur_pos], reward, done, {}


# Create an RLlib Trainer instance.
trainer = PPOTrainer(
    config={
        # Env class to use (here: our gym.Env sub-class from above).
        "env": SimpleCorridor,
        # Config dict to be passed to our custom env's constructor.
        "env_config": {
            # Use corridor with 20 fields (including S and G).
            "corridor_length": 20
        },
        # Parallelize environment rollouts.
        "num_workers": 3,
    })

# Train for n iterations and report results (mean episode rewards).
# Since we have to move at least 19 times in the env to reach the goal and
# each move gives us -0.1 reward (except the last move at the end: +1.0),
# we can expect to reach an optimal episode reward of -0.1*18 + 1.0 = -0.8
for i in range(5):
    results = trainer.train()
    print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")

After training, you may want to perform action computations (inference) in your environment. Here is a minimal example of how to do this. Also check out our more detailed examples here (in particular for normal models, LSTMs, and attention nets).

# Perform inference (action computations) based on given env observations.
# Note that we are using a slightly different env here (len 10 instead of 20),
# however, this should still work as the agent has (hopefully) learned
# to "just always walk right!"
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs = env.reset()
done = False
total_reward = 0.0
# Play one episode.
while not done:
    # Compute a single action, given the current observation
    # from the environment.
    action = trainer.compute_single_action(obs)
    # Apply the computed action in the environment.
    obs, reward, done, info = env.step(action)
    # Sum up rewards for reporting purposes.
    total_reward += reward
# Report results.
print(f"Played 1 episode; total-reward={total_reward}")

Ray Serve Quick Start

https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or Tensorflow & Keras to Scikit-Learn models or arbitrary business logic.

  • Python First: Configure your model serving declaratively in pure Python, without needing YAMLs or JSON configs.

  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.

  • Composition Native: Allows you to create “model pipelines” by composing multiple models together to drive a single prediction.

  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

import pickle
import requests

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

from ray import serve

serve.start()

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    async def __call__(self, request):
        payload = (await request.json())["vector"]
        print(f"Received request with data {payload}")

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
BoostingModel.deploy(model)

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }
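The response-building step inside __call__ can be illustrated on its own (a minimal sketch, no sklearn needed; the three names below are the iris target_names): the classifier returns a class index, which is mapped to a human-readable species name.

```python
# Sketch of the response-building step in BoostingModel.__call__: map the
# predicted class index to its human-readable iris species name.
label_list = ["setosa", "versicolor", "virginica"]  # iris target_names

def build_response(prediction):
    return {"result": label_list[prediction]}

print(build_response(1))  # {'result': 'versicolor'}
```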

Getting Involved

  • Forum: For discussions about development, questions about usage, and feature requests.

  • GitHub Issues: For reporting bugs.

  • Twitter: Follow updates on Twitter.

  • Slack: Join our Slack channel.

  • Meetup Group: Join our meetup group.

  • StackOverflow: For questions about how to use Ray.
