A subpackage of Ray which provides the Ray C++ API.

Project description

Ray provides a simple, universal API for building distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • Train: Distributed Deep Learning (beta)

  • Datasets: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving

  • Workflows: Fast, Durable Application Flows (alpha)

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. Check out the full list of Ray distributed libraries here.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))
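
`ray.get` on a list of futures returns results in submission order, so the script above prints the squares in order. For reference, a plain-Python serial equivalent (no cluster needed):

```python
def f(x):
    return x * x

# Same computation without Ray; ray.get(futures) yields results
# in the order the tasks were submitted.
results = [f(i) for i in range(4)]
print(results)  # [0, 1, 4, 9]
```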

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))
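
Each actor increments exactly once here, so the script prints `[1, 1, 1, 1]`. A dependency-free sketch of the same logic, with ordinary objects standing in for remote actors:

```python
class Counter:
    """Plain-Python stand-in for the remote Counter actor above."""

    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n


counters = [Counter() for _ in range(4)]
for c in counters:
    c.increment()
print([c.read() for c in counters])  # [1, 1, 1, 1]
```

With Ray, each `Counter.remote()` instead lives in its own worker process, and the increments run concurrently.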

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df
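
Conceptually, the grid search evaluates `training_function` for every value of `alpha` and keeps the configuration with the lowest reported loss. A dependency-free sketch of what Tune automates, with `beta` fixed to 1 for illustration (Tune additionally samples `beta` via `tune.choice` and runs trials in parallel):

```python
def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100) ** (-1) + beta * 0.1


# Final-step (step=9) score for each alpha on the grid, beta fixed at 1.
grid = [0.001, 0.01, 0.1]
scores = {alpha: objective(9, alpha, 1) for alpha in grid}
best_alpha = min(scores, key=scores.get)
print(best_alpha)  # 0.1
```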

If TensorBoard is installed, you can automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

RLlib is an industry-grade library for reinforcement learning (RL), built on top of Ray. It offers high scalability and unified APIs for a variety of industry and research applications.

$ pip install "ray[rllib]" tensorflow  # or torch

import gym
from ray.rllib.agents.ppo import PPOTrainer


# Define your problem using Python and OpenAI's gym API:
class SimpleCorridor(gym.Env):
    """Corridor in which an agent must learn to move right to reach the exit.

    ---------------------
    | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
    ---------------------

    Possible actions to choose from are: 0=left; 1=right.
    Observations are floats indicating the current field index, e.g. 0.0 for
    the starting position, 1.0 for the field next to the starting position, etc.
    Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
    """

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = gym.spaces.Discrete(2)  # left and right
        self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        """Resets the episode and returns the initial observation of the new one.
        """
        self.cur_pos = 0
        # Return initial observation.
        return [self.cur_pos]

    def step(self, action):
        """Takes a single step in the episode given `action`

        Returns:
            New observation, reward, done-flag, info-dict (empty).
        """
        # Walk left.
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        # Walk right.
        elif action == 1:
            self.cur_pos += 1
        # Set `done` flag when end of corridor (goal) reached.
        done = self.cur_pos >= self.end_pos
        # +1.0 when goal reached, otherwise -0.1.
        reward = 1.0 if done else -0.1
        return [self.cur_pos], reward, done, {}


# Create an RLlib Trainer instance.
trainer = PPOTrainer(
    config={
        # Env class to use (here: our gym.Env sub-class from above).
        "env": SimpleCorridor,
        # Config dict to be passed to our custom env's constructor.
        "env_config": {
            # Use corridor with 20 fields (including S and G).
            "corridor_length": 20
        },
        # Parallelize environment rollouts.
        "num_workers": 3,
    })

# Train for n iterations and report results (mean episode rewards).
# Since we have to move 20 times in the env to reach the goal and
# each move gives us -0.1 reward (except the last move at the end: +1.0),
# we can expect to reach an optimal episode reward of -0.1*19 + 1.0 = -0.9
for i in range(5):
    results = trainer.train()
    print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")
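
The optimal-reward arithmetic can be checked by replaying the corridor's step logic under an always-go-right policy (plain Python, no RLlib required):

```python
# Replay the env's step logic with the optimal "always right" policy.
end_pos = 20  # corridor_length used in the trainer config above
cur_pos = 0
total_reward = 0.0
steps = 0
while cur_pos < end_pos:
    cur_pos += 1                      # action 1: walk right
    done = cur_pos >= end_pos
    total_reward += 1.0 if done else -0.1
    steps += 1
print(steps, round(total_reward, 1))  # 20 steps, total reward -0.9
```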

After training, you may want to perform action computations (inference) in your environment. Here is a minimal example of how to do this. Also check out our more detailed examples here (in particular for normal models, LSTMs, and attention nets).

# Perform inference (action computations) based on given env observations.
# Note that we are using a slightly different env here (len 10 instead of 20),
# however, this should still work as the agent has (hopefully) learned
# to "just always walk right!"
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs = env.reset()
done = False
total_reward = 0.0
# Play one episode.
while not done:
    # Compute a single action, given the current observation
    # from the environment.
    action = trainer.compute_single_action(obs)
    # Apply the computed action in the environment.
    obs, reward, done, info = env.step(action)
    # Sum up rewards for reporting purposes.
    total_reward += reward
# Report results.
print(f"Played 1 episode; total-reward={total_reward}")

Ray Serve Quick Start

https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or TensorFlow & Keras to scikit-learn models or arbitrary business logic.

  • Python First: Configure your model serving declaratively in pure Python, without needing YAMLs or JSON configs.

  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.

  • Composition Native: Allows you to create “model pipelines” by composing multiple models together to drive a single prediction.

  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

import pickle
import requests

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

from ray import serve

serve.start()

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    async def __call__(self, request):
        payload = (await request.json())["vector"]
        print(f"Received http request with data {payload}")

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
BoostingModel.deploy(model)

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }

Getting Involved

  • Forum: For discussions about development, questions about usage, and feature requests.

  • GitHub Issues: For reporting bugs.

  • Twitter: Follow updates on Twitter.

  • Slack: Join our Slack channel.

  • Meetup Group: Join our meetup group.

  • StackOverflow: For questions about how to use Ray.


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • ray_cpp-1.12.1-cp39-cp39-win_amd64.whl (18.1 MB): CPython 3.9, Windows x86-64

  • ray_cpp-1.12.1-cp39-cp39-manylinux2014_x86_64.whl (30.9 MB): CPython 3.9, manylinux2014 x86-64

  • ray_cpp-1.12.1-cp39-cp39-macosx_12_0_arm64.whl (26.7 MB): CPython 3.9, macOS 12.0+ ARM64

  • ray_cpp-1.12.1-cp39-cp39-macosx_10_15_x86_64.whl (28.9 MB): CPython 3.9, macOS 10.15+ x86-64

  • ray_cpp-1.12.1-cp38-cp38-win_amd64.whl (18.1 MB): CPython 3.8, Windows x86-64

  • ray_cpp-1.12.1-cp38-cp38-manylinux2014_x86_64.whl (30.9 MB): CPython 3.8, manylinux2014 x86-64

  • ray_cpp-1.12.1-cp38-cp38-macosx_12_0_arm64.whl (26.7 MB): CPython 3.8, macOS 12.0+ ARM64

  • ray_cpp-1.12.1-cp38-cp38-macosx_10_15_x86_64.whl (28.9 MB): CPython 3.8, macOS 10.15+ x86-64

  • ray_cpp-1.12.1-cp37-cp37m-win_amd64.whl (18.1 MB): CPython 3.7m, Windows x86-64

  • ray_cpp-1.12.1-cp37-cp37m-manylinux2014_x86_64.whl (30.9 MB): CPython 3.7m, manylinux2014 x86-64

  • ray_cpp-1.12.1-cp37-cp37m-macosx_10_15_intel.whl (28.9 MB): CPython 3.7m, macOS 10.15+ Intel (x86-64, i386)

  • ray_cpp-1.12.1-cp36-cp36m-win_amd64.whl (18.4 MB): CPython 3.6m, Windows x86-64

  • ray_cpp-1.12.1-cp36-cp36m-manylinux2014_x86_64.whl (30.9 MB): CPython 3.6m, manylinux2014 x86-64

  • ray_cpp-1.12.1-cp36-cp36m-macosx_10_15_intel.whl (28.9 MB): CPython 3.6m, macOS 10.15+ Intel (x86-64, i386)

File details

Details for the file ray_cpp-1.12.1-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.12.1-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 18.1 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.6

File hashes

Hashes for ray_cpp-1.12.1-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 4353e9071e5d37922eecd844079e85e57f11f80410017a7f6e6a1bc20fce5180
MD5 454a66e9d0c24354e2085db4e970eb17
BLAKE2b-256 c357aa04c331d0c97007d31027f80d1b99e034b6737ea82d639c85c49dded0dc

See more details on using hashes here.

File details

Details for the file ray_cpp-1.12.1-cp39-cp39-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp39-cp39-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 0f53ad69c19ba190b27bddfe4839a82551d4f6f019b527dda201397217bf97ee
MD5 6b68876bf8f3c5c48229f1251aec6b4e
BLAKE2b-256 788c4371c83cbdd70141e43c0fefcdf9256f76ab20847c3f3439053a4a4b6d6f


File details

Details for the file ray_cpp-1.12.1-cp39-cp39-macosx_12_0_arm64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp39-cp39-macosx_12_0_arm64.whl
Algorithm Hash digest
SHA256 64cbfe1a0ebdbcc9629abd4965bdcfc73c49e27896c274e21b52fd1ac1472dac
MD5 51ba746b2a73a57bd95fe555c1e1bbc4
BLAKE2b-256 49d94df73c1bb6c8738e4515559ddbf5829c37ba2812b147b283a24a79a8037e


File details

Details for the file ray_cpp-1.12.1-cp39-cp39-macosx_10_15_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp39-cp39-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 0bfbfd323e8b7279fea6b716c59a17202589057944fdddd66d128a3eb54247da
MD5 502d120a61e3476d50a360fee4590a52
BLAKE2b-256 6a13b6079ef843a924a3d1f82244fefc4a79ee5d70c80ec6aa9591b43f8628b5


File details

Details for the file ray_cpp-1.12.1-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.12.1-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 18.1 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.6

File hashes

Hashes for ray_cpp-1.12.1-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 bbbd330f75dbe483e93822256a4798cabfa6b64c323ce79f4658291b3a03f750
MD5 5a6de38c67b530050abf56316c37ee50
BLAKE2b-256 66bc93fae542a8bd1b3738190efd29c24e1b80464aef17a7c0d17f2fa8244a21


File details

Details for the file ray_cpp-1.12.1-cp38-cp38-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp38-cp38-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 ab315a00ec6b32fc8695cc2b275ac4acc3df77cef8063a44fd20044074a2c312
MD5 12096f1fa8de52bcb091fd0786e64df6
BLAKE2b-256 690b84da16caa2ce0e3555c4f14b8957e0930898f403664190c785eaf925ccc2


File details

Details for the file ray_cpp-1.12.1-cp38-cp38-macosx_12_0_arm64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp38-cp38-macosx_12_0_arm64.whl
Algorithm Hash digest
SHA256 f6a6d4bbe09a522cdf2d84e9889256e3785f39388939c300e1c23f76a0c5329d
MD5 3b763501286ca3ff2f85c6959f212e10
BLAKE2b-256 1d20d1a529e27abfff8cade292e29abb4bcbebbb1c0680530af119e10e47372f


File details

Details for the file ray_cpp-1.12.1-cp38-cp38-macosx_10_15_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp38-cp38-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 e9f82b76d8bf9c47789d3b8a1c9b01f813475ac150d1ce6c4850452e07f96a54
MD5 80beb7e57ecd66794d68547d24dad746
BLAKE2b-256 60dd78ed35a00e7d92ec2ca9d11679df9868cf92ca3b9f6ae6a939953ae11718


File details

Details for the file ray_cpp-1.12.1-cp37-cp37m-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.12.1-cp37-cp37m-win_amd64.whl
  • Upload date:
  • Size: 18.1 MB
  • Tags: CPython 3.7m, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.6

File hashes

Hashes for ray_cpp-1.12.1-cp37-cp37m-win_amd64.whl
Algorithm Hash digest
SHA256 890e5fce2d6c49d8b8b743a30ca1b69a5161b81275a4b1ca6d525e7723431d94
MD5 797e5ccd54175bea50f71fb75f99f8b7
BLAKE2b-256 faa209e97faebddf818d3b31c5fa1692ea88b57996549a69d0a4b8af342c9191


File details

Details for the file ray_cpp-1.12.1-cp37-cp37m-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp37-cp37m-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 fd3e6d181113f00a2cf9b6302031d62bfc49025740b24445b10b75783da61b3f
MD5 0a1911b53b8f030aa3fd90c65c757fce
BLAKE2b-256 70a28dbbd4ec448e858f046e9069e71c1bcce9c7192d9483a285169d342c7651


File details

Details for the file ray_cpp-1.12.1-cp37-cp37m-macosx_10_15_intel.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp37-cp37m-macosx_10_15_intel.whl
Algorithm Hash digest
SHA256 7d11742d16c62e021610f0df294ebfc5b81851a10a5e27029698a8ee414aaac1
MD5 b88566e0a011b0d44b0450478267a499
BLAKE2b-256 3f57eb70405c79b9b2fe1524efdc8a9c82edf8c2acbc87347f27db76ca55d837


File details

Details for the file ray_cpp-1.12.1-cp36-cp36m-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.12.1-cp36-cp36m-win_amd64.whl
  • Upload date:
  • Size: 18.4 MB
  • Tags: CPython 3.6m, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.6

File hashes

Hashes for ray_cpp-1.12.1-cp36-cp36m-win_amd64.whl
Algorithm Hash digest
SHA256 01d377ca3c472e35e2300e61581416fb05f0e628aaa2f11db9c1de6887dcd0ef
MD5 40d3340f900bfa4cc84331da603a65d8
BLAKE2b-256 8d064f939b8f3b76ebd17edc2ee61edea84eaaa12246faebd9b1dd4f23c1ca2a


File details

Details for the file ray_cpp-1.12.1-cp36-cp36m-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp36-cp36m-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 7f9a4fbcca5e75907da0ab93604bdfaadfc5d736b53a4b8088f94c6cacd2a09b
MD5 6facd7da1367fe6ef2e85e83bdcbf67a
BLAKE2b-256 a2c9f1d63b31df0560ed981f4e2e2ebee404282e8159d262475703b1c2e35e10


File details

Details for the file ray_cpp-1.12.1-cp36-cp36m-macosx_10_15_intel.whl.

File metadata

File hashes

Hashes for ray_cpp-1.12.1-cp36-cp36m-macosx_10_15_intel.whl
Algorithm Hash digest
SHA256 c55d0fecdedfa4ca19de83101c7c66bc7179d64b65c93488c1521a6c9ce23aa6
MD5 057d9b2306dc21f83293fb4756b3ea14
BLAKE2b-256 20588050d6ed0cc8fc35476ad7aca4ec5516e990334778b253ae28494becf1cc

