
A subpackage of Ray which provides the Ray C++ API.

Project description


Ray provides a simple, universal API for building distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • Train: Distributed Deep Learning (beta)

  • Datasets: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving

  • Workflows: Fast, Durable Application Flows (alpha)

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. Check out the full list of Ray distributed libraries here.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))
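For reference, `ray.get(futures)` above returns the same values a serial loop would produce, i.e. `[0, 1, 4, 9]`; only the execution is parallelized. A plain-Python equivalent:

```python
# Serial equivalent of the parallel Ray tasks above (no Ray needed).
def f(x):
    return x * x

print([f(i) for i in range(4)])  # [0, 1, 4, 9]
```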

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))
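Stripped of the `@ray.remote` decorator and the `.remote()` calls, the actor example is ordinary Python: each of the four counters is incremented exactly once, so `ray.get(futures)` prints `[1, 1, 1, 1]`. A plain, non-distributed sketch of the same logic:

```python
# Plain-Python equivalent of the Counter actors above: each counter
# is incremented once, then read.
class Counter:
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter() for _ in range(4)]
for c in counters:
    c.increment()
print([c.read() for c in counters])  # [1, 1, 1, 1]
```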

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df
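To see what the search explores, the toy objective can be evaluated directly without Tune. A quick stand-alone check (plain Python, using the grid values from the config above) shows that at the final step, larger `alpha` gives a lower loss for any fixed `beta`:

```python
# Evaluate the toy objective from above at the last training step (step=9)
# for each alpha in the grid, holding beta fixed.
def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100) ** (-1) + beta * 0.1

losses = {alpha: objective(9, alpha, beta=1) for alpha in [0.001, 0.01, 0.1]}
best_alpha = min(losses, key=losses.get)
print(best_alpha)  # 0.1
```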

If TensorBoard is installed, you can automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

RLlib is an industry-grade library for reinforcement learning (RL), built on top of Ray. It offers high scalability and unified APIs for a variety of industry and research applications.

$ pip install "ray[rllib]" tensorflow  # or torch

import gym
from ray.rllib.agents.ppo import PPOTrainer


# Define your problem using Python and OpenAI's gym API:
class SimpleCorridor(gym.Env):
    """Corridor in which an agent must learn to move right to reach the exit.

    ---------------------
    | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
    ---------------------

    Possible actions to choose from are: 0=left; 1=right
    Observations are floats indicating the current field index, e.g. 0.0 for
    the starting position, 1.0 for the field next to the starting position, etc.
    Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
    """

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = gym.spaces.Discrete(2)  # left and right
        self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        """Resets the episode and returns the initial observation of the new one.
        """
        self.cur_pos = 0
        # Return initial observation.
        return [self.cur_pos]

    def step(self, action):
        """Takes a single step in the episode given `action`

        Returns:
            New observation, reward, done-flag, info-dict (empty).
        """
        # Walk left.
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        # Walk right.
        elif action == 1:
            self.cur_pos += 1
        # Set `done` flag when end of corridor (goal) reached.
        done = self.cur_pos >= self.end_pos
        # +1.0 when goal reached, otherwise -0.1.
        reward = 1.0 if done else -0.1
        return [self.cur_pos], reward, done, {}


# Create an RLlib Trainer instance.
trainer = PPOTrainer(
    config={
        # Env class to use (here: our gym.Env sub-class from above).
        "env": SimpleCorridor,
        # Config dict to be passed to our custom env's constructor.
        "env_config": {
            # Use corridor with 20 fields (including S and G).
            "corridor_length": 20
        },
        # Parallelize environment rollouts.
        "num_workers": 3,
    })

# Train for n iterations and report results (mean episode rewards).
# With end_pos=20 we have to move right 20 times to reach the goal, and
# each move gives us -0.1 reward (except the final move into the goal: +1.0),
# so the optimal episode reward is -0.1*19 + 1.0 = -0.9.
for i in range(5):
    results = trainer.train()
    print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")

After training, you may want to perform action computations (inference) in your environment. Here is a minimal example of how to do this. Also check out our more detailed examples here (in particular for normal models, LSTMs, and attention nets).

# Perform inference (action computations) based on given env observations.
# Note that we are using a slightly different env here (len 10 instead of 20),
# however, this should still work as the agent has (hopefully) learned
# to "just always walk right!"
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs = env.reset()
done = False
total_reward = 0.0
# Play one episode.
while not done:
    # Compute a single action, given the current observation
    # from the environment.
    action = trainer.compute_single_action(obs)
    # Apply the computed action in the environment.
    obs, reward, done, info = env.step(action)
    # Sum up rewards for reporting purposes.
    total_reward += reward
# Report results.
print(f"Played 1 episode; total-reward={total_reward}")
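The reward arithmetic can be sanity-checked without Ray or gym by replaying the "always walk right" policy against the corridor dynamics defined above:

```python
# Stand-alone replay of the corridor dynamics for the "always walk right"
# policy: every step costs -0.1 except the final step into the goal (+1.0).
def play_always_right(corridor_length):
    pos, total_reward, done = 0, 0.0, False
    while not done:
        pos += 1                       # action 1: walk right
        done = pos >= corridor_length  # goal reached?
        total_reward += 1.0 if done else -0.1
    return total_reward

print(round(play_always_right(10), 1))  # 0.1  (9 * -0.1 + 1.0)
```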

Ray Serve Quick Start

https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or Tensorflow & Keras to Scikit-Learn models or arbitrary business logic.

  • Python First: Configure your model serving declaratively in pure Python, without needing YAMLs or JSON configs.

  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.

  • Composition Native: Allow you to create “model pipelines” by composing multiple models together to drive a single prediction.

  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

import pickle
import requests

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

from ray import serve

serve.start()

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    async def __call__(self, request):
        payload = (await request.json())["vector"]
        print(f"Received request with data {payload}")

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
BoostingModel.deploy(model)

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }


Getting Involved

Platform        | Purpose                                                      | Estimated Response Time | Support Level
Discourse Forum | For discussions about development and questions about usage. | < 1 day                 | Community
GitHub Issues   | For reporting bugs and filing feature requests.              | < 2 days                | Ray OSS Team
Slack           | For collaborating with other Ray users.                      | < 2 days                | Community
StackOverflow   | For asking questions about how to use Ray.                   | 3-5 days                | Community
Meetup Group    | For learning about Ray projects and best practices.          | Monthly                 | Ray DevRel
Twitter         | For staying up-to-date on new features.                      | Daily                   | Ray DevRel


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

  • ray_cpp-1.13.0-cp310-cp310-manylinux2014_x86_64.whl (31.7 MB): CPython 3.10

  • ray_cpp-1.13.0-cp310-cp310-macosx_12_0_arm64.whl (27.1 MB): CPython 3.10, macOS 12.0+ ARM64

  • ray_cpp-1.13.0-cp310-cp310-macosx_10_15_universal2.whl (29.4 MB): CPython 3.10, macOS 10.15+ universal2 (ARM64, x86-64)

  • ray_cpp-1.13.0-cp39-cp39-win_amd64.whl (18.4 MB): CPython 3.9, Windows x86-64

  • ray_cpp-1.13.0-cp39-cp39-manylinux2014_x86_64.whl (31.7 MB): CPython 3.9

  • ray_cpp-1.13.0-cp39-cp39-macosx_12_0_arm64.whl (27.1 MB): CPython 3.9, macOS 12.0+ ARM64

  • ray_cpp-1.13.0-cp39-cp39-macosx_10_15_x86_64.whl (29.4 MB): CPython 3.9, macOS 10.15+ x86-64

  • ray_cpp-1.13.0-cp38-cp38-win_amd64.whl (18.4 MB): CPython 3.8, Windows x86-64

  • ray_cpp-1.13.0-cp38-cp38-manylinux2014_x86_64.whl (31.7 MB): CPython 3.8

  • ray_cpp-1.13.0-cp38-cp38-macosx_12_0_arm64.whl (27.1 MB): CPython 3.8, macOS 12.0+ ARM64

  • ray_cpp-1.13.0-cp38-cp38-macosx_10_15_x86_64.whl (29.4 MB): CPython 3.8, macOS 10.15+ x86-64

  • ray_cpp-1.13.0-cp37-cp37m-win_amd64.whl (18.4 MB): CPython 3.7m, Windows x86-64

  • ray_cpp-1.13.0-cp37-cp37m-manylinux2014_x86_64.whl (31.7 MB): CPython 3.7m

  • ray_cpp-1.13.0-cp37-cp37m-macosx_10_15_intel.whl (29.4 MB): CPython 3.7m, macOS 10.15+ Intel (x86-64, i386)

  • ray_cpp-1.13.0-cp36-cp36m-manylinux2014_x86_64.whl (31.7 MB): CPython 3.6m

  • ray_cpp-1.13.0-cp36-cp36m-macosx_10_15_intel.whl (29.4 MB): CPython 3.6m, macOS 10.15+ Intel (x86-64, i386)

File details

Details for the file ray_cpp-1.13.0-cp310-cp310-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp310-cp310-manylinux2014_x86_64.whl:
SHA256: 3a05f89b519d01c50c14fa5e75ff0890ccea8515855a42e344edbbbf04556dc1
MD5: 11a2e51d8e8356ee94d6d77461f54289
BLAKE2b-256: aaa58ef54f9496b33f6943ba48a35368eb9eea3495cd42d08208ae00db4085e0

See more details on using hashes here.

File details

Details for the file ray_cpp-1.13.0-cp310-cp310-macosx_12_0_arm64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp310-cp310-macosx_12_0_arm64.whl:
SHA256: 50401e345a644583e85183048e74d9bcf986ff6363554ec1c7b68745a7063939
MD5: 123c7f79efc4c1327fa660fb79ead0cc
BLAKE2b-256: d0ac4a3ea38f01b5834814b7b68412f2b66dca5b0c76e7a6e5b2f4cb72fc1cd2

File details

Details for the file ray_cpp-1.13.0-cp310-cp310-macosx_10_15_universal2.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp310-cp310-macosx_10_15_universal2.whl:
SHA256: f08e6838e1cf5dee67fa0ab2561c94d4a41ec889622c501879f57a851fee0c1c
MD5: 5c1cfdb95ee9ef4e8275e83cae8eafa4
BLAKE2b-256: a9e902e586a16825d53325794b430efd537a80715843b006cfbc6d3b23b71bf9

File details

Details for the file ray_cpp-1.13.0-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.13.0-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 18.4 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.7.10

File hashes

Hashes for ray_cpp-1.13.0-cp39-cp39-win_amd64.whl:
SHA256: 9a3514a55a1e0e5b36404e993dabc696e92a880ad1aa98384f9ea6e90b2c9da2
MD5: 846c16dd9ae4dddf8a8feec6108b58bb
BLAKE2b-256: 50d0d5f245ad38b0f6f4b6f6d18a74140f55fd8f51dbb9bafae9f899bc03c8fa

File details

Details for the file ray_cpp-1.13.0-cp39-cp39-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp39-cp39-manylinux2014_x86_64.whl:
SHA256: ec35dbacc758297a2c0603b6fa52f3983982198c5cee3fc8c95b8ec60808550a
MD5: bb42e762c3eef29fffe12bf68d37d2ab
BLAKE2b-256: 5b0d830f9fd7297bbb1cc14af96ab8f0962c35c7962f8f932daa132ac374b7b9

File details

Details for the file ray_cpp-1.13.0-cp39-cp39-macosx_12_0_arm64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp39-cp39-macosx_12_0_arm64.whl:
SHA256: 97cbc45b08b5a94b1923c4c5e0d4500136fe765bda6474a835c6526ce77e3bc2
MD5: 8e1ceb1c7b8d3d4c24a176a2a403e79a
BLAKE2b-256: 5fc5688f5689612aa2576412783a1427bd3b1e2908208828375a0b10217b4e0d

File details

Details for the file ray_cpp-1.13.0-cp39-cp39-macosx_10_15_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp39-cp39-macosx_10_15_x86_64.whl:
SHA256: d9157b0ddd686668dfc56058d856c3d8174ed9d02a04781090058ec3582eeefc
MD5: 6c8fbf678e3e742b83211137350febf2
BLAKE2b-256: 490eee636e827a56fa5da170bf86e8b1ec7dd27e3134cbf21e0d9336bfb848dd

File details

Details for the file ray_cpp-1.13.0-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.13.0-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 18.4 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.7.10

File hashes

Hashes for ray_cpp-1.13.0-cp38-cp38-win_amd64.whl:
SHA256: ee3041b9e1e1379238e725fd121be84a1f829760f711989facc578681dadd705
MD5: f8b2481d0238e2e17cef3920b4950bef
BLAKE2b-256: eeaf74590eb7209f8dae4a9c5c2fb1d61e19ac6f8bd5032c362cddc72570e561

File details

Details for the file ray_cpp-1.13.0-cp38-cp38-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp38-cp38-manylinux2014_x86_64.whl:
SHA256: f12a2394e76690020b4db5f5e136f40fc2e40f5a18856b854d38f8d1c83470e6
MD5: ad7b3817af02963f988ab3b73a2be1a1
BLAKE2b-256: c2ff3e3e499bc6fcbd769e37e717f41c9600cdde69d0479922a55b77c218ea35

File details

Details for the file ray_cpp-1.13.0-cp38-cp38-macosx_12_0_arm64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp38-cp38-macosx_12_0_arm64.whl:
SHA256: d9bc3d2e969aafef4bc004ec45195d4ef493fb6c52c99c76efdbc3ef78d9c429
MD5: eb0df344b8dc85d5509690b011d2773a
BLAKE2b-256: db833aaa8742f67f8486e884db15603be657531c9e25fc55bf2e5f6e2a39b287

File details

Details for the file ray_cpp-1.13.0-cp38-cp38-macosx_10_15_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp38-cp38-macosx_10_15_x86_64.whl:
SHA256: 9204345c89998667cdde899f69d7d357fa8850f57a219137a46fbae2cfdbd8c0
MD5: de8fb2f92c7ada2b40adade03d271512
BLAKE2b-256: 83e67da3fdc0ea810d2fe4848c5ca2215516963275cb5c5f3d785e9d2e385091

File details

Details for the file ray_cpp-1.13.0-cp37-cp37m-win_amd64.whl.

File metadata

  • Download URL: ray_cpp-1.13.0-cp37-cp37m-win_amd64.whl
  • Upload date:
  • Size: 18.4 MB
  • Tags: CPython 3.7m, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.7.10

File hashes

Hashes for ray_cpp-1.13.0-cp37-cp37m-win_amd64.whl:
SHA256: ae1c830f6ba26fdbf061edbec047d470bebe8a3f2d037e4ba98cf56dd7abf9de
MD5: f6ec4eba48534641a1bb7023663695c4
BLAKE2b-256: 5a30a691e343a1a0f71bf5337ed7c5be62276e667d3cac589e98c2fdab2aa64c

File details

Details for the file ray_cpp-1.13.0-cp37-cp37m-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp37-cp37m-manylinux2014_x86_64.whl:
SHA256: e97e6b201a4a69f83d8b4e401bce5e2e55dc31d98201281cb2b111d0db1d790f
MD5: 6d7aba2b42ebd41eebf4fa29ae407a2a
BLAKE2b-256: 615d0fdfa08e06ab0df69c19ccc9ca15e941c4060ddedc2f42bd9d29ad971804

File details

Details for the file ray_cpp-1.13.0-cp37-cp37m-macosx_10_15_intel.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp37-cp37m-macosx_10_15_intel.whl:
SHA256: 3bc1225a9e10999df9cd6e95879eb3c0064600cba088c65d17ecb01fa26d2f4d
MD5: b001fea2517bef1b3a4d11a2e0195b4d
BLAKE2b-256: aef4c4b00b4e0b2b96bca3a658740b74747adbad4129a7a496bfb0dc713b1cc8

File details

Details for the file ray_cpp-1.13.0-cp36-cp36m-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp36-cp36m-manylinux2014_x86_64.whl:
SHA256: 0a0e9514f300440b7dcb45994f1b75573d07e2f5e25f5a8da1344f507a75cf4b
MD5: 8e1de5290b2aba9d54e80a06f8aaad40
BLAKE2b-256: 672d11099b953ab8c310f8c1e8a786bd09a53d58fd6cc9d6ac2679b13815a57f

File details

Details for the file ray_cpp-1.13.0-cp36-cp36m-macosx_10_15_intel.whl.

File metadata

File hashes

Hashes for ray_cpp-1.13.0-cp36-cp36m-macosx_10_15_intel.whl:
SHA256: 79a2b4258780443fc729026b72f3036abbdef09dce2a8b2dddaad1a58574479c
MD5: fb33475817e8257413d4f3d0dffa239a
BLAKE2b-256: 1d0475f137397588590fa03cf71fa74fc7c2205cfda9bafd69378ac3ae917f9a
