A modular, primitive-first, python-first PyTorch library for Reinforcement Learning

TorchRL

Documentation | TensorDict | Features | Examples, tutorials and demos | Citation | Installation | Asking a question | Contributing

TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch.

🚀 What's New

LLM API - Complete Framework for Language Model Fine-tuning

TorchRL now includes a comprehensive LLM API for post-training and fine-tuning of language models! This new framework provides everything you need for RLHF, supervised fine-tuning, and tool-augmented training:

  • 🤖 Unified LLM Wrappers: Seamless integration with Hugging Face models and vLLM inference engines - more to come!
  • 💬 Conversation Management: Advanced History class for multi-turn dialogue with automatic chat template detection
  • 🛠️ Tool Integration: Built-in support for Python code execution, function calling, and custom tool transforms
  • 🎯 Specialized Objectives: GRPO (Group Relative Policy Optimization) and SFT loss functions optimized for language models
  • ⚡ High-Performance Collectors: Async data collection with distributed training support
  • 🔄 Flexible Environments: Transform-based architecture for reward computation, data loading, and conversation augmentation

The LLM API follows TorchRL's modular design principles, allowing you to mix and match components for your specific use case. Check out the complete documentation and GRPO implementation example to get started!

Quick LLM API Example
from torchrl.envs.llm import ChatEnv
from torchrl.envs.llm.transforms import PythonInterpreter
from torchrl.modules.llm import TransformersWrapper
from torchrl.objectives.llm import GRPOLoss
from torchrl.collectors.llm import LLMCollector

# Create environment with Python tool execution
env = ChatEnv(
    tokenizer=tokenizer,
    system_prompt="You are an assistant that can execute Python code.",
    batch_size=[1]
).append_transform(PythonInterpreter())

# Wrap your language model
llm = TransformersWrapper(
    model=model,
    tokenizer=tokenizer,
    input_mode="history"
)

# Set up GRPO training
loss_fn = GRPOLoss(llm, critic, gamma=0.99)
collector = LLMCollector(env, llm, frames_per_batch=100)

# Training loop
for data in collector:
    loss = loss_fn(data)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Key features

  • 🐍 Python-first: Designed with Python as the primary language for ease of use and flexibility
  • ⏱️ Efficient: Optimized for performance to support demanding RL research applications
  • 🧮 Modular, customizable, extensible: Highly modular architecture allows for easy swapping, transformation, or creation of new components
  • 📚 Documented: Thorough documentation ensures that users can quickly understand and utilize the library
  • ✅ Tested: Rigorously tested to ensure reliability and stability
  • ⚙️ Reusable functionals: Provides a set of highly reusable functions for cost functions, returns, and data processing

Design Principles

  • 🔥 Aligns with PyTorch ecosystem: Follows the structure and conventions of popular PyTorch libraries (e.g., dataset pillar, transforms, models, data utilities)
  • ➖ Minimal dependencies: Only requires Python standard library, NumPy, and PyTorch; optional dependencies for common environment libraries (e.g., OpenAI Gym) and datasets (D4RL, OpenX...)

Read the full paper for a more curated description of the library.

Getting started

Check our Getting Started tutorials to quickly ramp up with the basic features of the library!

Documentation and knowledge base

The TorchRL documentation can be found here. It contains tutorials and the API reference.

TorchRL also provides a RL knowledge base to help you debug your code, or simply learn the basics of RL. Check it out here.

We also have some introductory videos to help you get to know the library better; check them out!

Spotlight publications

TorchRL being domain-agnostic, you can use it across many different fields. Here are a few examples:

  • ACEGEN: Reinforcement Learning of Generative Chemical Agents for Drug Discovery
  • BenchMARL: Benchmarking Multi-Agent Reinforcement Learning
  • BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO
  • OmniDrones: An Efficient and Flexible Platform for Reinforcement Learning in Drone Control
  • RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
  • Robohive: A unified framework for robot learning

Writing simplified and portable RL codebase with TensorDict

RL algorithms are very heterogeneous, and it can be hard to recycle a codebase across settings (e.g. from online to offline, from state-based to pixel-based learning). TorchRL solves this problem through TensorDict, a convenient data structure(1) that can be used to streamline one's RL codebase. With this tool, one can write a complete PPO training script in less than 100 lines of code!

Code
import torch
from tensordict.nn import TensorDictModule
from tensordict.nn.distributions import NormalParamExtractor
from torch import nn

from torchrl.collectors import SyncDataCollector
from torchrl.data.replay_buffers import TensorDictReplayBuffer, \
  LazyTensorStorage, SamplerWithoutReplacement
from torchrl.envs.libs.gym import GymEnv
from torchrl.modules import ProbabilisticActor, ValueOperator, TanhNormal
from torchrl.objectives import ClipPPOLoss
from torchrl.objectives.value import GAE

env = GymEnv("Pendulum-v1") 
model = TensorDictModule(
  nn.Sequential(
      nn.Linear(3, 128), nn.Tanh(),
      nn.Linear(128, 128), nn.Tanh(),
      nn.Linear(128, 128), nn.Tanh(),
      nn.Linear(128, 2),
      NormalParamExtractor()
  ),
  in_keys=["observation"],
  out_keys=["loc", "scale"]
)
critic = ValueOperator(
  nn.Sequential(
      nn.Linear(3, 128), nn.Tanh(),
      nn.Linear(128, 128), nn.Tanh(),
      nn.Linear(128, 128), nn.Tanh(),
      nn.Linear(128, 1),
  ),
  in_keys=["observation"],
)
actor = ProbabilisticActor(
  model,
  in_keys=["loc", "scale"],
  distribution_class=TanhNormal,
  distribution_kwargs={"low": -1.0, "high": 1.0},
  return_log_prob=True
  )
buffer = TensorDictReplayBuffer(
  storage=LazyTensorStorage(1000),
  sampler=SamplerWithoutReplacement(),
  batch_size=50,
  )
collector = SyncDataCollector(
  env,
  actor,
  frames_per_batch=1000,
  total_frames=1_000_000,
)
loss_fn = ClipPPOLoss(actor, critic)
adv_fn = GAE(value_network=critic, average_gae=True, gamma=0.99, lmbda=0.95)
optim = torch.optim.Adam(loss_fn.parameters(), lr=2e-4)

for data in collector:  # collect data
  for epoch in range(10):
      adv_fn(data)  # compute advantage
      buffer.extend(data)
      for sample in buffer:  # consume data
          loss_vals = loss_fn(sample)
          loss_val = sum(
              value for key, value in loss_vals.items() if
              key.startswith("loss")
              )
          loss_val.backward()
          optim.step()
          optim.zero_grad()
  print(f"avg reward: {data['next', 'reward'].mean().item(): 4.4f}")

Here is an example of how the environment API relies on tensordict to carry data from one function to another during a rollout execution.

TensorDict makes it easy to re-use pieces of code across environments, models and algorithms.

Code

For instance, here's how to code a rollout in TorchRL:

- obs, done = env.reset()
+ tensordict = env.reset()
policy = SafeModule(
    model,
    in_keys=["observation_pixels", "observation_vector"],
    out_keys=["action"],
)
out = []
for i in range(n_steps):
-     action, log_prob = policy(obs)
-     next_obs, reward, done, info = env.step(action)
-     out.append((obs, next_obs, action, log_prob, reward, done))
-     obs = next_obs
+     tensordict = policy(tensordict)
+     tensordict = env.step(tensordict)
+     out.append(tensordict)
+     tensordict = step_mdp(tensordict)  # moves the "next" entries to the root to prepare the next step
- obs, next_obs, action, log_prob, reward, done = [torch.stack(vals, 0) for vals in zip(*out)]
+ out = torch.stack(out, 0)  # TensorDict supports multiple tensor operations

Using this, TorchRL abstracts away the input / output signatures of the modules, env, collectors, replay buffers and losses of the library, allowing all primitives to be easily recycled across settings.

Code

Here's another example of an off-policy training loop in TorchRL (assuming that a data collector, a replay buffer, a loss and an optimizer have been instantiated):

- for i, (obs, next_obs, action, hidden_state, reward, done) in enumerate(collector):
+ for i, tensordict in enumerate(collector):
-     replay_buffer.add((obs, next_obs, action, hidden_state, reward, done))
+     replay_buffer.add(tensordict)
    for j in range(num_optim_steps):
-         obs, next_obs, action, hidden_state, reward, done = replay_buffer.sample(batch_size)
-         loss = loss_fn(obs, next_obs, action, hidden_state, reward, done)
+         tensordict = replay_buffer.sample(batch_size)
+         loss = loss_fn(tensordict)
        loss.backward()
        optim.step()
        optim.zero_grad()

This training loop can be re-used across algorithms as it makes a minimal number of assumptions about the structure of the data.

TensorDict supports multiple tensor operations on its device and shape (the shape of a TensorDict, i.e. its batch size, is given by the leading N dimensions common to all the tensors it contains):

Code
# stack and cat
tensordict = torch.stack(list_of_tensordicts, 0)
tensordict = torch.cat(list_of_tensordicts, 0)
# reshape
tensordict = tensordict.view(-1)
tensordict = tensordict.permute(0, 2, 1)
tensordict = tensordict.unsqueeze(-1)
tensordict = tensordict.squeeze(-1)
# indexing
tensordict = tensordict[:2]
tensordict[:, 2] = sub_tensordict
# device and memory location
tensordict.cuda()
tensordict.to("cuda:1")
tensordict.share_memory_()
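
To make the batch-size semantics concrete, here is a minimal, self-contained sketch (the key names are illustrative):

Code
import torch
from tensordict import TensorDict

# the batch size must match the leading dimensions of every entry
td = TensorDict(
    {"obs": torch.zeros(4, 20, 3), "reward": torch.zeros(4, 20, 1)},
    batch_size=[4, 20],
)
assert td[0].batch_size == torch.Size([20])  # indexing acts on the batch dims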

TensorDict comes with a dedicated tensordict.nn module that contains everything you might need to write your model with it. And it is functorch and torch.compile compatible!

Code
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
+ td_module = SafeModule(transformer_model, in_keys=["src", "tgt"], out_keys=["out"])
src = torch.rand((10, 32, 512))
tgt = torch.rand((20, 32, 512))
+ tensordict = TensorDict({"src": src, "tgt": tgt}, batch_size=[])  # src and tgt share no common leading dims, so the batch size is empty
- out = transformer_model(src, tgt)
+ td_module(tensordict)
+ out = tensordict["out"]

The TensorDictSequential class allows you to branch sequences of nn.Module instances in a highly modular way. For instance, here is an implementation of a transformer using the encoder and decoder blocks:

encoder_module = TransformerEncoder(...)
encoder = TensorDictModule(encoder_module, in_keys=["src", "src_mask"], out_keys=["memory"])
decoder_module = TransformerDecoder(...)
decoder = TensorDictModule(decoder_module, in_keys=["tgt", "memory"], out_keys=["output"])
transformer = TensorDictSequential(encoder, decoder)
assert transformer.in_keys == ["src", "src_mask", "tgt"]
assert transformer.out_keys == ["memory", "output"]

TensorDictSequential allows you to isolate subgraphs by querying a set of desired input / output keys:

transformer.select_subsequence(out_keys=["memory"])  # returns the encoder
transformer.select_subsequence(in_keys=["tgt", "memory"])  # returns the decoder

Check TensorDict tutorials to learn more!

Features

  • A common interface for environments which supports common libraries (OpenAI gym, DeepMind control suite, etc.)(1) and state-less execution (e.g. model-based environments). The batched environment containers allow parallel execution(2). A common PyTorch-first tensor-specification class is also provided. TorchRL's environments API is simple but stringent and specific. Check the documentation and tutorial to learn more!

    Code
    env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
    env_parallel = ParallelEnv(4, env_make)  # creates 4 envs in parallel
    tensordict = env_parallel.rollout(max_steps=20, policy=None)  # random rollout (no policy given)
    assert tensordict.shape == (4, 20)  # 4 envs, 20 steps rollout
    env_parallel.action_spec.is_in(tensordict["action"])  # spec check returns True
    
  • multiprocess and distributed data collectors(2) that work synchronously or asynchronously. Through the use of TensorDict, TorchRL's training loops are made very similar to regular training loops in supervised learning (although the "dataloader" -- read data collector -- is modified on-the-fly):

    Code
    env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
    collector = MultiaSyncDataCollector(
        [env_make, env_make],
        policy=policy,
        devices=["cuda:0", "cuda:0"],
        total_frames=10000,
        frames_per_batch=50,
        ...
    )
    for i, tensordict_data in enumerate(collector):
        loss = loss_module(tensordict_data)
        loss.backward()
        optim.step()
        optim.zero_grad()
        collector.update_policy_weights_()
    

    Check our distributed collector examples to learn more about ultra-fast data collection with TorchRL.

  • efficient(2) and generic(1) replay buffers with modularized storage:

    Code
    storage = LazyMemmapStorage(  # memory-mapped (physical) storage
        cfg.buffer_size,
        scratch_dir="/tmp/"
    )
    buffer = TensorDictPrioritizedReplayBuffer(
        alpha=0.7,
        beta=0.5,
        collate_fn=lambda x: x,
        pin_memory=device != torch.device("cpu"),
        prefetch=10,  # multi-threaded sampling
        storage=storage
    )
    

    Replay buffers are also offered as wrappers around common datasets for offline RL:

    Code
    from torchrl.data.replay_buffers import SamplerWithoutReplacement
    from torchrl.data.datasets.d4rl import D4RLExperienceReplay
    data = D4RLExperienceReplay(
        "maze2d-open-v0",
        split_trajs=True,
        batch_size=128,
        sampler=SamplerWithoutReplacement(drop_last=True),
    )
    for sample in data:  # or alternatively sample = data.sample()
        fun(sample)
    
  • cross-library environment transforms(1), executed on device and in a vectorized fashion(2), which process and prepare the data coming out of the environments to be used by the agent:

    Code
    env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
    env_base = ParallelEnv(4, env_make, device="cuda:0")  # creates 4 envs in parallel
    env = TransformedEnv(
        env_base,
        Compose(
            ToTensorImage(),
            ObservationNorm(loc=0.5, scale=1.0)),  # executes the transforms once and on device
    )
    tensordict = env.reset()
    assert tensordict.device == torch.device("cuda:0")
    

    Other transforms include: reward scaling (RewardScaling), shape operations (concatenation of tensors, unsqueezing etc.), concatenation of successive operations (CatFrames), resizing (Resize) and many more.
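
    For instance, a pixel-based pipeline can combine several of these transforms. A minimal sketch, with illustrative parameter values:

    Code
    env = TransformedEnv(
        GymEnv("Pendulum-v1", from_pixels=True),
        Compose(
            ToTensorImage(),
            Resize(84, 84),          # resize pixel observations
            CatFrames(N=4, dim=-3),  # stack the last 4 frames along the channel dim
        ),
    )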

    Unlike other libraries, the transforms are stacked as a list (and not wrapped in each other), which makes it easy to add and remove them at will:

    env.insert_transform(0, NoopResetEnv())  # inserts the NoopResetEnv transform at the index 0
    

    Nevertheless, transforms can access and execute operations on the parent environment:

    transform = env.transform[1]  # gathers the second transform of the list
    parent_env = transform.parent  # returns the base environment of the second transform, i.e. the base env + the first transform
    
  • various tools for distributed learning (e.g. memory mapped tensors)(2);
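
    For instance, memory-mapped tensors (the building block behind LazyMemmapStorage above) can be created directly. A minimal sketch, with an illustrative file path:

    Code
    import torch
    from tensordict import MemoryMappedTensor

    # a memory-mapped tensor lives in a file on disk and can be shared
    # across processes without copying
    data = MemoryMappedTensor.empty(
        (1_000, 3), dtype=torch.float32, filename="/tmp/storage.memmap"
    )
    data[0] = torch.ones(3)  # writes go straight to the memory-mapped file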

  • various architectures and models (e.g. actor-critic)(1):

    Code
    # create an nn.Module
    common_module = ConvNet(
        bias_last_layer=True,
        depth=None,
        num_cells=[32, 64, 64],
        kernel_sizes=[8, 4, 3],
        strides=[4, 2, 1],
    )
    # Wrap it in a SafeModule, indicating what key to read in and where to
    # write out the output
    common_module = SafeModule(
        common_module,
        in_keys=["pixels"],
        out_keys=["hidden"],
    )
    # Wrap the policy module in NormalParamsWrapper, such that the output
    # tensor is split in loc and scale, and scale is mapped onto a positive space
    policy_module = SafeModule(
        NormalParamsWrapper(
            MLP(num_cells=[64, 64], out_features=32, activation=nn.ELU)
        ),
        in_keys=["hidden"],
        out_keys=["loc", "scale"],
    )
    # Use a SafeProbabilisticTensorDictSequential to combine the SafeModule with a
    # SafeProbabilisticModule, indicating how to build the
    # torch.distribution.Distribution object and what to do with it
    policy_module = SafeProbabilisticTensorDictSequential(  # stochastic policy
        policy_module,
        SafeProbabilisticModule(
            in_keys=["loc", "scale"],
            out_keys="action",
            distribution_class=TanhNormal,
        ),
    )
    value_module = MLP(
        num_cells=[64, 64],
        out_features=1,
        activation=nn.ELU,
    )
    # Wrap the policy and value function in a common module
    actor_value = ActorValueOperator(common_module, policy_module, value_module)
    # standalone policy from this
    standalone_policy = actor_value.get_policy_operator()
    
  • exploration wrappers and modules to easily swap between exploration and exploitation(1):

    Code
    policy_explore = EGreedyWrapper(policy)
    with set_exploration_type(ExplorationType.RANDOM):
        tensordict = policy_explore(tensordict)  # will use eps-greedy
    with set_exploration_type(ExplorationType.DETERMINISTIC):
        tensordict = policy_explore(tensordict)  # will not use eps-greedy
    
  • A series of efficient loss modules and highly vectorized functional return and advantage computation.

    Code

    Loss modules

    from torchrl.objectives import DQNLoss
    loss_module = DQNLoss(value_network=value_network, gamma=0.99)
    tensordict = replay_buffer.sample(batch_size)
    loss = loss_module(tensordict)
    

    Advantage computation

    from torchrl.objectives.value.functional import vec_td_lambda_return_estimate
    advantage = vec_td_lambda_return_estimate(gamma, lmbda, next_state_value, reward, done, terminated)
    
  • a generic trainer class(1) that executes the aforementioned training loop. Through a hooking mechanism, it also supports any logging or data transformation operation at any given time.
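
    A minimal sketch of the trainer usage, reusing the collector, loss module and optimizer from the PPO example above (the exact constructor arguments may vary across versions):

    Code
    from torchrl.trainers import Trainer

    trainer = Trainer(
        collector=collector,
        total_frames=1_000_000,
        loss_module=loss_fn,
        optimizer=optim,
        optim_steps_per_batch=10,
    )
    # hooks can be registered at predefined points of the training loop
    trainer.register_op("post_optim", lambda *args, **kwargs: None)  # e.g. custom logging
    trainer.train()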

  • various recipes to build models that correspond to the environment being deployed.

  • LLM API: Complete framework for language model fine-tuning with unified wrappers for Hugging Face and vLLM backends, conversation management with automatic chat template detection, tool integration (Python execution, function calling), specialized objectives (GRPO, SFT), and high-performance async collectors. Perfect for RLHF, supervised fine-tuning, and tool-augmented training scenarios.

    Code
    from torchrl.envs.llm import ChatEnv
    from torchrl.modules.llm import TransformersWrapper
    from torchrl.envs.llm.transforms import PythonInterpreter
    from tensordict import TensorDict
    
    # Create environment with tool execution
    env = ChatEnv(
        tokenizer=tokenizer,
        system_prompt="You can execute Python code.",
        batch_size=[1]
    ).append_transform(PythonInterpreter())
    
    # Wrap language model for training
    llm = TransformersWrapper(
        model=model,
        tokenizer=tokenizer,
        input_mode="history"
    )
    
    # Multi-turn conversation with tool use
    obs = env.reset(TensorDict({"query": "Calculate 2+2"}, batch_size=[1]))
    llm_output = llm(obs)  # Generates response
    obs = env.step(llm_output)  # Environment processes response
    

If you feel a feature is missing from the library, please submit an issue! If you would like to contribute to new features, check our call for contributions and our contribution page.

Examples, tutorials and demos

A series of state-of-the-art implementations are provided for illustrative purposes:

| Algorithm | Compile Support** | Tensordict-free API | Modular Losses | Continuous and Discrete |
| --- | --- | --- | --- | --- |
| DQN | 1.9x | + | NA | + (through ActionDiscretizer transform) |
| DDPG | 1.87x | + | + | - (continuous only) |
| IQL | 3.22x | + | + | + |
| CQL | 2.68x | + | + | + |
| TD3 | 2.27x | + | + | - (continuous only) |
| TD3+BC | untested | + | + | - (continuous only) |
| A2C | 2.67x | + | - | + |
| PPO | 2.42x | + | - | + |
| SAC | 2.62x | + | - | + |
| REDQ | 2.28x | + | - | - (continuous only) |
| Dreamer v1 | untested | + | + (different classes) | - (continuous only) |
| Decision Transformers | untested | + | NA | - (continuous only) |
| CrossQ | untested | + | + | - (continuous only) |
| GAIL | untested | + | NA | + |
| IMPALA | untested | + | - | + |
| IQL (MARL) | untested | + | + | + |
| DDPG (MARL) | untested | + | + | - (continuous only) |
| PPO (MARL) | untested | + | - | + |
| QMIX-VDN (MARL) | untested | + | NA | + |
| SAC (MARL) | untested | + | - | + |
| RLHF | NA | + | NA | NA |
| LLM API (GRPO) | NA | + | + | NA |

** The number indicates expected speed-up compared to eager mode when executed on CPU. Numbers may vary depending on architecture and device.

and many more to come!

Code examples displaying toy code snippets and training scripts are also available.

Check the examples directory for more details about handling the various configuration settings.

We also provide tutorials and demos that give a sense of what the library can do.

Citation

If you're using TorchRL, please refer to this BibTeX entry to cite this work:

@misc{bou2023torchrl,
      title={TorchRL: A data-driven decision-making library for PyTorch}, 
      author={Albert Bou and Matteo Bettini and Sebastian Dittert and Vikash Kumar and Shagun Sodhani and Xiaomeng Yang and Gianni De Fabritiis and Vincent Moens},
      year={2023},
      eprint={2306.00577},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Installation

Create a new virtual environment:

python -m venv torchrl
source torchrl/bin/activate  # On Windows use: torchrl\Scripts\activate

Or create a conda environment where the packages will be installed.

conda create --name torchrl python=3.9
conda activate torchrl

Install dependencies:

PyTorch

Depending on how you plan to use torchrl, you may want to install the latest (nightly) PyTorch release or the latest stable version of PyTorch. See here for a detailed list of commands, including pip3 and other special installation instructions.

TorchRL offers a few pre-defined dependencies such as "torchrl[tests]", "torchrl[atari]" etc.
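
For instance, to pull in the Atari extras (assuming the extra is defined for your version):

pip3 install "torchrl[atari]"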

Torchrl

You can install the latest stable release by using

pip3 install torchrl

This should work on Linux (including AArch64 machines), Windows 10, and macOS (Apple Silicon only). On certain Windows machines (Windows 11), one should build the library locally. This can be done in two ways:

# Install and build locally v0.8.1 of the library without cloning
pip3 install git+https://github.com/pytorch/rl@v0.8.1
# Clone the library and build it locally
git clone https://github.com/pytorch/tensordict
git clone https://github.com/pytorch/rl
pip install -e tensordict
pip install -e rl

Note that a local build of tensordict requires cmake to be installed via Homebrew (macOS) or another package manager such as apt, apt-get, conda, or yum (but NOT pip), as well as pip install "pybind11[global]".
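
For example, on macOS:

brew install cmake
pip install "pybind11[global]"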

One can also build the wheels to distribute to co-workers using

pip install build
python -m build --wheel

Your wheels will be stored in ./dist/torchrl<name>.whl and are installable via

pip install torchrl<name>.whl

The nightly build can be installed via

pip3 install tensordict-nightly torchrl-nightly

which we currently only ship for Linux machines. Importantly, the nightly builds require the nightly builds of PyTorch too. Also, a local build of torchrl with the nightly build of tensordict may fail - install both nightlies or both local builds but do not mix them.

Disclaimer: As of today, TorchRL is roughly compatible with any pytorch version >= 2.1 and installing it will not directly require a newer version of pytorch to be installed. Indirectly though, tensordict still requires the latest PyTorch to be installed and we are working hard to loosen that requirement. The C++ binaries of TorchRL (mainly for prioritized replay buffers) will only work with PyTorch 2.7.0 and above. Some features (e.g., working with nested jagged tensors) may also be limited with older versions of pytorch. It is recommended to use the latest TorchRL with the latest PyTorch version unless there is a strong reason not to do so.

Optional dependencies

The following libraries can be installed depending on the usage one wants to make of torchrl:

# diverse
pip3 install tqdm tensorboard "hydra-core>=1.1" hydra-submitit-launcher

# rendering
pip3 install "moviepy<2.0.0"

# deepmind control suite
pip3 install dm_control

# gym, atari games
pip3 install "gym[atari]" "gym[accept-rom-license]" pygame

# tests
pip3 install pytest pyyaml pytest-instafail

# tensorboard
pip3 install tensorboard

# wandb
pip3 install wandb

Versioning issues can cause error messages of the type undefined symbol and the like. For these, refer to the versioning issues document for a complete explanation and proposed workarounds.

Asking a question

If you spot a bug in the library, please raise an issue in this repo.

If you have a more generic question regarding RL in PyTorch, post it on the PyTorch forum.

Contributing

Contributions to torchrl are welcome! Feel free to fork, submit issues and PRs. You can check out the detailed contribution guide here. As mentioned above, a list of open contributions can be found here.

Contributors are recommended to install pre-commit hooks (using pre-commit install). pre-commit will check for linting-related issues when the code is committed locally. You can disable the check by appending -n to your commit command: git commit -m <commit message> -n
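
For example:

pip install pre-commit
pre-commit install
# bypass the checks for a single commit
git commit -m "<commit message>" -n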

Disclaimer

This library is released as a PyTorch beta feature. BC-breaking changes are likely to happen, but they will be introduced with a deprecation warning after a few release cycles.

License

TorchRL is licensed under the MIT License. See LICENSE for details.
