RNL: Robot Navigation Learning Framework
End-to-end Deep Reinforcement Learning for Real-World Robotics Navigation in PyTorch
Overview
RNL (Robot Navigation Learning) is a comprehensive framework for training autonomous robots to navigate in unknown environments using Deep Reinforcement Learning (DRL). The framework provides a complete pipeline from environment simulation to model deployment, with support for both training and inference phases.
Key Features
- 3D environment
- Wandb integration
- LIDAR simulation
- Robot parameters
- Differential drive
- Physics simulation
- Dynamic maps
- LLM integration
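To make the differential-drive feature concrete, here is a minimal kinematics sketch. The helper `wheel_speeds` is hypothetical (it is not part of the rnl API); it assumes the standard differential-drive model, using the `wheel_distance` value from the Quick Start configuration below.

```python
# Differential-drive kinematics sketch (hypothetical helper, not the rnl API):
# convert a (linear, angular) velocity command into left/right wheel speeds.
def wheel_speeds(v_linear, v_angular, wheel_distance=0.16):
    v_left = v_linear - v_angular * wheel_distance / 2.0
    v_right = v_linear + v_angular * wheel_distance / 2.0
    return v_left, v_right

# At the maximum velocities used in the Quick Start config (0.22 m/s, 2.84 rad/s),
# turning at full rate while driving forward makes the inner wheel reverse.
left, right = wheel_speeds(0.22, 2.84, 0.16)
```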
Installation
Prerequisites
- Python 3.10+
- PyTorch 2.5.1+
- CUDA (optional, for GPU acceleration)
Install from PyPI
```shell
pip install rnl
```
Quick Start
1. Basic Training Example
```python
import numpy as np

import rnl as vault

# Configure robot parameters
robot_config = vault.robot(
    base_radius=0.105,     # Robot radius in meters
    max_vel_linear=0.22,   # Maximum linear velocity (m/s)
    max_vel_angular=2.84,  # Maximum angular velocity (rad/s)
    wheel_distance=0.16,   # Distance between wheels (m)
    weight=1.0,            # Robot weight in kg
    threshold=1.0,         # Obstacle detection threshold
    collision=0.5,         # Collision radius
    path_model="None",     # Path to a pretrained model
)

# Configure LIDAR sensor
sensor_config = vault.sensor(
    fov=2 * np.pi,   # Field of view (360 degrees)
    num_rays=20,     # Number of LIDAR rays
    min_range=0.0,   # Minimum detection range (m)
    max_range=6.0,   # Maximum detection range (m)
)

# Configure environment
env_config = vault.make(
    scale=100,           # Environment scale
    folder_map="None",   # Custom map folder
    name_map="None",     # Custom map name
    max_timestep=10000,  # Maximum episode length
)

# Configure rendering
render_config = vault.render(
    controller=False,  # Manual control (set True to drive with arrow keys)
    debug=True,        # Enable debug visualization
    plot=False,        # Disable plotting
)

# Initialize trainer
trainer = vault.Trainer(robot_config, sensor_config, env_config, render_config)

# Start training
trainer.learn(
    max_timestep_global=3000000,  # Total training steps
    seed=1,                       # Random seed
    batch_size=1024,              # Training batch size
    hidden_size=128,              # Neural network hidden size
    num_envs=4,                   # Parallel environments
    device="cuda",                # Training device
    checkpoint=10000,             # Checkpoint frequency
    use_wandb=True,               # Enable Wandb logging
    lr=0.0003,                    # Learning rate
    learn_step=512,               # Learning step size
    gae_lambda=0.95,              # GAE lambda parameter
    ent_coef=0.0,                 # Entropy coefficient
    vf_coef=0.5,                  # Value function coefficient
    max_grad_norm=0.5,            # Gradient clipping
    update_epochs=10,             # Update epochs
    name="navigation_model",      # Run name reported to Wandb
)
```
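The `sensor_config` above spreads `num_rays` beams across the field of view. Assuming uniform spacing over `fov` (a common convention, not confirmed from the rnl source), the beam directions can be sketched as:

```python
import numpy as np

# Sketch: beam angles implied by sensor_config, assuming uniform spacing
# over the field of view (an assumption about rnl's internals).
fov = 2 * np.pi
num_rays = 20
angles = np.linspace(0.0, fov, num_rays, endpoint=False)  # one angle per ray
```

With a full 360-degree `fov`, `endpoint=False` avoids duplicating the 0-radian beam at 2π.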
2. Inference Example
```python
import rnl as vault

# Use the same robot_config, sensor_config, env_config, and render_config
# as in the training example above.

# Initialize simulation
simulation = vault.Simulation(robot_config, sensor_config, env_config, render_config)

# Run inference
simulation.run()
```
3. Demo Mode
```shell
python main.py -m sim
```
Advanced Features
LLM Integration
The framework supports Large Language Model integration for automated reward engineering:
```python
trainer.learn(
    use_llm=True,
    llm_api_key="your_api_key",
    population=2,
    loop_feedback=10,
    description_task="reach the goal while avoiding obstacles efficiently",
)
```
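A plausible reading of the `population` and `loop_feedback` parameters is a search loop: each round, the LLM proposes `population` candidate reward functions, each is scored by a short training run, and the scores are fed back for `loop_feedback` rounds. The sketch below is a conceptual pseudo-implementation under that assumption, not rnl's actual internals; `propose` and `evaluate` are hypothetical callables.

```python
# Conceptual sketch of LLM-driven reward engineering (assumed behavior,
# not the rnl implementation). `propose` stands in for an LLM call that
# returns candidate reward functions; `evaluate` for a short training run.
def reward_search(propose, evaluate, population=2, loop_feedback=10):
    best_fn, best_score = None, float("-inf")
    feedback = []
    for _ in range(loop_feedback):
        candidates = propose(population, feedback)    # LLM proposes rewards
        scores = [evaluate(fn) for fn in candidates]  # score each candidate
        feedback = list(zip(candidates, scores))      # feed scores back
        fn, score = max(feedback, key=lambda cs: cs[1])
        if score > best_score:
            best_fn, best_score = fn, score
    return best_fn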
Parallel Training
The framework supports multi-environment parallel training for faster convergence:
```python
trainer.learn(
    num_envs=8,       # Number of parallel environments
    device="cuda",    # Use GPU acceleration
    batch_size=2048,  # Larger batch size for parallel training
)
```
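A quick sanity check on these numbers, assuming each update collects a rollout of `num_envs * learn_step` transitions (typical for PPO-style trainers, though not confirmed from the rnl source):

```python
# Back-of-envelope rollout arithmetic (assumed on-policy layout).
num_envs = 8
learn_step = 512
batch_size = 2048

rollout = num_envs * learn_step  # transitions collected per update
assert rollout >= batch_size     # enough data for at least one full batch
```

With these settings the rollout holds 4096 transitions, so each update yields two full minibatches per epoch.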
Troubleshooting
Common Issues
CUDA Out of Memory
- Reduce `batch_size` or `num_envs`
- Use a smaller `hidden_size`

Slow Training
- Increase `num_envs` for parallel training
- Use GPU acceleration (`device="cuda"`)
- Tune the `learn_step` parameter

Unstable Training
- Adjust the learning rate (`lr`)
- Increase `update_epochs`
Contributing
We welcome contributions! Please see our Contributing Guide for details.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Issues: GitHub Issues
- Email: grottimeireles@gmail.com