
RouteRL is a multi-agent reinforcement learning framework for urban route choice across different city networks. This subpackage supports compatibility with URB until the full integration is complete.

Project description

RouteRL

Multi-Agent Reinforcement Learning framework for modeling and simulating the collective route choices of humans and autonomous vehicles.



RouteRL is a novel framework that integrates Multi-Agent Reinforcement Learning (MARL) with a microscopic traffic simulation, SUMO, facilitating the testing and development of efficient route choice strategies. The proposed framework simulates the daily route choices of driver agents in a city, including two types:

  • human drivers, emulated using discrete choice models, and
  • autonomous vehicles (AVs), modeled as MARL agents optimizing their policies for a predefined objective.
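Human drivers' day-to-day route choices are emulated with discrete choice models. As a self-contained illustration (not RouteRL's actual API; the function name, the `beta` parameter, and the example costs are hypothetical), a multinomial logit choice over candidate routes can be sketched as:

```python
import math
import random

def logit_route_choice(costs, beta=-0.5):
    """Pick a route index via a multinomial logit model.

    Illustrative sketch only: lower-cost routes get higher choice
    probabilities; `beta` scales how sensitive drivers are to cost.
    """
    utilities = [beta * c for c in costs]       # utility decreases with travel cost
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    probs = [e / total for e in exp_u]          # choice probabilities (sum to 1)
    choice = random.choices(range(len(costs)), weights=probs, k=1)[0]
    return choice, probs

# Three candidate routes with travel costs of 10, 12, and 15 minutes
choice, probs = logit_route_choice([10.0, 12.0, 15.0])
```

A more negative `beta` makes the choice more deterministic (drivers almost always take the cheapest route); `beta` near zero makes it nearly uniform.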

RouteRL aims to advance research in MARL, traffic assignment problems, social reinforcement learning (RL), and human-AI interaction for transportation applications.

For an overview, see the paper; for more details, check the online documentation.

RouteRL usage and functionalities at a glance

The following is a simplified example of a standard MARL training loop implemented with TorchRL.

env = TrafficEnvironment(seed=42, **env_params) # initialize the traffic environment

env.start() # start the connection with SUMO

for episode in range(human_learning_episodes): # human learning phase
    env.step()

env.mutation() # some human agents transition to AV agents

collector = SyncDataCollector(env, policy, ...)  # collects experience by running the policy in the environment (TorchRL)

# training of the autonomous vehicles; human agents follow the fixed decisions learned in their learning phase
for tensordict_data in collector:
    replay_buffer.extend(tensordict_data) # store the collected experience

    # update the policies of the learning agents
    for _ in range(num_epochs):
        subdata = replay_buffer.sample()
        loss_vals = loss_module(subdata)
        loss = sum(v for k, v in loss_vals.items() if k.startswith("loss_")) # combine the loss terms

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    collector.update_policy_weights_()

policy.eval() # set the policy to evaluation mode

# testing phase using the trained policy
num_episodes = 100
for episode in range(num_episodes):
    env.rollout(len(env.machine_agents), policy=policy)

env.plot_results() # plot the results
env.stop_simulation() # stop the connection with SUMO
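The `env.mutation()` step above can be sketched conceptually: a subset of human drivers is promoted to MARL-controlled AVs. This is an illustrative stand-in only (the `mutate` helper and the agent dictionaries are hypothetical, not RouteRL internals):

```python
import random

def mutate(agents, n_avs, seed=42):
    """Promote n_avs randomly chosen human drivers to AVs.

    Illustrative only: RouteRL's env.mutation() performs this
    transition internally; this sketch just shows the idea.
    """
    rng = random.Random(seed)
    humans = [a for a in agents if a["kind"] == "human"]
    for agent in rng.sample(humans, n_avs):
        agent["kind"] = "av"  # this driver is now controlled by a MARL policy
    return agents

fleet = [{"id": i, "kind": "human"} for i in range(10)]
fleet = mutate(fleet, n_avs=3)  # 3 of the 10 human drivers become AVs
```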

Documentation

Installation

  • Prerequisite: Make sure SUMO is installed on your system. This must be done separately, following the instructions provided here.
  • Option 1: Install the latest stable version from PyPI:
      pip install routerlurb
    
  • Option 2: Clone this repository for the latest version, and manually install its dependencies:
      git clone https://github.com/COeXISTENCE-PROJECT/RouteRL.git
      cd RouteRL
      pip install -r requirements.txt
    

Reproducibility capsule

We provide an experiment script encapsulated in a CodeOcean capsule. The capsule demonstrates RouteRL's capabilities without requiring SUMO installation or dependency management.

  1. Visit the capsule link.
  2. Create a free CodeOcean account (if you don’t have one).
  3. Click Reproducible Run to execute the code in a controlled and reproducible environment.

Credits

RouteRL is part of COeXISTENCE (ERC Starting Grant, grant agreement No 101075838) and is the work of a team at Jagiellonian University in Kraków, Poland: Ahmet Onur Akman and Anastasia Psarou (main contributors), supported by Grzegorz Jamroz, Zoltán Varga, Łukasz Gorczyca, Michał Hoffmann and others, within the research group of Rafał Kucharski.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

routerlurb-1.1.0.tar.gz (2.2 MB, source)

Built Distribution


routerlurb-1.1.0-py3-none-any.whl (2.4 MB, Python 3)

File details

Details for the file routerlurb-1.1.0.tar.gz.

File metadata

  • Download URL: routerlurb-1.1.0.tar.gz
  • Upload date:
  • Size: 2.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for routerlurb-1.1.0.tar.gz
  Algorithm    Hash digest
  SHA256       a00cd680a0e5d292596c881af61c6a9a177bff01307f8db111a0b698893788b3
  MD5          256fbf0cb5aaec7cc0b14b2a1488b5db
  BLAKE2b-256  f9f2a285399409f156c62c741ac89cf80798bd89f6308fcdfba6d6a47ee0a923

See more details on using hashes here.
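To check a downloaded file against the published digests above, the standard-library `hashlib` module can be used; `sha256_of` is a helper name introduced here for illustration:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading in chunks
    so large archives do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare `sha256_of("routerlurb-1.1.0.tar.gz")` against the SHA256 value listed above; a mismatch means the download is corrupted or tampered with.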

File details

Details for the file routerlurb-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: routerlurb-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 2.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for routerlurb-1.1.0-py3-none-any.whl
  Algorithm    Hash digest
  SHA256       fa7dc3771b0e38d6739b5c169cb3b3e80275f16142fa391735a261f34406625a
  MD5          96eb4156f5881f435e6b6752ab0238fa
  BLAKE2b-256  41c4cbda73f3d2b7f7e51ba0e0f8a6abf506430fb02af7fcc148487d2519994a

