
RouteRL is a multi-agent reinforcement learning framework for urban route choice across different city networks. This subpackage supports compatibility with URB until the full integration is complete.

Project description

RouteRL

Multi-Agent Reinforcement Learning framework for modeling and simulating the collective route choices of humans and autonomous vehicles.



RouteRL is a novel framework that integrates Multi-Agent Reinforcement Learning (MARL) with the microscopic traffic simulator SUMO, facilitating the development and testing of efficient route choice strategies. The framework simulates the daily route choices of driver agents in a city, comprising two types of agents:

  • human drivers, emulated using discrete choice models;
  • autonomous vehicles (AVs), modeled as MARL agents optimizing their policies for a predefined objective.
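For illustration, human route choice via a discrete choice model can be sketched as a multinomial logit over route costs. This is a minimal sketch, not RouteRL's actual API: the names `logit_probs`, `logit_choice`, and the cost-sensitivity parameter `beta` are hypothetical.

```python
import math
import random

def logit_probs(route_costs, beta=-0.5):
    """Multinomial-logit choice probabilities over route costs (beta < 0:
    higher cost means lower choice probability)."""
    utilities = [beta * c for c in route_costs]
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def logit_choice(route_costs, beta=-0.5, rng=random):
    """Sample a route index according to the logit probabilities."""
    probs = logit_probs(route_costs, beta)
    return rng.choices(range(len(probs)), weights=probs)[0]

# Cheaper routes receive higher choice probability:
probs = logit_probs([10.0, 12.0, 15.0])
```

Each simulated day, every human agent would draw a route this way from its perceived route costs, which are updated from experienced travel times.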

RouteRL aims to advance research in MARL, traffic assignment problems, social reinforcement learning (RL), and human-AI interaction for transportation applications.

For an overview, see the paper; for more details, check the online documentation.

RouteRL usage and functionalities at a glance

The following is a simplified sketch of a standard MARL training pipeline implemented via TorchRL.

env = TrafficEnvironment(seed=42, **env_params)  # initialize the traffic environment

env.start()  # start the connection with SUMO

for episode in range(human_learning_episodes):  # human learning phase
    env.step()

env.mutation()  # some human agents transition to AV agents

collector = SyncDataCollector(env, policy, ...)  # collects experience by running the policy in the environment (TorchRL)

# training of the autonomous vehicles; human agents follow the fixed decisions learned in their learning phase
for tensordict_data in collector:
    replay_buffer.extend(tensordict_data)  # store the collected experience

    # update the policies of the learning agents
    for _ in range(num_epochs):
        subdata = replay_buffer.sample()
        loss_vals = loss_module(subdata)

        loss = sum(loss_vals[k] for k in loss_vals.keys() if k.startswith("loss_"))
        loss.backward()  # backpropagate before stepping the optimizer
        optimizer.step()
        optimizer.zero_grad()
    collector.update_policy_weights_()

policy.eval()  # set the policy to evaluation mode

# testing phase using the trained policy
num_episodes = 100
for episode in range(num_episodes):
    env.rollout(len(env.machine_agents), policy=policy)

env.plot_results()  # plot the results
env.stop_simulation()  # stop the connection with SUMO

Documentation

Installation

  • Prerequisite: Make sure you have SUMO installed on your system. This should be done separately, following the instructions provided here.
  • Option 1: Install the latest stable version from PyPI:
      pip install routerl
    
  • Option 2: Clone this repository for the latest version, and manually install its dependencies:
      git clone https://github.com/COeXISTENCE-PROJECT/RouteRL.git
      cd RouteRL
      pip install -r requirements.txt
    
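After either option, a quick sanity check can confirm the setup. This is an illustrative snippet, not part of RouteRL: it only reports whether the `routerl` package is importable and whether SUMO's `SUMO_HOME` environment variable is set.

```python
import importlib.util
import os

def check_setup():
    """Report whether routerl is importable and SUMO_HOME is set."""
    return {
        "routerl_installed": importlib.util.find_spec("routerl") is not None,
        "sumo_home_set": "SUMO_HOME" in os.environ,
    }

print(check_setup())
```

If either value is False, revisit the corresponding installation step above.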

Reproducibility capsule

We provide an experiment script encapsulated in a CodeOcean capsule. The capsule demonstrates RouteRL's capabilities without requiring a SUMO installation or dependency management.

  1. Visit the capsule link.
  2. Create a free CodeOcean account (if you don’t have one).
  3. Click Reproducible Run to execute the code in a controlled and reproducible environment.

Credits

RouteRL is part of COeXISTENCE (ERC Starting Grant, grant agreement No 101075838) and is developed by a team at Jagiellonian University in Kraków, Poland: Ahmet Onur Akman and Anastasia Psarou (main contributors), supported by Grzegorz Jamroz, Zoltán Varga, Łukasz Gorczyca, Michał Hoffmann, and others, within the research group of Rafał Kucharski.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

routerlurb-1.0.0.tar.gz (2.2 MB view details)

Uploaded Source

Built Distribution


routerlurb-1.0.0-py3-none-any.whl (2.4 MB view details)

Uploaded Python 3

File details

Details for the file routerlurb-1.0.0.tar.gz.

File metadata

  • Download URL: routerlurb-1.0.0.tar.gz
  • Upload date:
  • Size: 2.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for routerlurb-1.0.0.tar.gz
Algorithm Hash digest
SHA256 a67176ec21bff2182cef4fae33423821109ee5aa2f2abea0f1d39ede41c7628f
MD5 4a222ac9b945dbd0114a453b43fe4c17
BLAKE2b-256 d0aecfa20b9fc816c1a11822c503385aadf675418306ef5f7812d19e9ea9a27d

See more details on using hashes here.

File details

Details for the file routerlurb-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: routerlurb-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 2.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for routerlurb-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 31222de10a597f2978fc0f358a8cfb26f6434758102e052ebebd9041618e413d
MD5 6bc20998db0a0579d198a59ca798a552
BLAKE2b-256 92066b4a154a539ebefad7a2c84a51b070334fbdc1f09de5a5b154934bdee1df

See more details on using hashes here.
