An open source library for connecting AnyLogic models with Reinforcement Learning frameworks through OpenAI Gymnasium
ALPypeRL
ALPypeRL (AnyLogic Python Pipe for Reinforcement Learning) is an open source library for connecting AnyLogic simulation models with reinforcement learning frameworks that are compatible with the OpenAI Gymnasium interface (single agent).
With ALPypeRL you will be able to:
- Connect your AnyLogic model to a reinforcement learning framework of your choice (e.g. ray rllib).
- Scale your training by launching many AnyLogic models simultaneously (requires an exported model).
- Deploy and evaluate your trained policy from AnyLogic.
- Debug your AnyLogic models during training (a feature unique to ALPypeRL that remarkably improves the debugging experience).
- Identify and replicate failed runs by having control over the seed used for each run.
- Leverage AnyLogic's rich visualization while training or evaluating.
More comprehensive documentation is available, including numerous examples to help you understand the basic functionalities in greater detail.
No license is required for single-instance experiments. AnyLogic PLE is free!
NOTE: ALPypeRL has been developed using ray rllib as the base RL framework. Ray rllib is an industry-leading open source package for reinforcement learning. Because of that, ALPypeRL has certain dependencies on it (e.g. trained policy deployment and evaluation).
Environments
ALPypeRL includes two environments that make the connection between AnyLogic and your Python script possible:
- ALPypeRLConnector - The AnyLogic connector ('agent') library to be dropped into your simulation model.
- alpyperl - The Python library that you will use, after configuring your policy in your Python script, to connect to the AnyLogic model (it includes the functionality to train and evaluate). A minimal connection sketch is shown below.
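The following is a minimal, illustrative sketch (not taken verbatim from the ALPypeRL documentation) of how the two sides connect. It assumes that AnyLogicEnv can be constructed directly with the same env_config dictionary used in the training example further below; consult the documentation for the exact constructor and configuration keys.

from alpyperl import AnyLogicEnv

# Assumption: the environment accepts an rllib-style configuration dictionary.
env = AnyLogicEnv({
    'run_exported_model': False,  # assumed to mean: connect to a model launched from the AnyLogic IDE
    'verbose': True,
})

# Standard Gymnasium loop: the AnyLogic model produces observations and rewards,
# while the Python side returns (here random) actions through the ALPypeRLConnector.
observation, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
env.close()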
Installation
To install the base ALPypeRL library in Python, run pip install alpyperl.
To use the ALPypeRLConnector in AnyLogic, add the library to your Palette. That will allow you to drag and drop the connector into your model. Note that further instructions must be followed for the connector to work.
Requirements
- ALPypeRL requires the AnyLogic software (or a valid exported model). AnyLogic is licensed simulation software that includes a wide variety of libraries for modelling many industry challenges. At the moment, AnyLogic offers a free license under the name PLE (Personal Learning Edition); other editions are also available. For more information, visit the AnyLogic website.
Note: This is not a package that is currently backed by the AnyLogic support team.
- The Python package alpyperl requires (among others) 4 packages that are relatively heavy and might take longer to install depending on the host machine specs (a quick post-install check is sketched after this list):
  - ray
  - ray[rllib]
  - tensorflow
  - torch
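A quick, optional way to confirm that alpyperl and its heavy dependencies were installed correctly is to import them and print their versions (this is plain Python, not an ALPypeRL API):

# Optional post-install sanity check.
import alpyperl
import ray
import tensorflow as tf
import torch

print("ray:", ray.__version__)
print("tensorflow:", tf.__version__)
print("torch:", torch.__version__)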
API basics
Training
To be able to train your policy, you must have the following:
- An AnyLogic model that requires decisions to be taken as the simulation runs. Using the CartPole-v0 example, a decision must be taken on the direction of the force applied to the cart so the pole can be kept upright for as long as possible. For that, the AnyLogic model makes requests to the ALPypeRLConnector and consumes the returned/suggested action.
- A Python script that contains the RL framework. This is where the policy is going to be trained. For that, you will need to create your custom environment, taking into consideration what your AnyLogic model expects to return and receive. At a minimum, you must define the action and observation spaces. Please visit the CartPole-v0 example for a more detailed explanation; a sketch of such spaces is shown below, followed by the full training script.
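For orientation, here is a minimal sketch of what the two spaces could look like for a CartPole-style model, written with the standard Gymnasium API. The bounds and shapes are illustrative assumptions, not values taken from the CartPole-v0 example, and how the spaces are attached to the environment is covered in that example.

import numpy as np
from gymnasium import spaces

# Action: a single continuous value for the force applied to the cart
# (bounds are illustrative assumptions).
action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

# Observation: cart position, cart velocity, pole angle, pole angular velocity
# (bounds are illustrative assumptions and should match what the AnyLogic
# model actually returns).
observation_space = spaces.Box(
    low=np.array([-4.8, -np.inf, -0.42, -np.inf], dtype=np.float32),
    high=np.array([4.8, np.inf, 0.42, np.inf], dtype=np.float32),
    dtype=np.float32,
)

The training script then looks as follows: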
from alpyperl import AnyLogicEnv
from ray.rllib.algorithms.ppo import PPOConfig

# Set checkpoint directory.
checkpoint_dir = "./resources/trained_policies/cartpole_v0"

# Initialize policy.
policy = (
    PPOConfig()
    .env_runners(
        num_env_runners=2,
        num_envs_per_env_runner=2
    )
    .fault_tolerance(
        recreate_failed_env_runners=True,
        num_consecutive_env_runner_failures_tolerance=3
    )
    .environment(
        AnyLogicEnv,
        env_config={
            'run_exported_model': True,
            'exported_model_loc': './resources/exported_models/cartpole_v0',
            'show_terminals': False,
            'verbose': False,
            'checkpoint_dir': checkpoint_dir,
            'env_params': {
                'cartMass': 1.0,
                'poleMass': 0.1,
                'poleLength': 0.5,
            }
        }
    )
    .build()
)

# Perform training.
for _ in range(100):
    result = policy.train()

# Save policy checkpoint.
policy.save(checkpoint_dir)
print(f"Checkpoint saved in directory '{checkpoint_dir}'")
# Close all environments.
# NOTE: This must be called for ALPypeRL to save the checkpoint correctly.
policy.stop()
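Because the checkpoint is saved in the standard rllib format, it can also be restored later with plain rllib, for example to resume training. This is a hedged sketch using rllib's Algorithm.from_checkpoint (not an ALPypeRL-specific API); it assumes the exported model referenced in env_config is still available at the same location.

from ray.rllib.algorithms.algorithm import Algorithm

# Restore the algorithm (and its AnyLogicEnv configuration) from the checkpoint.
restored_policy = Algorithm.from_checkpoint("./resources/trained_policies/cartpole_v0")

# Resume training for a few more iterations, then release the environments.
for _ in range(10):
    restored_policy.train()
restored_policy.stop()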
Evaluation
Evaluating your trained policy is simple with alpyperl. See the example:
from alpyperl.serve.rllib import launch_policy_server
from alpyperl import AnyLogicEnv
from ray.rllib.algorithms.ppo import PPOConfig
# Load policy and launch server.
launch_policy_server(
    policy_config=PPOConfig(),
    env=AnyLogicEnv,
    trained_policy_loc='./resources/trained_policies/cartpole_v0',
    port=3000
)
Once the server is running, you can run your AnyLogic model and test your trained policy. You are expected to select the EVALUATE mode and specify the server URL.
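Before switching the AnyLogic model to EVALUATE mode, you can optionally verify from Python that the server is accepting connections. This uses only the standard library and assumes the server was launched on localhost with port=3000, as in the example above.

import socket

# Simple reachability check for the policy server started by launch_policy_server.
with socket.create_connection(("localhost", 3000), timeout=5):
    print("Policy server is reachable at localhost:3000")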
Bugs and/or development roadmap
At the moment, ALPypeRL is at an early stage of development. You can join the alpyperl project to raise bugs and feature requests, or submit code enhancements via pull requests.
Support ALPypeRL's development
If you are financially able to do so and would like to support the development of ALPypeRL, please reach out to marcescandellmari@gmail.com.
License
The ALPypeRL software suite is licensed under the terms of the Apache License 2.0. See LICENSE for more information.