RL-OptS
Reinforcement Learning of Optimal Search Strategies
This library provides the tools needed to study, replicate and build upon the results of the paper: “Optimal foraging strategies can be learned and outperform Lévy walks” by G. Muñoz-Gil, A. López-Incera, L. J. Fiderer and H. J. Briegel.
Installation
You can install all of these tools via the Python package rl_opts, available on PyPI:

pip install rl-opts

Alternatively, clone the source repository and run the following from the parent folder into which you cloned it:

pip install -e rl_opts

This installs the library together with its dependencies.
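A quick way to check the installation is to import the package and the submodules described under Package structure below; this is a minimal sketch, assuming the submodules import without further setup:

```python
# Sanity check: the package and its submodules should import cleanly
import rl_opts
from rl_opts import rl_framework, learn_and_bench, imitation, analytics, utils

print("rl_opts installed at:", rl_opts.__file__)
```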
Tutorials
We have prepared a series of tutorials to guide you through the most important functionalities of the package. You can find them in the Tutorials folder of the GitHub repository or in the Tutorials tab of our webpage. The notebooks will help you navigate the package and reproduce the results of our paper via minimal examples. In particular, we provide three tutorials:
- Reinforcement learning: shows how to train an RL agent, based on Projective Simulation, to search for targets distributed at random in environments like the ones considered in our paper (a minimal sketch of such an agent follows this list).
- Imitation learning: shows how to train an RL agent to imitate an expert equipped with a pre-trained policy, based on the benchmark strategies common in the literature.
- Benchmarks: shows how to launch the various benchmark strategies against which the trained RL agents are compared.
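For orientation, here is a minimal, self-contained sketch of the kind of Projective Simulation agent the first tutorial trains. This is not the rl_opts API: the class name, the counter-based percept and the toy environment are hypothetical stand-ins, and the actual agents in rl_opts.rl_framework are more elaborate.

```python
import numpy as np

class PSAgent:
    """Minimal two-action Projective Simulation (PS) agent.

    Generic sketch, NOT the rl_opts implementation: percepts are a
    counter of steps walked since the last turn, and the actions are
    0 = continue straight, 1 = turn.
    """

    def __init__(self, num_percepts, num_actions=2, gamma=0.0, eta=0.1):
        self.h = np.ones((num_percepts, num_actions))  # h-matrix (edge weights)
        self.g = np.zeros_like(self.h)                 # glow matrix for delayed rewards
        self.gamma = gamma                             # forgetting rate
        self.eta = eta                                 # glow damping

    def act(self, percept):
        probs = self.h[percept] / self.h[percept].sum()  # policy = normalized h-values
        action = np.random.choice(len(probs), p=probs)
        self.g *= 1.0 - self.eta        # fade the glow on previously used edges
        self.g[percept, action] = 1.0   # tag the edge just used
        return action

    def learn(self, reward):
        # Relax h-values toward their initial value and reinforce glowing edges
        self.h += -self.gamma * (self.h - 1.0) + reward * self.g

# Toy training loop; the random "target found" signal is a stand-in
# for a real foraging environment.
rng = np.random.default_rng(0)
agent, counter = PSAgent(num_percepts=100), 0
for _ in range(10_000):
    action = agent.act(percept=min(counter, 99))
    counter = 0 if action == 1 else counter + 1   # a turn resets the counter
    agent.learn(reward=float(rng.random() < 0.001))
```

The key PS ingredients are all here: the h-matrix, whose normalized rows form the policy; the glow matrix, which tags recently used percept-action edges so that delayed rewards can still reinforce them; and the forgetting term, which relaxes h-values back toward their initial value.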
Package structure
The package contains a set of modules for:

- Reinforcement learning framework (rl_opts.rl_framework): builds the foraging environments as well as the RL agents moving in them.
- Learning and benchmarking (rl_opts.learn_and_bench): trains RL agents and benchmarks them against known foraging strategies.
- Imitation learning (rl_opts.imitation): trains RL agents in imitation schemes via foraging experts.
- Analytical functions (rl_opts.analytics): builds analytical step-length distributions and transforms them into foraging policies (see the sketch after this list).
- Utils (rl_opts.utils): helpers used throughout the package.
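To illustrate the analytics side, the sketch below generates a truncated power-law (Lévy-like) step-length distribution and converts it into a turning policy via the standard discrete hazard rate, pi(turn | n) = p(n) / P(L >= n). The function names are hypothetical and the exact transformation used in rl_opts.analytics may differ; this is just the textbook mapping under those assumptions.

```python
import numpy as np

def levy_step_lengths(alpha=2.0, l_max=1000):
    """Truncated discrete power law p(L = n) ∝ n^(-alpha), n = 1..l_max."""
    n = np.arange(1, l_max + 1, dtype=float)
    p = n ** (-alpha)
    return p / p.sum()

def policy_from_step_lengths(p):
    """Turning policy via the discrete hazard rate:
    pi(turn | n steps walked) = p(n) / P(L >= n)."""
    survival = np.cumsum(p[::-1])[::-1]  # P(L >= n)
    return p / survival

p = levy_step_lengths(alpha=2.0)
pi_turn = policy_from_step_lengths(p)
print(pi_turn[:5])  # turning probability after 1..5 straight steps
```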
Cite
We kindly ask you to cite our paper if any of this material was useful for your work. Here is the BibTeX entry:
@article{munoz2023optimal,
doi = {10.48550/ARXIV.2303.06050},
url = {https://arxiv.org/abs/2303.06050},
author = {Muñoz-Gil, Gorka and López-Incera, Andrea and Fiderer, Lukas J. and Briegel, Hans J.},
title = {Optimal foraging strategies can be learned and outperform Lévy walks},
publisher = {arXiv},
archivePrefix = {arXiv},
eprint = {2303.06050},
primaryClass = {cond-mat.stat-mech},
year = {2023},
}