

Project description


Opioid RL

Join our Discord · Subscribe on YouTube · Connect on LinkedIn · Follow on X.com

OpioidRL is a cutting-edge reinforcement learning (RL) library that simulates drug addiction behaviors within RL agents. Inspired by the addictive properties of drugs like methamphetamine and crack cocaine, OpioidRL offers a unique environment where agents experience reward dependency, high-risk decision-making, and compulsive behaviors — pushing RL research into new and provocative territories.

Features

  • Meth Simulation: Models the erratic and compulsive high-risk behaviors typically seen in methamphetamine addiction.
  • Crack Simulation: Models the short-term, intense craving for rewards, leading to aggressive reward-seeking behaviors.
  • Customizable Reward Loops: Easily adjust the reinforcement pathways to mimic varying levels of addiction, from mild dependency to extreme compulsion.
  • Addiction Dynamics: Introduces tolerance, withdrawal, and relapse phenomena, simulating real-world addiction cycles (see the wrapper sketch after this list).
  • Compatible with Any RL Framework: Easily integrate OpioidRL with popular RL frameworks like PyTorch, TensorFlow, and Stable Baselines3.
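
The snippet below is a minimal sketch of how such dynamics could be layered onto any Gym-style reward signal. It is illustrative only and not OpioidRL's actual implementation: the wrapper class and its internals are assumptions, although the parameter names mirror the configuration options documented further down.

import gymnasium as gym
import numpy as np

class AddictionRewardWrapper(gym.Wrapper):
    """Illustrative only: tolerance, withdrawal, and relapse applied to any env's reward."""

    def __init__(self, env, tolerance_increase_rate=0.01,
                 withdrawal_penalty=5.0, relapse_probability=0.05):
        super().__init__(env)
        self.tolerance_increase_rate = tolerance_increase_rate
        self.withdrawal_penalty = withdrawal_penalty
        self.relapse_probability = relapse_probability
        self.tolerance = 0.0

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)

        # Tolerance dampens the subjective value of each new reward.
        shaped = reward / (1.0 + self.tolerance)

        if reward > 0:
            # Rewarding steps build tolerance, so the same reward feels smaller over time.
            self.tolerance += self.tolerance_increase_rate
        else:
            # Steps without reward incur a withdrawal penalty.
            shaped -= self.withdrawal_penalty

        # With some probability the cycle "relapses": tolerance resets, rewards
        # feel strong again, and the dependency loop restarts.
        if np.random.rand() < self.relapse_probability:
            self.tolerance = 0.0

        return obs, shaped, terminated, truncated, info

Because the shaping lives in a wrapper rather than in the agent, the same dynamics can sit in front of PyTorch, TensorFlow, or Stable Baselines3 training loops unchanged.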

Installation

You can install OpioidRL using pip:

pip install opioidrl

Quick Start

Below is a simple example of how to integrate OpioidRL into your RL pipeline.

import opioidrl
from stable_baselines3 import PPO

# Create a Crack environment
env = opioidrl.make('Crack-v0')

# Train the agent using PPO
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)

# Test the trained agent
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
    if dones:
        # Start a new episode once the current one ends
        obs = env.reset()
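
If you want a quantitative check in addition to the render loop above, Stable Baselines3's built-in evaluation helper works with any Gym-compatible environment (this assumes Crack-v0 follows the standard Gym API):

from stable_baselines3.common.evaluation import evaluate_policy

# Average episodic reward over 10 evaluation episodes
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")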

Example: Meth Environment

import opioidrl
from stable_baselines3 import A2C

# Create a Meth environment
env = opioidrl.make('Meth-v0')

# Train the agent using A2C
model = A2C('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)

# Evaluate agent behavior
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
    if dones:
        # Reset once the episode terminates
        obs = env.reset()

Available Environments

OpioidRL currently offers two environments simulating different types of addiction (a quick inspection snippet follows the list):

  1. Crack-v0: Fast and intense; simulates the short-term, high-risk reward-seeking behavior common in crack cocaine addiction.
  2. Meth-v0: Simulates more sustained compulsive behavior, with agents showing increasing tolerance and a willingness to take extreme actions for delayed rewards.
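
A quick way to inspect both (assuming they expose the standard Gym observation_space and action_space attributes):

import opioidrl

# Instantiate each registered environment and print its spaces
# (assumes Gym-compatible observation_space / action_space attributes)
for env_id in ('Crack-v0', 'Meth-v0'):
    env = opioidrl.make(env_id)
    print(env_id, env.observation_space, env.action_space)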

Environment Customization

You can modify the parameters of each environment to simulate different levels of addiction severity (a preset example follows the option list below):

env = opioidrl.make('Meth-v0', tolerance_increase_rate=0.01, withdrawal_penalty=5)

Configuration Options

  • tolerance_increase_rate: How fast the agent builds tolerance to rewards.
  • withdrawal_penalty: The penalty imposed when the agent doesn't receive its expected reward.
  • relapse_probability: The probability that an agent will fall back into compulsive behaviors after overcoming addiction.
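
As a rough illustration, severity presets can be built from just these options and passed straight to opioidrl.make; the numeric values below are placeholders, not recommended settings:

import opioidrl

# Hypothetical severity presets built from the documented options;
# the specific values are illustrative only
severity_presets = {
    'mild':   dict(tolerance_increase_rate=0.001, withdrawal_penalty=1, relapse_probability=0.01),
    'severe': dict(tolerance_increase_rate=0.05, withdrawal_penalty=20, relapse_probability=0.25),
}

envs = {name: opioidrl.make('Meth-v0', **params)
        for name, params in severity_presets.items()}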

Roadmap

  • Opioid-v0: A new environment simulating opioid addiction with prolonged reward dependency and extreme withdrawal effects.
  • Alcohol-v0: An environment simulating long-term, mild addiction behaviors with subtle but persistent effects on decision-making.
  • Nicotine-v0: Simulating the reward-seeking behavior tied to nicotine addiction, with frequent, small rewards.

Contributing

Contributions are welcome! If you have ideas for new environments or features, feel free to submit a pull request or open an issue.

Steps to Contribute:

  1. Fork this repository.
  2. Create a new branch: git checkout -b feature-name
  3. Commit your changes: git commit -m 'Add new feature'
  4. Push to the branch: git push origin feature-name
  5. Submit a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer

OpioidRL is a research tool designed for educational and experimental purposes. The behaviors simulated within this library are based on abstract models of addiction and are not intended to trivialize or promote drug addiction in any form. Addiction is a serious issue, and if you or someone you know is struggling with addiction, please seek professional help.

Made with ❤️ by the OpioidRL team.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

opioidrl-0.0.2.tar.gz (8.3 kB)

Uploaded Source

Built Distribution

opioidrl-0.0.2-py3-none-any.whl (9.4 kB)

Uploaded Python 3

File details

Details for the file opioidrl-0.0.2.tar.gz.

File metadata

  • Download URL: opioidrl-0.0.2.tar.gz
  • Upload date:
  • Size: 8.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.3 Darwin/23.3.0

File hashes

Hashes for opioidrl-0.0.2.tar.gz

  • SHA256: e2f98f0eb46231953ac7bc77083d40286770921222f5a8e65293f13375369456
  • MD5: d5c6a421de082fd2a1509239f2a9e036
  • BLAKE2b-256: 71f966d0594c69841cc8b6039de77137e80c149873c2b39c1bcc070025a3b850

See more details on using hashes here.

File details

Details for the file opioidrl-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: opioidrl-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 9.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.3 Darwin/23.3.0

File hashes

Hashes for opioidrl-0.0.2-py3-none-any.whl

  • SHA256: 682a9557b685c8f9af18fcde61fcb2796aeb6c67076e6aa42d22c109186b4b04
  • MD5: 6300a063258af31848bb32a1b4532bef
  • BLAKE2b-256: 8fbf0a05f0e3cf1bb5b5c67b717da3ae802488c87591bca91d03c6a5d4ed9eb5

See more details on using hashes here.
