ADARO-RL
A unified framework of adversarial attacks for robust deep reinforcement learning.
💾 Installation
Requires Python 3.10.

```shell
git clone git@github.com:IRT-SystemX/adaro-rl.git
pip install -e adaro-rl
```
📦 Content of the Library
```
adaro_rl/
├── attacks/
├── pipelines/
├── wrappers/
├── zoo/
└── __init__.py
```
- `src/adaro_rl/attacks/` contains the implementation of many adversarial attacks in a unified framework.
- `src/adaro_rl/pipelines/` contains scripts to train, test, attack, and adversarially train agents.
- `src/adaro_rl/wrappers/` contains wrappers for gymnasium environments that apply attacks from different perspectives (the agent or the adversary) and on different perturbation supports (observations or environment).
- `src/adaro_rl/zoo/` contains tools and functionality to run the scripts.
🛡️ Adversarial Attacks
Available Methods
Random Attacks
- `RandomUniformAttack` or `RUA`: generates perturbations with random uniform sampling.
- `RandomNormalAttack` or `RNA`: generates perturbations with random normal sampling.
- `RandomSignedAttack` or `RSA`: generates perturbations with random sampling between -1 and 1.
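The random attacks above can be sketched conceptually in a few lines of numpy. This is an illustrative stand-in, not the library's implementation; the function names here are hypothetical:

```python
import numpy as np

def random_uniform_perturbation(obs, eps, rng=None):
    """Conceptual RUA: sample each dimension uniformly in [-eps, eps]."""
    rng = rng if rng is not None else np.random.default_rng()
    return rng.uniform(-eps, eps, size=obs.shape)

def random_normal_perturbation(obs, eps, rng=None):
    """Conceptual RNA: sample from a normal distribution scaled by eps."""
    rng = rng if rng is not None else np.random.default_rng()
    return eps * rng.standard_normal(obs.shape)

def random_signed_perturbation(obs, eps, rng=None):
    """Conceptual RSA: sample uniformly between -1 and 1, then scale by eps."""
    rng = rng if rng is not None else np.random.default_rng()
    return eps * rng.uniform(-1.0, 1.0, size=obs.shape)
```

All three return a perturbation with the same shape as the observation, bounded per dimension by `eps`.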
Fast Gradient Method Based Attacks
- `FGM_D` for `FastGradientMethod_DiscreteAction`: generates perturbations with the fast gradient method to attack discrete-action policies.
- `FGM_C` for `FastGradientMethod_ContinuousAction`: generates perturbations with the fast gradient method to attack continuous-action policies.
- `FGM_V` for `FastGradientMethod_V_Critic`: generates perturbations with the fast gradient method to attack value critics.
- `FGM_QC` for `FastGradientMethod_Q_Critic`: generates perturbations with the fast gradient method to attack the Q critic of a Q actor-critic agent.
- `FGM_QAC` for `FastGradientMethod_Q_ActorCritic`: generates perturbations with the fast gradient method to attack both models of a Q actor-critic agent.
Fast Gradient Sign Method Based Attacks
- `FGSM_D` for the sign version of `FGM_D`.
- `FGSM_C` for the sign version of `FGM_C`.
- `FGSM_V` for the sign version of `FGM_V`.
- `FGSM_QC` for the sign version of `FGM_QC`.
- `FGSM_QAC` for the sign version of `FGM_QAC`.
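The difference between an FGM step and its FGSM sign version can be illustrated with a toy linear policy, where the gradient is analytic. This is a conceptual sketch under that assumption, not the library's code:

```python
import numpy as np

def fgm_perturbation(grad, eps, norm=2):
    """FGM-style step: move along the gradient, rescaled to length eps
    in the chosen norm."""
    if norm == np.inf:
        return eps * np.sign(grad)  # coincides with FGSM for the inf-norm
    g = np.linalg.norm(grad, ord=norm)
    return eps * grad / (g + 1e-12)

def fgsm_perturbation(grad, eps):
    """FGSM-style step: eps times the elementwise sign of the gradient."""
    return eps * np.sign(grad)

# Toy discrete policy: logits = W @ obs, so the gradient of the chosen
# action's logit w.r.t. the observation is just the row W[action].
# An untargeted attack pushes the observation against that gradient.
W = np.array([[1.0, -2.0], [0.5, 3.0]])
obs = np.array([0.2, -0.1])
action = int(np.argmax(W @ obs))
grad = -W[action]                       # decrease the chosen action's logit
delta = fgm_perturbation(grad, eps=0.01, norm=2)
adv_obs = obs + delta
```

The FGM perturbation has L2 norm exactly `eps`, while the FGSM one moves each dimension by `eps` in the gradient's sign direction.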
Parameters of the Attacks
The common parameters for all attacks are the following:
- `eps` defines the amount of perturbation applied by an attack.
- `norm` defines the norm of the perturbation delta. Options: 0, 1, 2, ..., inf.
- `max_eps` defines the maximum perturbation allowed on each dimension. It can be a scalar, applied identically to every dimension, or an array specifying a bound per dimension. By default, `max_eps` is set to the delta itself.
- `is_proportional_mask` is an array of booleans that defines, for each dimension, whether the perturbation is applied proportionally to the value of the support being perturbed. By default, it is None.
Other parameters for gradient attacks are the following:
- `target` defines the target followed by the attack. Options: 'untargeted', 'targeted', 'min', 'max', or 'target_fct'. Default is 'untargeted'.
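How a per-dimension bound and a proportional mask could interact can be sketched in numpy. This is a hedged illustration of the semantics described above; the `constrain_perturbation` helper is hypothetical and the library's actual logic may differ:

```python
import numpy as np

def constrain_perturbation(delta, obs, max_eps, prop_mask=None):
    """Clip a raw perturbation per dimension; where prop_mask is True,
    scale the bound by the magnitude of the value being perturbed."""
    bound = np.broadcast_to(np.asarray(max_eps, dtype=float), delta.shape).copy()
    if prop_mask is not None:
        bound = np.where(prop_mask, bound * np.abs(obs), bound)
    return np.clip(delta, -bound, bound)

obs = np.array([10.0, 0.5, -2.0])
delta = np.array([1.0, 1.0, 1.0])

# Scalar bound: every dimension is clipped to +/- 0.2.
constrain_perturbation(delta, obs, 0.2)

# Proportional on dimension 0: its bound becomes 0.2 * |10.0| = 2.0,
# so the raw perturbation of 1.0 passes through unclipped there.
mask = np.array([True, False, False])
constrain_perturbation(delta, obs, 0.2, mask)
```

An array-valued `max_eps` (e.g. `np.array([0.1, 0.2, 0.3])`) would bound each dimension independently in the same way.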
🛠️ Usage
Usage of Attacks Only
The first way to use the ADARO-RL library is to directly apply the attack within your infrastructure with your training and testing scripts.
```python
from adaro_rl.attacks import make_attack

attack = make_attack(attack_name, **attack_kwargs)
perturbation = attack.generate_perturbation(observation)
adv_obs = observation + perturbation
```
- Here, use one of the `attack_name`s listed in `adaro_rl.attacks.attack_names`.
- Add the `attack_kwargs` needed for the specified `attack_name`.
- Use an `observation` from your environment or dataset.
Usage with the Scripts
The second way to use the ADARO-RL library is to run the command-line application available in adaro_rl. It lets you train, test, attack, and adversarially train agents. To do so, choose a configuration for an existing use case provided in adaro_rl.zoo, add your own configurations to the zoo, or provide a new zoo module to use.
Command Lines
Here are some basic command-line examples:

```shell
adaro_rl train --config 'HalfCheetah-v5' --zoo 'adaro_rl.zoo'
adaro_rl test --config 'HalfCheetah-v5' --zoo 'adaro_rl.zoo'
adaro_rl online_attack --config 'HalfCheetah-v5' --zoo 'adaro_rl.zoo' --attack-name 'FGM_D' --eps 0.01 --norm "2"
```
The different pipelines that can be executed by the application are the following:
- train
- test
- online_attack
- adversarial_train
Use any given script with the --help option, or check the source code available in adaro-rl/pipelines to see how to use them.
Python
You can also use the scripts directly in Python. Here are some examples equivalent to the ones presented above in command lines:
```python
import adaro_rl
import adaro_rl.zoo as zoo

config = zoo.configs['HalfCheetah-v5']
adaro_rl.train(config=config)
adaro_rl.test(config=config)
adaro_rl.online_attack(config=config, attack_name='FGM_D', eps=0.01, norm=2)
```
Examples
Example scripts and Jupyter notebooks are available in the examples folder of the repository. You will find there:
- an attacks example notebook and `examples/attacks.py`: a Jupyter notebook and a Python script showing the usage of the available attacks.
- a pipelines example notebook and `examples/pipelines.sh`: a Jupyter notebook and a bash script showing the usage of the available high-level pipelines.
🛠️ Documentation
`docs/` contains the documentation of the library.
To build the HTML documentation:

```shell
cd docs
make html
```
The output will be in `docs/build/html/index.html`. Open this file in a browser to view the docs locally.