
FootsiesGym

Implementation of HiFight's Footsies game as a reinforcement learning environment. This environment serves as a benchmark for multi-agent reinforcement learning in a (relatively) complex two-player zero-sum game.

The environment is derived from the open-source Unity implementation, which has been augmented to run a gRPC server that can be controlled through a Python harness. Training is implemented using Ray's RLlib.

System Architecture

sequenceDiagram
    participant RLlib as Ray RLlib
    participant Env as FootsiesEnv
    participant gRPC as gRPC Client
    participant Server as Unity Game Server
    participant Game as Footsies Game

    Note over RLlib,Env: Python Environment
    Note over gRPC: Communication Layer
    Note over Server,Game: Unity Game

    RLlib->>Env: step(action)
    Env->>gRPC: SendAction(action)
    gRPC->>Server: gRPC Request
    Server->>Game: Update Game State
    Game->>Server: Game State
    Server->>gRPC: gRPC Response
    gRPC->>Env: Game State
    Env->>RLlib: (obs., rews., terms., truncs., infos)

    Note over RLlib,Game: Training Loop

The diagram above shows how the different components interact during training:

  1. RLlib sends actions to the FootsiesEnv
  2. The environment converts these actions into gRPC requests
  3. The Unity Game Server processes the actions and updates the game state
  4. The game state is sent back through gRPC to the environment
  5. The environment processes the observation and returns it to RLlib
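The loop above follows the standard multi-agent step contract: each `step` returns per-agent dictionaries of observations, rewards, termination flags, truncation flags, and infos. As an illustration, here is a minimal stub in place of the real gRPC-backed `FootsiesEnv` (the observation size and agent IDs here are placeholders, not the environment's actual spaces):

```python
import random


class StubFootsiesEnv:
    """Stand-in for FootsiesEnv: returns fixed-size observations instead
    of querying the Unity game server over gRPC."""

    AGENTS = ("p1", "p2")

    def reset(self):
        obs = {a: [0.0] * 4 for a in self.AGENTS}
        return obs, {a: {} for a in self.AGENTS}

    def step(self, actions):
        # In the real env, the action dict is serialized into a gRPC
        # request and the returned game state becomes the observation.
        obs = {a: [random.random() for _ in range(4)] for a in self.AGENTS}
        rewards = {a: 0.0 for a in self.AGENTS}
        terminateds = {a: False for a in self.AGENTS}
        terminateds["__all__"] = False
        truncateds = {a: False for a in self.AGENTS}
        truncateds["__all__"] = False
        infos = {a: {} for a in self.AGENTS}
        return obs, rewards, terminateds, truncateds, infos


env = StubFootsiesEnv()
obs, infos = env.reset()
obs, rews, terms, truncs, infos = env.step({"p1": 0, "p2": 0})
```

RLlib consumes exactly this five-tuple shape on the training side, with the special `"__all__"` key signaling episode-wide termination.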

Installation

conda create -n footsiesgym python=3.10
conda activate footsiesgym
pip install -r requirements.txt

On a Mac, you may need to ensure you have cmake installed. You can install it using Homebrew:

brew install cmake

Training

Game Servers

If you are on a Linux system, run setup.sh to unpack the binaries, then skip ahead to the training procedure. Otherwise, follow the steps below.

Before training, you'll need to launch the headless game servers. Scripts are provided in scripts/start_local_{mac, linux}_servers.sh, but you must first unpack the included binaries into the binaries/ directory (the launch scripts assume this location). Important! If you are launching game servers manually, be sure to set launch_binaries to False in the environment configuration.

./scripts/start_local_{mac, linux}_servers.sh <num-train-servers> <num-eval-servers>

The two arguments correspond to num_env_runners and evaluation_num_env_runners, which can be specified in the experiment configuration. You must launch a corresponding number of servers for each. If you are running local debugging (see below; python -m experiments.train --debug), just launch one of each. If you're launching a full experiment, you'll need to match the number specified in the experiment configuration (defaults to 40 training and 5 evaluation env runners).

The scripts will start:

  • Training servers from port 50051 (incrementing for each server)
  • Evaluation servers from port 40051 (incrementing for each server)

Importantly, each environment runner is mapped to a single port, which means you can only run one environment per environment runner.
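Under this one-port-per-runner scheme, the port each runner connects to is fully determined by its index and the base port. A small sketch of the arithmetic (the helper function name is illustrative, not part of the codebase; the base ports are those listed above):

```python
TRAIN_BASE_PORT = 50051
EVAL_BASE_PORT = 40051


def server_ports(num_train: int, num_eval: int):
    """Return the port lists the launch scripts would use: one port per
    environment runner, incrementing from each base port."""
    train = [TRAIN_BASE_PORT + i for i in range(num_train)]
    evaluation = [EVAL_BASE_PORT + i for i in range(num_eval)]
    return train, evaluation


# Default experiment configuration: 40 training and 5 evaluation runners.
train_ports, eval_ports = server_ports(40, 5)
```

With the defaults, training servers occupy ports 50051-50090 and evaluation servers 40051-40055, which is why the number of launched servers must match the runner counts in the experiment configuration.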

Training Configuration

The default training utilizes the APPO algorithm (see the corresponding IMPACT paper). We also utilize a vanilla LSTM network with parameters described in the respective experiment files.

Training can utilize either the new RLModule stack or old-stack in RLlib. Some functionality has yet to be implemented in the new stack (see open issues).

Old Stack

python -m experiments.train --experiment-name <experiment-name>

New Stack

python -m experiments.train_rlmodule --experiment-name <experiment-name>

Add the --debug flag to use only a single env runner (and single evaluation env runner) and local mode. This will enable breakpoint usage for local debugging.

Visualizing a Policy

To visualize gameplay:

  1. Unpack the windowed build binaries of your choice (Mac or Linux).

  2. Add the trained policy specification to the ModuleRepository in components/module_repository.py:

FootsiesModuleSpec(
    module_name="<policy-nickname>",
    experiment_name="<experiment-name>",
    trial_id="<trial-id>",  # specify if experiment has multiple trials
    checkpoint_number=-1,  # -1 for latest, otherwise specify checkpoint number
)
  3. Run the game with:
./footsies_linux_windowed_021725 --port 80051
  4. Configure policies in scripts/local_inference.py using the MODULES variable. Set "p1" to "human" to play against the AI (requires installing pygame).
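The checkpoint_number=-1 convention above resolves to the most recent checkpoint of the experiment. A sketch of how such a lookup could behave (the helper function is an assumption for illustration, not the actual ModuleRepository API):

```python
def resolve_checkpoint(checkpoint_numbers, requested: int) -> int:
    """Pick a checkpoint: -1 selects the latest available number,
    otherwise the request must match an existing checkpoint."""
    if not checkpoint_numbers:
        raise ValueError("no checkpoints found for this trial")
    if requested == -1:
        return max(checkpoint_numbers)
    if requested not in checkpoint_numbers:
        raise ValueError(f"checkpoint {requested} does not exist")
    return requested


latest = resolve_checkpoint([10, 20, 30], -1)   # -> 30
pinned = resolve_checkpoint([10, 20, 30], 20)   # -> 20
```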

Project Architecture

Core Components

  • Environment (footsies/): The main game environment implementation that interfaces with the Unity game through gRPC.
  • Models (models/): Neural network architectures for the RL agents
  • Experiments (experiments/): Training configurations and experiment management
  • Callbacks (callbacks/): Custom RLlib callbacks for monitoring and evaluation
  • Components (components/): Reusable components like the module repository for policy management
  • Utils (utils/): Utility functions and helper classes
  • Scripts (scripts/): Helper scripts for server management and visualization

Key Features

  • Multi-agent reinforcement learning environment
  • gRPC-based communication with Unity game server
  • Support for both headless and windowed game modes
  • Integration with Ray RLlib for distributed training
  • Custom LSTM-based policy networks
  • Support for self-play training
  • Evaluation against baseline policies (random, noop, back)
  • Wandb integration for experiment tracking
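The baseline opponents named above (random, noop, back) are expressible as fixed policies that ignore learning entirely. A sketch under an assumed action encoding (the real environment defines its own action space; the constants below are hypothetical):

```python
import random

# Hypothetical discrete action encoding -- the real env defines its own.
NOOP, BACK, FORWARD, ATTACK = 0, 1, 2, 3
ACTIONS = (NOOP, BACK, FORWARD, ATTACK)


def noop_policy(obs):
    """Baseline that never acts."""
    return NOOP


def back_policy(obs):
    """Baseline that always retreats."""
    return BACK


def random_policy(obs, rng=random.Random(0)):
    """Baseline that samples uniformly over the action space."""
    return rng.choice(ACTIONS)


sampled = [random_policy(None) for _ in range(5)]
```

Evaluating a learned policy against such fixed opponents gives a cheap sanity check that self-play training is producing nontrivial behavior.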

Development

gRPC / Protobuf Updates

If updating the proto definitions:

  1. Generate C# files (Windows):
.\protoc\bin\protoc.exe --csharp_out=.\env\game\proto\ --grpc_out=.\env\game\proto\ --plugin=protoc-gen-grpc=.\plugins\grpc_csharp_plugin.exe .\env\game\proto\footsies_service.proto
  2. Generate Python files:
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. .\env\game\proto\footsies_service.proto

Project Structure

FootsiesGym/
├── binaries/           # Game server binaries
├── callbacks/          # RLlib callbacks
├── components/         # Reusable components
├── experiments/        # Training configurations
├── footsies/           # Core environment
├── models/             # Neural network architectures
├── protoc/             # Protocol buffer tools
├── scripts/            # Helper scripts
├── testing/            # Test files
└── utils/              # Utility functions

Contributing

  1. Install pre-commit hooks to maintain code quality
  2. Follow the existing code style and architecture
  3. Add tests for new features
  4. Update documentation as needed

License

This project is based on the open-source Footsies game by HiFight. Please refer to the original game's license for more information.

