
Deep Q-Network (DQN) for Atari Space Invaders

[Animated GIF: trained agent playing Space Invaders]

A PyTorch implementation of a Deep Q-Network (DQN) agent trained to play Atari Space Invaders using the Arcade Learning Environment (ALE).

Features

  • Vanilla DQN and Dueling DQN architectures.
  • Double DQN support for improved stability.
  • Replay Buffer for experience replay.
  • Epsilon-greedy exploration with linearly annealed epsilon.
  • TensorBoard integration for training visualisation.
  • Hugging Face Hub integration for model sharing.
  • Video recording of agent gameplay.
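The experience replay feature stores past transitions and samples them uniformly for training, which decorrelates consecutive updates. A minimal sketch of the idea (not the package's actual `buffer.py`; the class and method names here are illustrative):

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size buffer that stores transitions and samples uniform mini-batches."""

    def __init__(self, capacity):
        # Oldest transitions are evicted automatically once capacity is reached.
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation of experiences.
        batch = random.sample(self.memory, batch_size)
        return tuple(zip(*batch))  # (states, actions, rewards, next_states, dones)

    def __len__(self):
        return len(self.memory)
```

In the actual agent, the sampled tuples would be converted to tensors before computing the TD loss.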

Installation

You can install the package from PyPI or clone the repository and install the required dependencies using Poetry or pip. This project requires Python 3.13.

PyPI

pip install dqn-ale-spaceinvaders

Source

Using Poetry (Recommended)

# 1. Clone the repository
git clone https://github.com/giansimone/dqn-ale-spaceinvaders.git
cd dqn-ale-spaceinvaders

# 2. Initialize environment and install dependencies
poetry env use python3.13
poetry install

# 3. Activate the virtual environment
eval $(poetry env activate)

Using pip

# 1. Clone the repository
git clone https://github.com/giansimone/dqn-ale-spaceinvaders.git
cd dqn-ale-spaceinvaders

# 2. Create and activate a virtual environment
python3.13 -m venv venv
source venv/bin/activate

# 3. Install package in editable mode
pip install -e .

Project Structure

dqn-ale-spaceinvaders/
├── dqn_ale_spaceinvaders/
│    ├── agent.py           # DQN agent implementation
│    ├── buffer.py          # Experience replay buffer
│    ├── config.yaml        # Agent configuration
│    ├── environment.py     # Environment setup and wrappers
│    ├── model.py           # Deep learning architectures
│    ├── train.py           # Training script
│    ├── enjoy.py           # Play with trained agent
│    ├── export.py          # Export model to Hugging Face Hub
│    └── utils.py           # Utility functions
├── .gitignore
├── LICENSE
├── README.md
└── pyproject.toml

Usage

Training

Train a DQN agent with the default configuration.

python -m dqn_ale_spaceinvaders.train

The training script will:

  • Create a timestamped run directory in runs/.
  • Save the configuration, checkpoints, and TensorBoard logs.
  • Periodically evaluate the agent and save the best model.

Configuration

Edit config.yaml to customize training parameters.

# Environment
env_id: ALE/SpaceInvaders-v5
frame_skip: 5
frame_stack: 4
resized_frame: 84

# Training
training_steps: 10000000
n_eval_episodes: 10

# Exploration
warmup_steps: 100000
epsilon_start: 1.0
epsilon_end: 0.1
anneal_steps: 1000000

# Replay Buffer
buffer_size: 200000
batch_size: 32

# Learning
gamma: 0.99
lr: 0.00025
update_every: 25000
target_update_every: 10000

# DQN Variants
double_dqn: False    # Enable Double DQN
dueling: False       # Enable Dueling DQN
clip_rewards: False  # Clip rewards to [-1, 1]
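The exploration keys above (`epsilon_start`, `epsilon_end`, `anneal_steps`) define a linear schedule: epsilon decays from `epsilon_start` to `epsilon_end` over `anneal_steps` steps and stays flat afterwards. A sketch of that schedule (the function name is illustrative, not from the package):

```python
def epsilon_at(step, epsilon_start=1.0, epsilon_end=0.1, anneal_steps=1_000_000):
    """Linearly anneal epsilon from epsilon_start to epsilon_end over anneal_steps."""
    fraction = min(step / anneal_steps, 1.0)  # clamp so epsilon never undershoots epsilon_end
    return epsilon_start + fraction * (epsilon_end - epsilon_start)
```

With the defaults above, epsilon is 1.0 at step 0, 0.55 halfway through annealing, and 0.1 from step 1,000,000 onward.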

Monitoring Training

View training progress with TensorBoard:

tensorboard --logdir runs/dqn_YYYY-MM-DD_HHhMMmSSs/

Testing a Trained Agent

Watch your trained agent play:

python -m dqn_ale_spaceinvaders.enjoy --artifact runs/dqn_YYYY-MM-DD_HHhMMmSSs/final_model.pt --num-episodes 5

Exporting to Hugging Face Hub

Share your trained model:

python -m dqn_ale_spaceinvaders.export \
    --username YOUR_HF_USERNAME \
    --repo-name dqn-spaceinvaders \
    --artifact-path runs/dqn_YYYY-MM-DD_HHhMMmSSs/final_model.pt \
    --movie-fps 12

This will:

  • Create a repository on Hugging Face Hub.
  • Upload the model weights, configuration, and evaluation results.
  • Generate and upload a replay movie.
  • Create a model card with usage instructions.

Algorithm Details

DQN Architecture

The network consists of:

  • 3 convolutional layers for feature extraction.
  • 2 fully connected layers for Q-value estimation.
  • Input: 4 stacked 84×84 grayscale frames.
  • Output: Q-values for each action.
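This layout matches the classic Nature-DQN convnet. A self-contained PyTorch sketch of the shape of such a network (layer sizes follow the original DQN paper; the package's `model.py` may differ in details):

```python
import torch
import torch.nn as nn


class DQN(nn.Module):
    """3 conv layers for feature extraction + 2 fully connected layers for Q-values."""

    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),  # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),           # 20x20 -> 9x9
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),           # 9x9 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per discrete action
        )

    def forward(self, x):
        # Scale uint8 pixel values into [0, 1] before the convolutions.
        return self.head(self.features(x / 255.0))
```

A batch of shape `(B, 4, 84, 84)` produces Q-values of shape `(B, n_actions)`.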

Dueling DQN Architecture

Separates state value and action advantages:

  • Shared convolutional backbone.
  • Value stream: estimates state value V(s).
  • Advantage stream: estimates action advantages A(s,a).
  • Q(s,a) = V(s) + (A(s,a) - mean(A(s,a))).
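The aggregation in the last bullet can be sketched as a head module over the shared convolutional features. This is an illustrative implementation of the dueling formula, not necessarily how `model.py` structures it:

```python
import torch
import torch.nn as nn


class DuelingHead(nn.Module):
    """Dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a))."""

    def __init__(self, feature_dim, n_actions):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(feature_dim, 512), nn.ReLU(), nn.Linear(512, 1)
        )
        self.advantage = nn.Sequential(
            nn.Linear(feature_dim, 512), nn.ReLU(), nn.Linear(512, n_actions)
        )

    def forward(self, features):
        v = self.value(features)      # (batch, 1): state value V(s)
        a = self.advantage(features)  # (batch, n_actions): advantages A(s,a)
        # Subtracting the mean advantage makes the decomposition identifiable;
        # V(s) then broadcasts across the action dimension.
        return v + (a - a.mean(dim=1, keepdim=True))
```

Subtracting the mean advantage is what makes V and A separately identifiable; without it, a constant could shift freely between the two streams.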

Training Process

  1. Warmup: Random exploration for initial experiences.
  2. Epsilon Annealing: Gradual reduction from exploration to exploitation.
  3. Experience Replay: Sample random mini-batches from replay buffer.
  4. Target Network: Separate network updated periodically for stability.
  5. Double DQN (optional): Reduces overestimation by decoupling action selection and evaluation.
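Step 5 can be made concrete: in Double DQN the online network selects the next action and the target network evaluates it, which reduces the upward bias of taking a max over noisy estimates. A hedged sketch of the target computation (function and argument names are illustrative):

```python
import torch


def double_dqn_targets(rewards, next_states, dones, gamma, online_net, target_net):
    """Double DQN targets: online net picks next actions, target net evaluates them."""
    with torch.no_grad():
        # Action selection with the online network (argmax over Q-values).
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the (periodically updated) target network.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # Terminal transitions (done=1) bootstrap nothing beyond the reward.
        return rewards + gamma * next_q * (1.0 - dones)
```

Vanilla DQN would instead take `target_net(next_states).max(dim=1)` directly, coupling selection and evaluation in the same (possibly overestimating) network.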

License

This project is available under the MIT License.
