Deep Q-Network (DQN) for Atari Space Invaders

A PyTorch implementation of the Deep Q-Network (DQN) algorithm, trained to play Atari Space Invaders using the Arcade Learning Environment (ALE).

Features

  • Vanilla DQN and Dueling DQN architectures.
  • Double DQN support for improved stability.
  • Replay Buffer for experience replay.
  • Epsilon-greedy exploration with linear annealing.
  • TensorBoard integration for training visualisation.
  • Hugging Face Hub integration for model sharing.
  • Video recording of agent gameplay.
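The replay buffer listed above stores transitions and serves uniform random mini-batches for learning. A minimal sketch of the idea (the package's buffer.py may store stacked frames as tensors and differ in interface):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity):
        # Oldest transitions are evicted automatically once capacity is reached.
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks temporal correlations between updates.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```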

Requirements

Install via pip:

pip install dqn-ale-spaceinvaders

Or clone the repository and install dependencies:

git clone https://github.com/giansimone/dqn-ale-spaceinvaders.git
cd dqn-ale-spaceinvaders
poetry install

Project Structure

dqn-ale-spaceinvaders/
├── agent.py           # DQN agent implementation
├── buffer.py          # Experience replay buffer
├── config.yaml        # Agent configuration
├── environment.py     # Environment setup and wrappers
├── model.py           # Deep learning architectures
├── train.py           # Training script
├── enjoy.py           # Play with trained agent
├── export.py          # Export model to Hugging Face Hub
└── utils.py           # Utility functions

Quick Start

Training

Train a DQN agent with the default configuration:

python -m train

The training script will:

  • Create a timestamped run directory in runs/.
  • Save the configuration, checkpoints, and TensorBoard logs.
  • Periodically evaluate the agent and save the best model.

Configuration

Edit config.yaml to customize training parameters:

# Environment
env_id: ALE/SpaceInvaders-v5
frame_skip: 5
frame_stack: 4
resized_frame: 84

# Training
training_steps: 10000000
n_eval_episodes: 10

# Exploration
warmup_steps: 100000
epsilon_start: 1.0
epsilon_end: 0.1
anneal_steps: 1000000

# Replay Buffer
buffer_size: 200000
batch_size: 32

# Learning
gamma: 0.99
lr: 0.00025
update_every: 25000
target_update_every: 10000

# DQN Variants
double_dqn: False    # Enable Double DQN
dueling: False       # Enable Dueling DQN
clip_rewards: False  # Clip rewards to [-1, 1]
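To illustrate how the exploration parameters above interact, here is a sketch of a linear epsilon schedule using the defaults from config.yaml. The exact schedule in agent.py may differ (for example, in whether the warmup phase counts toward annealing):

```python
def linear_epsilon(step, warmup_steps=100_000, epsilon_start=1.0,
                   epsilon_end=0.1, anneal_steps=1_000_000):
    """Hold epsilon at epsilon_start during warmup, then anneal
    linearly to epsilon_end over anneal_steps (illustrative sketch)."""
    if step < warmup_steps:
        return epsilon_start
    frac = min(1.0, (step - warmup_steps) / anneal_steps)
    return epsilon_start + frac * (epsilon_end - epsilon_start)
```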

Monitoring Training

View training progress with TensorBoard:

tensorboard --logdir runs/dqn_YYYY-MM-DD_HHhMMmSSs/

Testing a Trained Agent

Watch your trained agent play:

python -m enjoy --artifact runs/dqn_YYYY-MM-DD_HHhMMmSSs/final_model.pt --num-episodes 5

Exporting to Hugging Face Hub

Share your trained model:

python -m export \
    --username YOUR_HF_USERNAME \
    --repo-name dqn-spaceinvaders \
    --artifact-path runs/dqn_YYYY-MM-DD_HHhMMmSSs/final_model.pt \
    --movie-fps 12

This will:

  • Create a repository on Hugging Face Hub.
  • Upload the model weights, configuration, and evaluation results.
  • Generate and upload a replay movie.
  • Create a model card with usage instructions.

Algorithm Details

DQN Architecture

The network consists of:

  • 3 convolutional layers for feature extraction.
  • 2 fully connected layers for Q-value estimation.
  • Input: 4 stacked 84×84 grayscale frames.
  • Output: Q-values for each action.
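A sketch of this network in PyTorch, using the classic layer sizes from the original DQN work; the exact channel counts and hidden width in model.py are an assumption:

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Illustrative vanilla DQN: 3 conv layers + 2 fully connected layers."""

    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # 84x84 input -> 20x20 -> 9x9 -> 7x7 spatial maps, so 64 * 7 * 7 features.
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        # Scale uint8 pixel values into [0, 1] before the conv stack.
        return self.head(self.features(x / 255.0))
```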

Dueling DQN Architecture

Separates state value and action advantages:

  • Shared convolutional backbone.
  • Value stream: estimates state value V(s).
  • Advantage stream: estimates action advantages A(s,a).
  • Q(s,a) = V(s) + (A(s,a) - mean(A(s,a))).
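The combination step above can be sketched as a dueling head on top of the flattened convolutional features (stream widths are illustrative, not necessarily those used in model.py):

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Illustrative dueling head: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a))."""

    def __init__(self, in_features, n_actions):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(in_features, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(
            nn.Linear(in_features, 512), nn.ReLU(), nn.Linear(512, n_actions))

    def forward(self, x):
        v = self.value(x)            # (batch, 1)
        a = self.advantage(x)        # (batch, n_actions)
        # Subtracting the mean advantage makes V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

Subtracting the mean advantage is the standard identifiability trick: without it, a constant could be shifted freely between V and A.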

Training Process

  1. Warmup: Random exploration for initial experiences.
  2. Epsilon Annealing: Gradual reduction from exploration to exploitation.
  3. Experience Replay: Sample random mini-batches from replay buffer.
  4. Target Network: Separate network updated periodically for stability.
  5. Double DQN (optional): Reduces overestimation by decoupling action selection and evaluation.

Advanced Usage

Loading a Trained Model

import torch
from pathlib import Path
from utils import load_artifact

# Load model
config, env, agent = load_artifact(
    Path("runs/dqn_YYYY-MM-DD_HHhMMmSSs/final_model.pt"),
    render_mode="human"
)

# Use the agent
state, _ = env.reset()
action = agent.act(state, epsilon=0.0)

License

This project is available under the MIT License.
