A fully configurable Gymnasium compatible Tetris environment
Tetris Gymnasium
Tetris Gymnasium is a state-of-the-art, modular Reinforcement Learning (RL) environment for Tetris, tightly integrated with the Farama Foundation's Gymnasium.
Quick Start
Getting started with Tetris Gymnasium is straightforward. Here's an example that runs an environment with random actions:

```python
import cv2
import gymnasium as gym

from tetris_gymnasium.envs.tetris import Tetris

if __name__ == "__main__":
    env = gym.make("tetris_gymnasium/Tetris", render_mode="human")
    env.reset(seed=42)

    terminated = False
    truncated = False
    while not (terminated or truncated):
        env.render()
        action = env.action_space.sample()  # sample a random action
        observation, reward, terminated, truncated, info = env.step(action)
        key = cv2.waitKey(100)  # timeout to see the movement
    print("Game Over!")
```
For more examples, e.g. training a DQN agent, please refer to the examples directory.
Installation
Tetris Gymnasium can be installed via pip:

```bash
pip install tetris-gymnasium
```
Why Tetris Gymnasium?
While significant progress has been made in RL for many Atari games, Tetris remains a challenging problem for AI, similar to games like Pitfall. Its combination of NP-hard complexity, stochastic elements, and the need for long-term planning makes it a persistent open problem in RL research. Tetris's intuitive gameplay and relatively modest computational requirements position it as a potentially useful environment for developing and evaluating RL approaches in a demanding setting.
Tetris Gymnasium aims to provide researchers and developers with a tool to address this challenge:
- Modularity: The environment's architecture allows for customization and extension, facilitating exploration of various RL techniques.
- Clarity: Comprehensive documentation and a structured codebase are designed to enhance accessibility and support experimentation.
- Adjustability: Configuration options enable researchers to focus on specific aspects of the Tetris challenge as needed.
- Up-to-date: Built on the current Gymnasium framework, the environment is compatible with contemporary RL algorithms and tools.
- Feature-rich: Includes game-specific features that are sometimes absent in other Tetris environments, aiming to provide a more comprehensive representation of the game's challenges.
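As an illustration of the adjustability described above, configuration options can be passed when the environment is created. This is a minimal sketch only; the keyword names `width` and `height` are assumptions and should be checked against the project documentation.

```python
import gymnasium as gym

from tetris_gymnasium.envs.tetris import Tetris

# Hypothetical configuration sketch: a narrower, shorter board than the
# standard 10x20. The `width`/`height` keyword names are assumptions --
# consult the documentation for the actual configuration interface.
env = gym.make(
    "tetris_gymnasium/Tetris",
    render_mode="ansi",
    width=6,
    height=14,
)
observation, info = env.reset(seed=42)
```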
These attributes make Tetris Gymnasium a potentially useful resource for both educational purposes and RL research. By providing a standardized yet adaptable platform for approaching one of RL's ongoing challenges, Tetris Gymnasium may contribute to further exploration and development in Tetris RL.
Documentation
For detailed information on using and customizing Tetris Gymnasium, please refer to our full documentation.
Background
Tetris Gymnasium addresses the limitations of existing Tetris environments by offering a modular, understandable, and adjustable platform. Our paper, "Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris," provides an in-depth look at the motivations and design principles behind this project.
Abstract:
The game of Tetris is an open challenge in machine learning and especially Reinforcement Learning (RL). Despite its popularity, contemporary environments for the game lack key qualities, such as clear documentation, an up-to-date codebase, or game-related features. This work introduces Tetris Gymnasium, a modern RL environment built with Gymnasium, that aims to address these problems by being modular, understandable, and adjustable. To evaluate Tetris Gymnasium on these qualities, a Deep Q Learning agent was trained and compared to a baseline environment, and it was found that it fulfills all requirements of a feature-complete RL environment while being adjustable to many different requirements. The source code and documentation are available on GitHub and can be used for free under the MIT license.
Read the full paper: Preprint on EasyChair
Citation
If you use Tetris Gymnasium in your research, please cite our work:
```bibtex
@booklet{EasyChair:13437,
  author       = {Maximilian Weichart and Philipp Hartl},
  title        = {Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris},
  howpublished = {EasyChair Preprint 13437},
  year         = {2024}}
```
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgements
We extend our gratitude to the creators and maintainers of Gymnasium, CleanRL, and Tetris-deep-Q-learning-pytorch for providing powerful frameworks and reference implementations that have contributed to the development of Tetris Gymnasium.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file tetris_gymnasium-0.2.1.tar.gz
File metadata
- Download URL: tetris_gymnasium-0.2.1.tar.gz
- Upload date:
- Size: 21.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.12.1 Linux/6.5.0-1025-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 5243a2cb38ed7cb41356b717a001646d431ed5aadf134dff05e7a4197bce619e
MD5 | 8135e2723c93643aa9b6789426dfa835
BLAKE2b-256 | f783e8821db07af58b6de61644f97b5ec4065c2843c4f7cec31e06da30dafc21
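The published digests can be checked locally after downloading a distribution file. A minimal sketch using only Python's standard library (the filename below is taken from the listing above):

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare the result against the SHA256 digest published on PyPI, e.g.:
# sha256_of_file("tetris_gymnasium-0.2.1.tar.gz") == "5243a2cb..."
```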
File details
Details for the file tetris_gymnasium-0.2.1-py3-none-any.whl
File metadata
- Download URL: tetris_gymnasium-0.2.1-py3-none-any.whl
- Upload date:
- Size: 24.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.12.1 Linux/6.5.0-1025-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | e7a154c58cdc14abd02b3f97dbf2e3806c362d12bbfc6e18c078b7bad55c0dc6
MD5 | 21805a741a3652bef37c31ecc9f1e92a
BLAKE2b-256 | 45c90b9314c4b2535c4dfb8eb02c6f2affd239ef4ce4937745eae1a9583fa29d