
Project description

Gym-SimplifiedTetris

Report Bug · Request Feature · Suggestions


🟥 Simplified Tetris environments compliant with OpenAI Gym's API

Gym-SimplifiedTetris is a pip-installable package that creates simplified Tetris environments compliant with OpenAI Gym's API. Gym's API is the de facto standard for developing and comparing reinforcement learning algorithms.

The package currently provides three agents and 64 environments. The environments are simplified because the player selects the piece's rotation and column before the piece falls vertically; most previous approaches to the game of Tetris use this simplified setting.
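As a rough sketch (not the package's actual implementation), such a simplified action space can be thought of as an enumeration of (rotation, column) pairs. For example, with a 10-column grid and a piece with four distinct rotations:

```python
from itertools import product


def enumerate_actions(num_columns: int, num_rotations: int):
    """Enumerate every (rotation, column) pair as a flat list of actions.

    Illustrative only: the real environments may prune pairs that are
    invalid for a given piece (e.g. columns where a rotated piece would
    overhang the grid).
    """
    return list(product(range(num_rotations), range(num_columns)))


actions = enumerate_actions(num_columns=10, num_rotations=4)
print(len(actions))  # 40 candidate actions before any pruning
```

Picking one of these pairs, rather than steering a falling piece frame by frame, is what makes the setting "simplified".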


1. Installation

The package is pip installable:

pip install gym-simplifiedtetris

Alternatively, fork the repository and clone your fork using:

git clone https://github.com/<YOUR-USERNAME>/gym-simplifiedtetris

Then install the dependencies using pip:

cd gym-simplifiedtetris
pip install -r requirements.txt

2. Usage

The file examples/envs.py shows two examples of using an instance of the simplifiedtetris-binary-20x10-4-v0 environment to play ten games. You can create an environment using gym.make, supplying the environment's ID as an argument:

import gym
import gym_simplifiedtetris

env = gym.make("simplifiedtetris-binary-20x10-4-v0")
obs = env.reset()

# Run 10 games of Tetris, selecting actions uniformly at random.
episode_num = 0
while episode_num < 10:
    env.render()
    
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)

    if done:
        print(f"Episode {episode_num + 1} has terminated.")
        episode_num += 1
        obs = env.reset()

env.close()

Alternatively, you can import the environment directly:

from gym_simplifiedtetris import SimplifiedTetrisBinaryEnv as Tetris

env = Tetris(grid_dims=(20, 10), piece_size=4)

3. Future work

  • Normalise the observation spaces.
  • Implement an action space that only permits the agent to take non-terminal actions.
  • Implement more shaping rewards: potential-style, potential-based, dynamic potential-based, and non-potential. Optimise their weights using an optimisation algorithm.
  • Write end-to-end and integration tests using pytest.
  • Perform mutation and property-based testing using mutmut and Hypothesis.
  • Use Coverage.py to increase code coverage.
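For context, the potential-based shaping mentioned above adds F(s, s') = γΦ(s') − Φ(s) to the environment reward, where Φ is a potential function over states. Below is a minimal sketch, assuming a hypothetical potential equal to the negated number of holes in the grid (the function names and the choice of potential are illustrative, not part of this package):

```python
def shaped_reward(env_reward: float, phi_s: float, phi_s_next: float,
                  gamma: float = 0.99) -> float:
    """Potential-based shaping: add gamma * phi(s') - phi(s) to the reward.

    Shaping of this form leaves the optimal policy of the original MDP
    unchanged (Ng, Harada & Russell, 1999).
    """
    return env_reward + gamma * phi_s_next - phi_s


def potential(num_holes: int) -> float:
    # Hypothetical potential: fewer holes in the grid => higher potential.
    return -float(num_holes)


# A transition that fills one hole (3 holes -> 2 holes) earns a bonus:
r = shaped_reward(env_reward=1.0, phi_s=potential(3), phi_s_next=potential(2))
print(round(r, 2))  # 2.02
```

The weights mentioned in the bullet point would scale terms like Φ before they enter this sum.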

4. Acknowledgements

This package utilises several methods from the codebases developed by andreanlay (2020) and by Benjscho (2021).

Project details


Download files

Download the file for your platform.

Source Distribution

gym_simplifiedtetris_AVELA-0.2.1.tar.gz (23.9 kB)

Uploaded Source

Built Distribution

gym_simplifiedtetris_AVELA-0.2.1-py3-none-any.whl (31.0 kB)

Uploaded Python 3

File details

Details for the file gym_simplifiedtetris_AVELA-0.2.1.tar.gz.

File metadata

File hashes

Hashes for gym_simplifiedtetris_AVELA-0.2.1.tar.gz
SHA256: bf90b9bae180322312f2a7e0125650aa4fa5e12d89c8aecde06ea2ba98b2f428
MD5: 7f5ee84c4575fed0abc1d86cfe9b9e1a
BLAKE2b-256: 5ea17468f2c456009461cdcd17ad27e72fa6b323b0bddded1a1d83ed5eba37d0

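To verify a downloaded archive against a published digest, you can use sha256sum. The snippet below demonstrates the pattern on a stand-in file; for the real check, substitute the archive name and the SHA256 value from the table above:

```shell
# Stand-in demonstration: compute a SHA256 digest and verify it with --check.
printf 'example' > demo.tar.gz
DIGEST=$(sha256sum demo.tar.gz | cut -d' ' -f1)
echo "${DIGEST}  demo.tar.gz" | sha256sum --check
```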

File details

Details for the file gym_simplifiedtetris_AVELA-0.2.1-py3-none-any.whl.

File metadata

File hashes

Hashes for gym_simplifiedtetris_AVELA-0.2.1-py3-none-any.whl
SHA256: 6435ecd9e5d1c5db1da64d758f0abd3c140494b778fc09ceaa89ba474e46094b
MD5: bbade6117ac09213db14937c5d6898ce
BLAKE2b-256: fd253ce70b96864564a5fbe59b268e7c6640a2cfe132668c8fcc04797854ec57

