
Gym Environment for 3D Tic Tac Toe

Project description

gym-tic-tac-toe3D


OpenAI Gym environment for two-player 3D Tic-Tac-Toe. GitHub link: https://github.com/OUStudent/gym_tic_tac_toe3D

Requirements


  • gym
  • NumPy
  • Matplotlib

Install


pip install gym-tic-tac-toe3D

How it Works

Tic Tac Toe is usually played on a 3x3 grid, where the objective is to line up three of your tokens in a straight line. On a single grid the game is trivial; the difficulty can be increased by stacking three 3x3 layers into a 3x3x3 cube. The objective is now to line up three tokens along any straight line through the cube.
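The 27 cells of the cube are addressed with a single flat index from 0 to 26. As a point of reference, here is a minimal sketch of one possible mapping between a flat index and (layer, row, column) coordinates, assuming cells are numbered layer by layer and row-major within each 3x3 layer; the environment's exact ordering is an assumption here:

# Hypothetical index mapping: cells 0-8 in layer 0, 9-17 in layer 1, 18-26 in layer 2,
# row-major within each 3x3 layer. This mirrors the layout described above but is
# an assumption, not taken from the environment's source.
def index_to_coords(index):
    layer, rest = divmod(index, 9)
    row, col = divmod(rest, 3)
    return layer, row, col

def coords_to_index(layer, row, col):
    return layer * 9 + row * 3 + col

print(index_to_coords(14))       # (1, 1, 2): middle layer, middle row, last column
print(coords_to_index(2, 0, 0))  # 18: first cell of the top layer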

Here are three example games in which the actions were randomly generated:

Game 1

Game 2

Game 3

How to Use

The environment is a two-player game, where Blue denotes Player 1 and Red denotes Player 2. The state and action
spaces cover all 27 positions: 9 for the first layer, 9 for the second, and 9 for the third.

The state has three possible integer values for each position: -1 for the opponent, 0 for empty, and 1 for the current player. These values hold regardless of which player is to move.

The reward is a two-value list where the first index holds the reward for Player 1 and the second the reward for Player 2; the reward should only be used after the game is completed. Players are rewarded for winning, with extra points for winning quickly, and are penalized for losing, with a larger penalty for losing quickly. In addition, because Player 1 has a significant advantage from moving first, Player 1 is penalized more heavily when losing to Player 2 than Player 2 is when losing to Player 1, and Player 2 is rewarded more heavily for a win than Player 1 is. Here are the current rewards:

Turns Taken   Player 1 Win (P1 Reward / P2 Penalty)   Player 2 Win (P1 Penalty / P2 Reward)
<= 3          +20 / -10                               -20 / +40
<= 5          +18 / -9                                -18 / +36
<= 7          +16 / -8                                -16 / +32
<= 9          +14 / -7                                -14 / +28
Else          +10 / -5                                -10 / +20

For example, after a game where Player 1 has won within three turns, Player 1 is rewarded 20 points while Player 2 is penalized 10 points. On the other hand, if Player 2 wins within 7 turns, then Player 1 is penalized 16 points while Player 2 is rewarded 32 points.
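As a quick sanity check on the table, here is a small sketch that reproduces the reward lookup. The function name and structure are illustrative only (the environment computes the reward internally), but the numbers follow the table above:

# Illustrative re-implementation of the reward schedule above; not the environment's actual code.
def game_reward(turns_taken, winner):
    # Each row: (turn limit, (P1 reward on P1 win, P2 penalty on P1 win,
    #                         P1 penalty on P2 win, P2 reward on P2 win))
    schedule = [
        (3, (20, -10, -20, 40)),
        (5, (18, -9, -18, 36)),
        (7, (16, -8, -16, 32)),
        (9, (14, -7, -14, 28)),
    ]
    values = (10, -5, -10, 20)  # "Else" row
    for limit, row in schedule:
        if turns_taken <= limit:
            values = row
            break
    if winner == 1:
        return [values[0], values[1]]
    return [values[2], values[3]]

print(game_reward(3, winner=1))  # [20, -10]
print(game_reward(7, winner=2))  # [-16, 32]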

Here is an example of how to create the environment and play it with random agents:

import gym
import gym_tic_tac_toe3D
import matplotlib.pyplot as plt
env = gym.make("tic_tac_toe3D-v0")

games = 3  # best of three
player1_reward = 0
player2_reward = 0
for i in range(0, games):
    state = env.reset()
    done = False
    player = 1
    while not done:
        env.render(player=player)
        plt.pause(0.5)
        while True:
            action = env.action_space.sample()
            # Need to check if action is available in state space
            if state[action] == 0:
                break

        state, reward, done, info = env.step(action, player=player)
        # switch players
        if player == 1:
            player = 2
        else:
            player = 1
    # final render after completion of game to see final move
    env.render(player=player)
    plt.pause(1)
    player1_reward += reward[0]
    player2_reward += reward[1]
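Since this loop plays a best-of-three series, one simple option afterwards is to compare the accumulated rewards to decide the overall outcome (keeping in mind that the reward scale is intentionally asymmetric between the players), for example:

# Compare the accumulated rewards to decide the best-of-three series.
if player1_reward > player2_reward:
    print("Player 1 wins the series:", player1_reward, "vs", player2_reward)
elif player2_reward > player1_reward:
    print("Player 2 wins the series:", player2_reward, "vs", player1_reward)
else:
    print("Series tied at", player1_reward)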

Here is another example of a match between two agents, p1 and p2:

import gym
import gym_tic_tac_toe3D
import matplotlib.pyplot as plt
import numpy as np
def play(p1, p2, show=False, num_games=3):
    env = gym.make("tic_tac_toe3D-v0")
    player1_reward = 0
    player2_reward = 0
    for i in range(0, num_games):
        state = env.reset()
        done = False
        player = 1
        while not done:
            if show:
                env.render(player=player)
                plt.pause(0.5)
            if player == 1:
                move = p1.predict(state)  # returns softmax probabilities for all 27 possible actions
            else:
                move = p2.predict(state)  # returns softmax probabilities for all 27 possible actions
            # get the actions whose board positions are still empty
            viable_moves = np.where(state == 0)[0].tolist()
            # find the empty action with the largest probability
            action = viable_moves[np.argmax(move[0][viable_moves])]
            
            state, reward, done, info = env.step(action, player=player)
            # switch players
            if player == 1:
                player = 2
            else:
                player = 1
        if show:
            env.render(player=player)
            plt.pause(1)
        player1_reward += reward[0]
        player2_reward += reward[1]
    return player1_reward, player2_reward
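For instance, play could be exercised with a placeholder agent whose predict method returns a (1, 27) array of probabilities, matching the interface assumed above. The RandomAgent class below is hypothetical and only demonstrates the expected shape of the output:

import numpy as np

class RandomAgent:
    # Hypothetical stand-in agent: predict() returns a (1, 27) array of random
    # values normalized to sum to 1, mimicking a softmax over the 27 actions.
    def predict(self, state):
        probs = np.random.rand(1, 27)
        return probs / probs.sum()

p1_total, p2_total = play(RandomAgent(), RandomAgent(), show=False, num_games=3)
print("Player 1 total reward:", p1_total)
print("Player 2 total reward:", p2_total)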

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gym_tic_tac_toe3D-0.0.3.tar.gz (3.4 kB)

Uploaded Source

File details

Details for the file gym_tic_tac_toe3D-0.0.3.tar.gz.

File metadata

  • Download URL: gym_tic_tac_toe3D-0.0.3.tar.gz
  • Upload date:
  • Size: 3.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.6.1 pkginfo/1.7.1 requests/2.25.0 requests-toolbelt/0.9.1 tqdm/4.54.1 CPython/3.9.1

File hashes

Hashes for gym_tic_tac_toe3D-0.0.3.tar.gz
Algorithm Hash digest
SHA256 01f7c66267e5046c3603efb93f569fcd7c87c4e45ab2c443208c7ec2a9da835e
MD5 5e2d31f89d81eac6609b879038ee2cc3
BLAKE2b-256 329886bd2a633835945e8bee9435f1d145000f74c13b4a30a1743f66dbb95c9e

