
A highly customizable Genius Invokation TCG Simulator for AI training

Project description

Dottore Genius Invokation TCG Simulator


A Genshin Impact Genius Invokation TCG simulator intended to be used for AI training.

This package aims to make it easy for programmers to build things on top of Genius Invokation TCG, e.g. AI, desktop applications, websites...

The simulator is modeled as a finite state machine, where all game states are immutable. Optimizations are done to make sure immutability doesn't impact performance.

Basic rules of Genius Invokation TCG can be found on Fandom.

Installation

Please make sure your Python version is >= 3.10 before installing.

pip install dgisim

Simple Start With CLI

Run Locally

Once installed, you may want to try the CLI first to play the game.

You can do so with a simple Python program like this:

from dgisim import CLISession

session = CLISession()
session.run()

Run Remotely

You may try the CLI online on Google Colab

CLI Simple Usages

See the CLI's README for a showcase and explanation of the CLI.

Customize Player Agents (Important For AI Or Building Apps)

A player agent controls all actions of a player in a game.

To implement a player agent, all you need to do is inherit the abstract class PlayerAgent and implement the method choose_action().

A simple example is shown below. The implemented agent chooses 3 random cards to replace during the Card Select Phase, and keeps using normal attacks during the Action Phase until it no longer has enough dice.

class ExampleAgent(PlayerAgent):
    def choose_action(self, history: list[GameState], pid: Pid) -> PlayerAction:
        latest_game_state: GameState = history[-1]
        game_mode: Mode = latest_game_state.get_mode()
        curr_phase: Phase = latest_game_state.get_phase()

        if isinstance(curr_phase, game_mode.card_select_phase):
            # swap out 3 randomly chosen hand cards during the Card Select Phase
            cards_to_select_from: Cards = latest_game_state.get_player(pid).get_hand_cards()
            _, selected_cards = cards_to_select_from.pick_random_cards(num=3)
            return CardsSelectAction(selected_cards=selected_cards)

        elif isinstance(curr_phase, game_mode.action_phase):
            me: PlayerState = latest_game_state.get_player(pid)
            active_character: Character = me.just_get_active_character()
            dices: ActualDices = me.get_dices()
            # check if dices are enough for normal attack
            normal_attack_cost = active_character.skill_cost(CharacterSkill.NORMAL_ATTACK)
            dices_to_use = dices.basically_satisfy(normal_attack_cost)
            if dices_to_use is not None:
                # normal attack if dices can be found to pay for normal attack
                return SkillAction(
                    skill=CharacterSkill.NORMAL_ATTACK,
                    instruction=DiceOnlyInstruction(dices=dices_to_use),
                )
            return EndRoundAction()  # end round otherwise

        else:
            raise NotImplementedError(f"actions for {curr_phase} not defined yet")

The example above manually checks whether the dice are enough for a certain action, which is straightforward but takes time if you want to exhaust all options. Instead, a GameState can return an ActionGenerator object that provides you with all valid actions to choose from. More documentation about ActionGenerator will be added later.

You can find more example implementations of PlayerAgent in dgisim/src/agents.py. The RandomAgent in agents.py is built on the ActionGenerator mentioned above to make random but valid decisions.
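
For illustration, a rough sketch of what an ActionGenerator-driven choose_action() could look like is shown below. The method names action_generator(), filled(), choices(), choose() and generate_action() are assumptions made for this sketch rather than confirmed API; see RandomAgent in dgisim/src/agents.py for how it is actually done.

import random

class SketchRandomAgent(PlayerAgent):
    def choose_action(self, history: list[GameState], pid: Pid) -> PlayerAction:
        latest_game_state: GameState = history[-1]
        # NOTE: the method names below are assumptions for illustration only;
        # refer to RandomAgent in dgisim/src/agents.py for the real usage.
        act_gen = latest_game_state.action_generator(pid)  # assumed accessor
        # an ActionGenerator is filled in step by step until it holds enough
        # choices to produce a complete, valid PlayerAction
        while not act_gen.filled():                        # assumed method
            choice = random.choice(act_gen.choices())      # assumed method
            act_gen = act_gen.choose(choice)               # assumed method
        return act_gen.generate_action()                   # assumed method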

Once you have defined your own player agent, you can test it against the RandomAgent:

# generates a random initial game state with random decks
init_game_state = GameState.from_default()
# forms a `game`; YourCustomAgent is Player 1, RandomAgent is Player 2
game_state_machine = GameStateMachine(init_game_state, YourCustomAgent(), RandomAgent())
# runs the game and prints who wins
game_state_machine.run()
# gets full history of the game
history: tuple[GameState, ...] = game_state_machine.get_history()
# gets only history of game states that are right before a player action
act_history: tuple[GameState, ...] = game_state_machine.get_action_history()
# any GameState can be printed with nice formatting directly
print(history[-1])

Features

This simulator is modeled as a finite state machine, which means any intermediate state can stand alone and be used to proceed to other states.

The GameState class represents a game state in the state machine. It uses the passed-in Phase object to determine how to transition to the next state, which means the game flow is highly customizable. (The default Mode and some Heated Battle Modes are implemented already.)

Everything in a GameState object is immutable, so traversing game history and exploring different branches of future possibilities is not error-prone. The simulator is optimized for immutability: unchanged data is shared among neighbouring game states.

GameState implements __eq__ and __hash__, enabling you to use any game state as a key in a dictionary and to discover that game states on different 'game branches' are actually the same.
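
For example, you can use game states as dictionary keys to detect when the same state shows up more than once. A minimal sketch, reusing the history tuple from the GameStateMachine example above:

# map each distinct game state to the index where it first appeared;
# equal states (even ones reached via different branches) collapse to one key
first_seen: dict[GameState, int] = {}
for i, game_state in enumerate(history):
    if game_state not in first_seen:
        first_seen[game_state] = i

print(f"{len(history)} states in history, {len(first_seen)} distinct states")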

An ActionGenerator can be returned by any valid GameState to help generate valid player actions.

Development Milestones

Currently a full game can be played with any combination of the characters and cards implemented.

  • Implement all game phases (Action Phase, End Phase...)
  • Implement all cards (59/200 implemented) (details)
  • Implement all characters with their talent cards (16/54 implemented) (details)
  • Implement all reactions, death handling, revival handling etc.
  • Implement all game logics to support the implemented cards and characters
  • Implement interactive CLI for better debugging experience
  • Ensure 99% unittest coverage checking behaviour of characters and cards
  • Implement lazy player agent for minimal testing purposes
  • Implement random player agent for testing purposes
  • Implement player action validity checker
  • Implement player action choices provider

Future Plans

I plan to implement a simple cross-platform GUI for the simulator, but that will live in a separate repo.

Once this project is done, I'll read relevant papers and develop an AI for this game. The AI is intended to be used for learning strategies and building decks, not for playing against another player directly.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dgisim-0.3.1.tar.gz (94.5 kB)

Uploaded Source

Built Distribution

dgisim-0.3.1-py3-none-any.whl (113.7 kB)

Uploaded Python 3

File details

Details for the file dgisim-0.3.1.tar.gz.

File metadata

  • Download URL: dgisim-0.3.1.tar.gz
  • Upload date:
  • Size: 94.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for dgisim-0.3.1.tar.gz
Algorithm Hash digest
SHA256 d6d8f9285a7f271f5969c3393c8c8c649f3fec86e90651975dd3d6cff121e522
MD5 9c96891a5a398dbfae0dfe5a250397b6
BLAKE2b-256 084b765d0ae13032151d0539d8f474ffca6fc0dd7342c0b28d87f0c5d3d9ad8c


File details

Details for the file dgisim-0.3.1-py3-none-any.whl.

File metadata

  • Download URL: dgisim-0.3.1-py3-none-any.whl
  • Upload date:
  • Size: 113.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for dgisim-0.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 b76e18471e88be8c496b36e8bb1f74e46add7c05ad938346630eb03b84fb2b56
MD5 fecf18f39815a39444df44ee2e37b7f7
BLAKE2b-256 2ded589f95c9d528c885134ba895540e4d3349e91afb6637411ad46a67897886

