Python package that helps to quickly implement MCTS to solve reinforcement learning problems.
mcts-simple
mcts-simple is a Python 3 library that allows reinforcement learning problems to be solved easily with its implementations of Monte Carlo Tree Search.
Monte Carlo Tree Search (MCTS)
MCTS identifies the most promising moves at each state by playing many episodes (playouts/rollouts) of random actions from that state. The final game result of each episode is then used to update the weights of all nodes traversed during that episode, increasing the probability of choosing actions that yield higher immediate and potential rewards.
There are 4 stages to MCTS:
- Selection
  - Traverse the search tree from the root node to a leaf node, selecting the most promising child node at each step. "Leaf node" here refers to a node that has not yet gone through the expansion stage, rather than its traditional definition of "a node without child nodes".
- Expansion
  - If the leaf node does not yield an outcome for the episode (e.g. win/lose/draw), create at least one child node for it, and choose one of the created child nodes.
- Simulation
  - Complete one episode starting from the chosen child node, choosing random actions for future states. An episode is only completed when an outcome can be yielded from it.
- Backpropagation
  - Use the outcome yielded from the simulated episode in stage 3 to update the information in all traversed nodes.
Note:
- We assume that states are unique.
- The root node's score is almost never evaluated; at most, only its number of visits "n" is used.
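The four stages above can be sketched in a self-contained toy example. This is not mcts-simple's implementation, just an illustrative loop over a made-up single-player counting game (reach exactly 5 by adding 1 or 2 per move):

```python
import math
import random

# Toy single-player game: start at 0, add 1 or 2 per move;
# reward 1 for landing exactly on the target, 0 for overshooting.
TARGET = 5
ACTIONS = (1, 2)

def is_terminal(state):
    return state >= TARGET

def reward(state):
    return 1.0 if state == TARGET else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}   # action -> child Node
        self.visits = 0      # "n": number of times this node was traversed
        self.wins = 0.0      # cumulative playout reward

def ucb1(child, parent, c=math.sqrt(2)):
    if child.visits == 0:
        return float("inf")  # unvisited children are explored first
    return child.wins / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def run_iteration(root):
    # 1. Selection: descend to a leaf (here: an unexpanded node),
    #    always picking the most promising child.
    node = root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ucb1(ch, node))
    # 2. Expansion: create child nodes unless the node is terminal.
    if not is_terminal(node.state):
        for a in ACTIONS:
            node.children[a] = Node(node.state + a, parent=node)
        node = random.choice(list(node.children.values()))
    # 3. Simulation: play random moves until an outcome is reached.
    state = node.state
    while not is_terminal(state):
        state += random.choice(ACTIONS)
    outcome = reward(state)
    # 4. Backpropagation: update every traversed node with the outcome.
    while node is not None:
        node.visits += 1
        node.wins += outcome
        node = node.parent

root = Node(0)
for _ in range(2000):
    run_iteration(root)
# After training, the most visited root child is the recommended move.
best = max(root.children, key=lambda a: root.children[a].visits)
```

Every iteration runs all four stages once; after enough iterations, visit counts at the root concentrate on the stronger action.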
Upper Confidence bounds applied to Trees (UCT)
UCT, a variation of MCTS, is often used instead of vanilla MCTS for a few reasons, mainly:
- Vanilla MCTS focuses entirely on exploitation, while UCT balances exploration and exploitation.
- Vanilla MCTS may favour a losing move despite the presence of one or a few forced refutations; UCT attempts to deal with this limitation.
UCT uses the UCB1 formula to evaluate actions at each state. The exploration parameter c in the UCB1 formula is theoretically equal to sqrt(2), but it can be changed to fit your needs.
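As a sketch (not necessarily mcts-simple's exact code), the UCB1 score for a child node with w cumulative wins over n visits, whose parent has N visits, is w/n + c·sqrt(ln N / n):

```python
import math

def ucb1(wins: float, visits: int, parent_visits: int, c: float = math.sqrt(2)) -> float:
    """UCB1 score: exploitation term (wins/visits) plus exploration term."""
    if visits == 0:
        return float("inf")  # unvisited nodes are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

During selection, the child with the highest UCB1 score is chosen; a larger c explores more, a smaller c exploits more.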
How to use mcts-simple
mcts-simple only supports Python 3.7 and above.
Dependencies
mcts-simple requires the following libraries:
- jsonpickle
- tqdm
User installation
In your terminal:
pip install mcts-simple
In your Python file:
from mcts_simple import *
Creating your own game environment
For the progress bar to work best, use Jupyter Notebook or another platform that supports the carriage return "\r".
Create a class for your game by inheriting the Game class from mcts-simple, and define the following methods for your class:
Method | What it does
---|---
__init__(self) | Initialises the object.
render(self) | Returns a visual representation of the current state of the game.
get_state(self) | Returns the current state of the game.
number_of_players(self) | Returns the number of players.
current_player(self) | Returns the player that is taking an action this turn.
possible_actions(self) | Returns the actions that can be taken this turn.
take_action(self, action) | Player takes action. It is best to check that action is in possible actions (see source code). Action should be of string type to support the play_with_human() method from MCTS. Note that even if the action leads to the game ending, the next player should still be chosen.
delete_last_action(self) | The last action is removed, and the current state is reverted to the previous state.
has_outcome(self) | Returns True if the game has ended, and False if the game is still ongoing.
winner(self) | Returns None if the game is a draw, or the winner if one of the players won. It is best to check that the outcome is defined.
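As an illustration, here is a hypothetical environment for a small game of Nim (10 sticks, two players alternately take 1-3 sticks, whoever takes the last stick wins) following the method table above. In real use the class would inherit Game from mcts_simple; it is written standalone here so the sketch is self-contained:

```python
class Nim:
    # In real use: class Nim(Game), inheriting Game from mcts_simple.
    def __init__(self):
        self.sticks = 10          # sticks left on the table
        self.player = 0           # player whose turn it is (0 or 1)
        self.history = []         # (player, action) pairs, for undo

    def render(self):
        return f"Sticks left: {self.sticks} | Player {self.player} to move"

    def get_state(self):
        return (self.sticks, self.player)

    def number_of_players(self):
        return 2

    def current_player(self):
        return self.player

    def possible_actions(self):
        # actions are strings, as required for play_with_human()
        return [str(n) for n in (1, 2, 3) if n <= self.sticks]

    def take_action(self, action):
        if action in self.possible_actions():
            self.history.append((self.player, action))
            self.sticks -= int(action)
            self.player = 1 - self.player  # next player, even if the game just ended

    def delete_last_action(self):
        player, action = self.history.pop()
        self.sticks += int(action)
        self.player = player

    def has_outcome(self):
        return self.sticks == 0

    def winner(self):
        if not self.has_outcome():
            return None
        # Nim has no draws: the player who took the last stick wins.
        return self.history[-1][0]
```

Note how take_action still advances to the next player even on the final move, and how delete_last_action restores both the stick count and the player to move.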
After creating your environment, you're basically done! You can train and export your MCTS with just 3 lines of code (assuming your game environment class is named YourGame):
mcts = MCTS(YourGame())
mcts.run(iterations = 50000)
mcts._export("mcts.json")
You can import your trained MCTS with another 3 lines of code:
mcts = MCTS(YourGame())
mcts._import("mcts.json")
mcts.self_play(activation = "best")
If you have any issues in creating your environment, you can browse the source code or check out the examples provided here.
Contributions
I would appreciate any contributions to this project, since I am currently the only one maintaining this module. This is also the first public Python package I have written, so if you think something is wrong with my code, you can open an issue and I'll try my best to resolve it!
There are also other variants of MCTS, so feel free to give some pointers on how they should be implemented.
To Do
- Resolve issue with numpy arrays as state: https://stackoverflow.com/questions/66847901/python-array-issue-with-jsonpickle
- Implement tree for MC-RAVE (Rapid Action Value Estimation for MCTS).
- Implement example with DNN + MCTS (using a specialised evaluation formula) for chess.
- Implement conversion from OpenAI-Gym environment to Game class in mcts-simple.
- Implement alpha-beta pruning.