MineRL environment and data loader for reinforcement learning from human demonstration in Minecraft
The MineRL Python Package
Python package providing easy-to-use Gym environments and a simple data API for the MineRLv0 dataset.
We develop MineRL in our spare time; please consider supporting us on Patreon <3
With JDK 8 installed, run this command:
```shell
pip3 install --upgrade minerl
```
Running an environment:
```python
import minerl
import gym

env = gym.make('MineRLNavigateDense-v0')

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    # One can also take a no_op action with
    # action = env.action_space.noop()
    obs, reward, done, info = env.step(action)
```
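The loop above follows the standard Gym `reset`/`step` protocol: `step` returns an observation, a reward, a done flag, and an info dict. As an illustration that runs without Minecraft or MineRL installed, here is a minimal stand-in environment driven by the same loop (the `ToyEnv` class, its episode length, and its observation/action contents are invented for this sketch and are not part of the MineRL API):

```python
import random

class ToyEnv:
    """A tiny stand-in for a Gym environment (NOT part of MineRL)."""

    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        # Reset the step counter and return the initial observation.
        self.t = 0
        return {'pov': 0}

    def sample_action(self):
        # Stand-in for env.action_space.sample().
        return {'forward': random.randint(0, 1)}

    def step(self, action):
        # Advance one step; the episode ends after episode_len steps.
        self.t += 1
        obs = {'pov': self.t}
        reward = 1.0
        done = self.t >= self.episode_len
        return obs, reward, done, {}

env = ToyEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    obs, reward, done, info = env.step(env.sample_action())
    total_reward += reward
print(total_reward)  # 5.0 with the default episode length
```

The real MineRL environments use the same four-tuple return, so this loop transfers unchanged once `env = gym.make(...)` is substituted.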
Sampling the dataset:
```python
import minerl

# YOU ONLY NEED TO DO THIS ONCE!
minerl.data.download('/your/local/path')

data = minerl.data.make(
    'MineRLObtainDiamond-v0',
    data_dir='/your/local/path')

# Iterate through a single epoch gathering sequences of at most 32 steps
for current_state, action, reward, next_state, done \
        in data.sarsd_iter(num_epochs=1, max_sequence_len=32):
    # Print the POV @ the first step of the sequence
    print(current_state['pov'][0])
    # Print the final reward of the sequence!
    print(reward[-1])
    # Check if final (next_state) is terminal.
    print(done[-1])
    # ... do something with the data.
    print("At the end of trajectories the length "
          "can be < max_sequence_len", len(reward))
```
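Because the final chunk of each trajectory can be shorter than `max_sequence_len`, code that batches these sequences for training typically pads them to a fixed length and keeps a mask of the real steps. A minimal pure-Python sketch (the `pad_sequence` helper is our own, not part of the minerl API):

```python
def pad_sequence(rewards, max_sequence_len, pad_value=0.0):
    """Right-pad a (possibly short) reward sequence to a fixed length,
    returning the padded list and a mask marking the real steps."""
    n = len(rewards)
    padded = list(rewards) + [pad_value] * (max_sequence_len - n)
    mask = [1] * n + [0] * (max_sequence_len - n)
    return padded, mask

# A short final chunk from the end of a trajectory:
padded, mask = pad_sequence([0.0, 0.0, 1.0], max_sequence_len=5)
print(padded)  # [0.0, 0.0, 1.0, 0.0, 0.0]
print(mask)    # [1, 1, 1, 0, 0]
```

The mask lets a loss function ignore the padded positions when averaging over time steps.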
Visualizing the dataset:
```shell
# Make sure your MINERL_DATA_ROOT is set!
export MINERL_DATA_ROOT='/your/local/path'

# Visualizes a random trajectory of MineRLObtainDiamondDense-v0
python3 -m minerl.viewer MineRLObtainDiamondDense-v0
```
If you're here for the MineRL competition, please check the main competition website.