# MLPro - Machine Learning Professional

A Synoptic Framework for Standardized Machine Learning Tasks in Python
MLPro was developed in 2021 by the Automation Technology and Learning Systems team at Fachhochschule Südwestfalen. It provides complete, standardized, and reusable functionalities to support scientific research, educational tasks, and industrial projects in machine learning.
The first version of MLPro provides a standardized Python package for reinforcement learning (RL) and game-theoretical (GT) approaches, including environments, algorithms, multi-agent RL (MARL), model-based RL (MBRL), and more. In addition, wrapper classes for established third-party packages enable users to reuse those packages within MLPro.
## Main Features
- Test-driven development (CI/CD concept)
- Clean, object-oriented code
- Ready-to-use functionalities
- Usable in scientific, industrial, and educational contexts
- Extensible, maintainable, understandable
- Attractive UI support (available soon)
- Reuse of available state-of-the-art implementations
- Clear documentation
## Documentation

The documentation is available at https://mlpro.readthedocs.io/
## Installation

### Prerequisites

MLPro requires Python 3.7+.

```shell
pip install mlpro
```
## Example

This example shows how to train an agent in the CartPole-v1 environment using the Stable-Baselines3 wrapper:
```python
import gym
from stable_baselines3 import PPO
from mlpro.rl.models import *
from mlpro.wrappers.openai_gym import WrEnvGYM2MLPro
from mlpro.wrappers.sb3 import WrPolicySB32MLPro


class MyScenario(RLScenario):

    C_NAME = 'Matrix'

    def _setup(self, p_mode, p_ada, p_logging):
        # 1. Set up the environment: wrap the native Gym environment
        gym_env = gym.make('CartPole-v1')
        self._env = WrEnvGYM2MLPro(gym_env, p_logging=p_logging)

        # 2. Set up the policy: wrap a Stable-Baselines3 PPO policy
        policy_sb3 = PPO(
            policy="MlpPolicy",
            n_steps=5,
            env=None,
            _init_setup_model=False)

        policy_wrapped = WrPolicySB32MLPro(
            p_sb3_policy=policy_sb3,
            p_observation_space=self._env.get_state_space(),
            p_action_space=self._env.get_action_space(),
            p_ada=p_ada,
            p_logging=p_logging)

        # 3. Set up a single agent with the wrapped policy
        return Agent(
            p_policy=policy_wrapped,
            p_envmodel=None,
            p_name='Smith',
            p_ada=p_ada,
            p_logging=p_logging)


# Instantiate and run the training
training = RLTraining(
    p_scenario_cls=MyScenario,
    p_cycle_limit=1000,
    p_max_adaptations=0,
    p_max_stagnations=0,
    p_visualize=True,
    p_logging=Log.C_LOG_ALL)

training.run()
```
## Implemented Wrappers

| Feature | Status |
| --- | --- |
| OpenAI Gym | :heavy_check_mark: |
| Stable-Baselines3 | :heavy_check_mark: |
| PettingZoo | :heavy_check_mark: |
| Hyperopt | :heavy_check_mark: |
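As a minimal sketch of the wrapper concept, reusing only the classes and methods that appear in the training example above (passing `p_logging` is assumed to be optional here), a native Gym environment can be made MLPro-compatible in a single call:

```python
import gym
from mlpro.wrappers.openai_gym import WrEnvGYM2MLPro

# Wrap a native Gym environment so that MLPro components
# (agents, scenarios, training) can interact with it
mlpro_env = WrEnvGYM2MLPro(gym.make('CartPole-v1'))

# The wrapped environment exposes MLPro's space API,
# as used when setting up a policy in the example above
state_space = mlpro_env.get_state_space()
action_space = mlpro_env.get_action_space()
```

The other wrappers follow the same pattern: a thin adapter class translates between the third-party package's interface and MLPro's standardized one.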
## Maintainers

MLPro is currently maintained by Detlef Arend, M Rizky Diprasetya, Steve Yuwono, and William Budiatmadjaja.
## How to Contribute

If you want to contribute, please read CONTRIBUTING.md.