A package for training AI agents to play retro games using natural language
🎮 Casino of Life
A revolutionary framework for training AI agents in retro fighting games using natural language interactions. Casino of Life combines reinforcement learning with natural language processing to create an intuitive interface for training game-playing AI agents.
🌟 Features
Natural Language Training Interface
- Train AI agents using natural conversations
- Explain strategies in plain English
- Get real-time feedback on training progress
- Interactive chat with CaballoLoko, your AI training assistant
Supported Games
- Mortal Kombat II (Genesis)
- Street Fighter II (Coming Soon)
- More fighting games to be added
Advanced Training Capabilities
- Multiple training strategies (Aggressive, Defensive, Balanced)
- Various learning policies (PPO, A2C, DQN, MLP)
- Custom reward functions
- Save and load training states
- Real-time training visualization
🚀 Quick Start
Installation
pip install casino-of-life
Basic Usage
from casino_of_life.agents import DynamicAgent, CaballoLoko
from casino_of_life.environment import RetroEnv
# Initialize CaballoLoko for training guidance
caballo_loko = CaballoLoko()
response = caballo_loko.chat("Train Liu Kang to be aggressive with fireballs")
# Create environment and agent
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)

agent = DynamicAgent(
    env=env,
    policy='PPO',
    learning_rate=0.0003
)

# Start training with natural language guidance
agent.train(
    timesteps=100000
)
🛠 Advanced Features
Flexible Reward System
from casino_of_life.reward_evaluators import (
    BasicRewardEvaluator,
    StageCompleteRewardEvaluator,
    MultiObjectiveRewardEvaluator
)

# Create custom reward evaluator
reward_system = MultiObjectiveRewardEvaluator([
    BasicRewardEvaluator(health_reward=1.0, damage_penalty=-1.0),
    StageCompleteRewardEvaluator(stage_complete_reward=100.0)
])
Environment Configuration
from casino_of_life.environment import RetroEnv

# Create custom environment
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2  # Support for multiplayer
)
Advanced Training Control
from casino_of_life.agents import DynamicAgent
from casino_of_life.client_bridge import RewardEvaluatorManager

# Initialize reward manager
reward_manager = RewardEvaluatorManager()
reward_manager.register_evaluator("tournament", reward_system)

# Create dynamic agent with custom rewards
agent = DynamicAgent(
    env=env,
    reward_evaluator=reward_manager.get_evaluator("tournament"),
    frame_stack=4,
    learning_rate=0.0003
)
🎯 Use Cases
Game Developers
- Test game balance
- Create sophisticated AI opponents
- Generate training data for game testing
AI Researchers
- Experiment with reinforcement learning in complex environments
- Study human-AI interaction through natural language
- Develop and test new training strategies
Gaming Community
- Create custom AI training scenarios
- Share and compare training results
- Contribute to the evolution of game AI and the Casino of Life framework
🔧 Technical Details
Environment Features
- Stochastic frame skipping for realistic gameplay
- Configurable observation processing (84x84 grayscale)
- 4-frame stacking for temporal information
- Multi-player support (up to 2 players)
- Automatic garbage collection for memory management
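The 4-frame stacking listed above can be illustrated generically. The sketch below is a minimal, stdlib-only illustration of the idea (keeping the last n observations so the agent sees short-term motion); it is not the package's internal implementation, and the `FrameStack` name is hypothetical:

```python
from collections import deque

class FrameStack:
    """Keep the last `n` observations so the agent sees short-term motion."""
    def __init__(self, n=4):
        self.n = n
        self.frames = deque(maxlen=n)

    def reset(self, first_frame):
        # Duplicate the first frame so the stack is full from step one.
        self.frames.clear()
        for _ in range(self.n):
            self.frames.append(first_frame)
        return list(self.frames)

    def step(self, frame):
        # Newest frame pushes out the oldest one.
        self.frames.append(frame)
        return list(self.frames)

stack = FrameStack(n=4)
print(stack.reset("f0"))  # ['f0', 'f0', 'f0', 'f0']
print(stack.step("f1"))   # ['f0', 'f0', 'f0', 'f1']
```

In practice each frame would be an 84x84 grayscale array rather than a string, but the buffering logic is the same.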
Reward System
- Modular reward evaluators
- Health-based reward calculation
- Stage completion bonuses
- Multi-objective reward combination
- Custom reward scaling
- Real-time reward adjustment
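The multi-objective combination pattern can be sketched in plain Python. The class names and the `info` dictionary keys below are illustrative only (they are not the package's actual API); the point is how independent evaluators are summed into one reward signal:

```python
class HealthRewardEvaluator:
    """Reward health gained, penalize damage taken."""
    def __init__(self, health_reward=1.0, damage_penalty=-1.0):
        self.health_reward = health_reward
        self.damage_penalty = damage_penalty

    def evaluate(self, info):
        return (self.health_reward * info["health_gained"]
                + self.damage_penalty * info["damage_taken"])

class StageBonusEvaluator:
    """Flat bonus when a stage is completed."""
    def __init__(self, bonus=100.0):
        self.bonus = bonus

    def evaluate(self, info):
        return self.bonus if info.get("stage_complete") else 0.0

class MultiObjectiveEvaluator:
    """Sum the rewards from several independent evaluators."""
    def __init__(self, evaluators):
        self.evaluators = evaluators

    def evaluate(self, info):
        return sum(e.evaluate(info) for e in self.evaluators)

combined = MultiObjectiveEvaluator([HealthRewardEvaluator(), StageBonusEvaluator()])
# 5*1.0 + 2*(-1.0) + 100.0 = 103.0
print(combined.evaluate({"health_gained": 5, "damage_taken": 2, "stage_complete": True}))
```

Because each evaluator is self-contained, new objectives (combo bonuses, time penalties) can be added without touching the existing ones.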
Training Pipeline
- Integration with Stable-Baselines3
- Support for multiple RL algorithms
- Customizable training parameters
- Progress tracking and checkpointing
- Memory-efficient design
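Interval-based checkpointing, one part of the pipeline above, can be sketched independently of Stable-Baselines3's own callback machinery. The `CheckpointTracker` name is hypothetical; this only illustrates the save-every-N-timesteps decision:

```python
class CheckpointTracker:
    """Decide when to save a checkpoint, based on a fixed timestep interval."""
    def __init__(self, save_every=10_000):
        self.save_every = save_every
        self.last_saved = 0

    def should_save(self, timestep):
        if timestep - self.last_saved >= self.save_every:
            self.last_saved = timestep
            return True
        return False

tracker = CheckpointTracker(save_every=10_000)
saves = [t for t in range(0, 50_001, 2_500) if tracker.should_save(t)]
print(saves)  # [10000, 20000, 30000, 40000, 50000]
```

With Stable-Baselines3 the equivalent behavior is typically handled by its callback system during `agent.train(...)`.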
📊 Web Interface
Training Dashboard
- Real-time training metrics
- Agent management
- Model versioning
- Interactive chat
- Training configuration
API Integration
from casino_of_life.web import TrainingServer

# Start training server
server = TrainingServer()
server.start()

# WebSocket connection for real-time updates
@server.on_message
async def handle_training_request(message):
    training_id = await server.start_training(message)
    return {"training_id": training_id}
🤝 Contributing
We welcome contributions! See our Contributing Guide for details.
Development Setup
git clone https://github.com/Cimai-Decentralized-Games/casino-of-life.git
cd casino-of-life
pip install -r requirements.txt
📚 Documentation
Full documentation available at https://docs.cimai.biz
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- G4F for GPT models and other Providers
- Stable-Retro for game emulation
- Stable-Baselines3 for RL implementations
- The fighting game community for inspiration and support
Made with ❤️ by Cimai Decentralized Games
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file casino_of_life-0.2.1.tar.gz.
File metadata
- Download URL: casino_of_life-0.2.1.tar.gz
- Upload date:
- Size: 45.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 746c7b3693a8cb8e6518ec6b61013dd248743f40cd89981ae6aca8ffdf89138e |
| MD5 | 7dcd6b0e03fcee9dbb9d651e85a51fca |
| BLAKE2b-256 | 3a5e05bac721869990c823b68ed0406b7bc3dd3bd4648066c74ba6349c9c1352 |
File details
Details for the file casino_of_life-0.2.1-py3-none-any.whl.
File metadata
- Download URL: casino_of_life-0.2.1-py3-none-any.whl
- Upload date:
- Size: 57.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e372314c7d2d0c45adbf33bdccf861bb7d7f2c94cdccc7004e75197ee1181e16 |
| MD5 | 60829581751ab29a978e79f2ba052341 |
| BLAKE2b-256 | fab8c9c56fbb98029a43b702e13914be6eb9abdc3a30759c3b1805cc43993003 |