A package for training AI agents to play retro games using natural language

🎮 Casino of Life

A framework for training AI agents in retro fighting games through natural language interaction. Casino of Life combines reinforcement learning with natural language processing to provide an intuitive interface for training game-playing AI agents.

🌟 Features

Natural Language Training Interface

  • Train AI agents using natural conversations
  • Explain strategies in plain English
  • Get real-time feedback on training progress
  • Interactive chat with CaballoLoko, your AI training assistant

Supported Games

  • Mortal Kombat II (Genesis)
  • Street Fighter II (Coming Soon)
  • More fighting games to be added

Advanced Training Capabilities

  • Multiple training strategies (Aggressive, Defensive, Balanced)
  • Various learning policies (PPO, A2C, DQN, MLP)
  • Custom reward functions
  • Save and load training states
  • Real-time training visualization

🚀 Quick Start

Installation

pip install casino-of-life

Basic Usage

from casino_of_life.agents import DynamicAgent, CaballoLoko
from casino_of_life.environment import RetroEnv

# Initialize CaballoLoko for training guidance
caballo_loko = CaballoLoko()
response = caballo_loko.chat("Train Liu Kang to be aggressive with fireballs")

# Create environment and agent
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)
agent = DynamicAgent(
    env=env,
    policy='PPO',
    learning_rate=0.0003
)

# Start training with natural language guidance
agent.train(
    timesteps=100000
)

🛠 Advanced Features

Flexible Reward System

from casino_of_life.reward_evaluators import (
    BasicRewardEvaluator,
    StageCompleteRewardEvaluator,
    MultiObjectiveRewardEvaluator
)

# Create custom reward evaluator
reward_system = MultiObjectiveRewardEvaluator([
    BasicRewardEvaluator(health_reward=1.0, damage_penalty=-1.0),
    StageCompleteRewardEvaluator(stage_complete_reward=100.0)
])
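To make the multi-objective idea concrete, here is a minimal, self-contained sketch of how several evaluators can be combined by summing their individual rewards. The class names, `evaluate` signature, and state dictionaries below are illustrative assumptions, not the package's actual implementation:

```python
# Illustrative sketch of multi-objective reward combination (not the
# package's internal code): each evaluator scores a state transition,
# and the combined reward is the sum of the parts.

class HealthReward:
    def __init__(self, health_reward=1.0, damage_penalty=-1.0):
        self.health_reward = health_reward
        self.damage_penalty = damage_penalty

    def evaluate(self, prev, curr):
        # Reward damage dealt to the opponent; penalize damage taken.
        dealt = prev["enemy_health"] - curr["enemy_health"]
        taken = prev["health"] - curr["health"]
        return dealt * self.health_reward + taken * self.damage_penalty

class StageCompleteReward:
    def __init__(self, stage_complete_reward=100.0):
        self.stage_complete_reward = stage_complete_reward

    def evaluate(self, prev, curr):
        return self.stage_complete_reward if curr["stage_complete"] else 0.0

class MultiObjectiveReward:
    def __init__(self, evaluators):
        self.evaluators = evaluators

    def evaluate(self, prev, curr):
        return sum(e.evaluate(prev, curr) for e in self.evaluators)

reward = MultiObjectiveReward([HealthReward(), StageCompleteReward()])
prev = {"health": 100, "enemy_health": 100, "stage_complete": False}
curr = {"health": 90, "enemy_health": 60, "stage_complete": True}
print(reward.evaluate(prev, curr))  # 40 dealt - 10 taken + 100 bonus = 130.0
```

Summation keeps each objective independently tunable: scaling one evaluator's weights changes its influence without touching the others.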

Environment Configuration

from casino_of_life.environment import RetroEnv

# Create custom environment
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2  # Support for multiplayer
)

Advanced Training Control

from casino_of_life.agents import DynamicAgent
from casino_of_life.client_bridge import RewardEvaluatorManager

# Initialize reward manager
reward_manager = RewardEvaluatorManager()
reward_manager.register_evaluator("tournament", reward_system)

# Create dynamic agent with custom rewards
agent = DynamicAgent(
    env=env,
    reward_evaluator=reward_manager.get_evaluator("tournament"),
    frame_stack=4,
    learning_rate=0.0003
)
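The `RewardEvaluatorManager` above follows a simple registry pattern: evaluators are stored under a name and looked up at agent-construction time. A generic sketch of that pattern (illustrative only, not the package's code):

```python
# Generic registry pattern behind a reward evaluator manager
# (illustrative sketch, not the package's actual implementation).

class EvaluatorRegistry:
    def __init__(self):
        self._evaluators = {}

    def register_evaluator(self, name, evaluator):
        self._evaluators[name] = evaluator

    def get_evaluator(self, name):
        try:
            return self._evaluators[name]
        except KeyError:
            # Fail loudly with the unknown name rather than returning None.
            raise KeyError(f"No evaluator registered under {name!r}")

registry = EvaluatorRegistry()
registry.register_evaluator("tournament", {"type": "multi_objective"})
print(registry.get_evaluator("tournament"))  # {'type': 'multi_objective'}
```

Keeping evaluators behind names lets you swap reward schemes per scenario ("tournament", "endless", ...) without changing agent code.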

🎯 Use Cases

Game Developers

  • Test game balance
  • Create sophisticated AI opponents
  • Generate training data for game testing

AI Researchers

  • Experiment with reinforcement learning in complex environments
  • Study human-AI interaction through natural language
  • Develop and test new training strategies

Gaming Community

  • Create custom AI training scenarios
  • Share and compare training results
  • Contribute to the evolution of game AI and the Casino of Life framework

🔧 Technical Details

Environment Features

  • Stochastic frame skipping for realistic gameplay
  • Configurable observation processing (84x84 grayscale)
  • 4-frame stacking for temporal information
  • Multi-player support (up to 2 players)
  • Automatic garbage collection for memory management
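The observation pipeline listed above (grayscale conversion, 84x84 resize, 4-frame stacking) can be sketched in a few lines of NumPy. The helper names and the nearest-neighbor resize are simplifications for illustration; the environment's internals may differ:

```python
import numpy as np

# Sketch of the observation pipeline: RGB frame -> grayscale ->
# 84x84 -> stack of the last 4 frames (illustrative, not the
# environment's actual code).

def to_grayscale(frame):
    # frame: (H, W, 3) RGB array -> (H, W) luminance.
    return frame @ np.array([0.299, 0.587, 0.114])

def resize_84(frame):
    # Crude nearest-neighbor resize to 84x84, good enough for a sketch.
    h, w = frame.shape
    rows = np.arange(84) * h // 84
    cols = np.arange(84) * w // 84
    return frame[rows][:, cols]

class FrameStack:
    def __init__(self, k=4):
        self.k = k
        self.frames = []

    def push(self, frame):
        obs = resize_84(to_grayscale(frame))
        self.frames.append(obs)
        if len(self.frames) > self.k:
            self.frames.pop(0)
        # Pad with copies of the newest frame until the stack fills.
        stack = self.frames + [self.frames[-1]] * (self.k - len(self.frames))
        return np.stack(stack)  # shape (4, 84, 84)

stack = FrameStack()
obs = stack.push(np.zeros((224, 320, 3)))  # Genesis-sized RGB frame
print(obs.shape)  # (4, 84, 84)
```

Stacking the last four frames is what gives a feed-forward policy access to temporal information such as projectile direction and movement speed.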

Reward System

  • Modular reward evaluators
  • Health-based reward calculation
  • Stage completion bonuses
  • Multi-objective reward combination
  • Custom reward scaling
  • Real-time reward adjustment

Training Pipeline

  • Integration with Stable-Baselines3
  • Support for multiple RL algorithms
  • Customizable training parameters
  • Progress tracking and checkpointing
  • Memory-efficient design
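The checkpointing item above boils down to an interval-based callback, the same pattern Stable-Baselines3 exposes as `CheckpointCallback`. A stripped-down sketch with the saving logic stubbed out (names here are illustrative):

```python
# Generic interval-checkpointing callback (the pattern behind
# Stable-Baselines3's CheckpointCallback); saving is stubbed.

class CheckpointCallback:
    def __init__(self, save_freq, save_fn):
        self.save_freq = save_freq
        self.save_fn = save_fn  # e.g. lambda step: agent.save(f"ckpt_{step}")
        self.num_timesteps = 0

    def on_step(self):
        self.num_timesteps += 1
        if self.num_timesteps % self.save_freq == 0:
            self.save_fn(self.num_timesteps)

saved = []
cb = CheckpointCallback(save_freq=1000, save_fn=saved.append)
for _ in range(3500):
    cb.on_step()
print(saved)  # [1000, 2000, 3000]
```

Checkpointing every N steps bounds the work lost to a crash and leaves a trail of models to compare when tuning rewards.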

📊 Web Interface

Training Dashboard

  • Real-time training metrics
  • Agent management
  • Model versioning
  • Interactive chat
  • Training configuration

API Integration

from casino_of_life.web import TrainingServer

# Start training server
server = TrainingServer()
server.start()

# WebSocket connection for real-time updates
@server.on_message
async def handle_training_request(message):
    training_id = await server.start_training(message)
    return {"training_id": training_id}
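Independently of the package's server, the request-handler shape above is plain asyncio: each incoming message kicks off a training job and immediately returns an id the client can use to follow updates. A self-contained sketch with the training launch stubbed (the `jobs` dict and handler names are illustrative assumptions):

```python
import asyncio
import uuid

# Minimal sketch of the async request/response pattern above:
# each message registers a (stubbed) training job and returns
# an id the client can poll or subscribe to for updates.

jobs = {}

async def start_training(message):
    training_id = str(uuid.uuid4())
    # A real server would launch the training loop as a background task here.
    jobs[training_id] = {"config": message, "status": "running"}
    return training_id

async def handle_training_request(message):
    training_id = await start_training(message)
    return {"training_id": training_id}

response = asyncio.run(handle_training_request({"game": "MortalKombatII-Genesis"}))
print(sorted(response))  # ['training_id']
```

Returning an id instead of blocking keeps the WebSocket responsive: long-running training proceeds server-side while metrics stream back under that id.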

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

Development Setup

git clone https://github.com/Cimai-Decentralized-Games/casino-of-life.git
cd casino-of-life
pip install -r requirements.txt

📚 Documentation

Full documentation available at https://docs.cimai.biz

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • G4F for GPT models and other Providers
  • Stable-Retro for game emulation
  • Stable-Baselines3 for RL implementations
  • The fighting game community for inspiration and support

Made with ❤️ by Cimai Decentralized Games
