
A package for training AI agents to play retro games using natural language

🎮 Casino of Life

A revolutionary framework for training AI agents in retro fighting games using natural language interactions. Casino of Life combines reinforcement learning with natural language processing to create an intuitive interface for training game-playing AI agents.

🌟 Features

Natural Language Training Interface

  • Train AI agents using natural conversations
  • Explain strategies in plain English
  • Get real-time feedback on training progress
  • Interactive chat with CaballoLoko, your AI training assistant

Supported Games

  • Mortal Kombat II (Genesis)
  • Street Fighter II (Coming Soon)
  • More fighting games to be added

Advanced Training Capabilities

  • Multiple training strategies (Aggressive, Defensive, Balanced)
  • Multiple RL algorithms (PPO, A2C, DQN) with configurable policy networks (MLP)
  • Custom reward functions
  • Save and load training states
  • Real-time training visualization
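A custom reward function typically compares game variables between consecutive frames and returns a scalar. As a standalone illustration of the idea (the function name and the `info` keys here are hypothetical, not the package's actual API or info schema):

```python
# Hypothetical custom reward: compare game variables between two frames.
# The dict keys ("health", "enemy_health") are illustrative only.
def custom_reward(info, prev_info, damage_weight=1.0, hurt_weight=1.0):
    """Reward damage dealt to the opponent, penalize damage taken."""
    damage_dealt = prev_info["enemy_health"] - info["enemy_health"]
    damage_taken = prev_info["health"] - info["health"]
    return damage_weight * damage_dealt - hurt_weight * damage_taken

# 30 damage dealt, 10 taken -> reward of 20.0
reward = custom_reward({"health": 90, "enemy_health": 70},
                       {"health": 100, "enemy_health": 100})
```

Weighting the two terms differently is one way to push an agent toward the aggressive or defensive strategies mentioned above.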

🚀 Quick Start

Installation

pip install casino-of-life

Basic Usage

from casino_of_life.agents import DynamicAgent, CaballoLoko
from casino_of_life.environment import RetroEnv

# Initialize CaballoLoko for training guidance
caballo = CaballoLoko()
response = caballo.chat("Train Liu Kang to be aggressive with fireballs")

# Create the environment and agent
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)
agent = DynamicAgent(
    env=env,
    policy='PPO',
    learning_rate=0.0003
)

# Start training with natural language guidance
agent.train(
    timesteps=100000
)

🛠 Advanced Features

Flexible Reward System

from casino_of_life.reward_evaluators import (
    BasicRewardEvaluator,
    StageCompleteRewardEvaluator,
    MultiObjectiveRewardEvaluator
)

# Create a custom reward evaluator
reward_system = MultiObjectiveRewardEvaluator([
    BasicRewardEvaluator(health_reward=1.0, damage_penalty=-1.0),
    StageCompleteRewardEvaluator(stage_complete_reward=100.0)
])

Environment Configuration

from casino_of_life.environment import RetroEnv

# Create a custom environment with multiplayer support
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)

Advanced Training Control

from casino_of_life.agents import DynamicAgent
from casino_of_life.client_bridge import RewardEvaluatorManager

# Initialize reward manager
reward_manager = RewardEvaluatorManager()
reward_manager.register_evaluator("tournament", reward_system)

# Create dynamic agent with custom rewards
agent = DynamicAgent(
    env=env,
    reward_evaluator=reward_manager.get_evaluator("tournament"),
    frame_stack=4,
    learning_rate=0.0003
)
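Conceptually, a multi-objective evaluator can combine its sub-evaluators by summing the reward each one assigns to a step. A standalone sketch of that pattern (the class and method names mirror the package's evaluators but this is an illustration, not the library's implementation):

```python
# Standalone sketch of multi-objective reward combination: each
# sub-evaluator scores a step, and the combined reward is their sum.
class HealthEvaluator:
    def __init__(self, health_reward=1.0, damage_penalty=-1.0):
        self.health_reward = health_reward
        self.damage_penalty = damage_penalty

    def evaluate(self, info, prev_info):
        dealt = prev_info["enemy_health"] - info["enemy_health"]
        taken = prev_info["health"] - info["health"]
        return self.health_reward * dealt + self.damage_penalty * taken

class StageEvaluator:
    def __init__(self, stage_complete_reward=100.0):
        self.stage_complete_reward = stage_complete_reward

    def evaluate(self, info, prev_info):
        advanced = info["stage"] > prev_info["stage"]
        return self.stage_complete_reward if advanced else 0.0

class MultiObjectiveEvaluator:
    def __init__(self, evaluators):
        self.evaluators = evaluators

    def evaluate(self, info, prev_info):
        return sum(e.evaluate(info, prev_info) for e in self.evaluators)
```

Summation keeps each objective independently tunable: scaling one evaluator's weights changes its influence without touching the others.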

🎯 Use Cases

Game Developers

  • Test game balance
  • Create sophisticated AI opponents
  • Generate training data for game testing

AI Researchers

  • Experiment with reinforcement learning in complex environments
  • Study human-AI interaction through natural language
  • Develop and test new training strategies

Gaming Community

  • Create custom AI training scenarios
  • Share and compare training results
  • Contribute to the evolution of game AI and the Casino of Life framework

🔧 Technical Details

Environment Features

  • Stochastic frame skipping for realistic gameplay
  • Configurable observation processing (84x84 grayscale)
  • 4-frame stacking for temporal information
  • Multi-player support (up to 2 players)
  • Automatic garbage collection for memory management
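The 84x84 grayscale observation with 4-frame stacking is the standard Atari-style preprocessing pipeline. A minimal numpy sketch of the idea (assuming RGB input frames; this is not the package's code):

```python
import numpy as np
from collections import deque

def to_gray_84(frame):
    """Grayscale an RGB frame and nearest-neighbour resize it to 84x84."""
    gray = frame.mean(axis=2).astype(np.uint8)
    rows = np.linspace(0, gray.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)]

class FrameStack:
    """Keep the last k processed frames as one (k, 84, 84) observation."""
    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def observe(self, frame):
        processed = to_gray_84(frame)
        self.frames.append(processed)
        while len(self.frames) < self.frames.maxlen:
            self.frames.append(processed)  # pad at episode start
        return np.stack(self.frames)
```

Stacking the last four frames gives the policy the temporal information (movement direction, attack timing) that a single frame cannot convey.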

Reward System

  • Modular reward evaluators
  • Health-based reward calculation
  • Stage completion bonuses
  • Multi-objective reward combination
  • Custom reward scaling
  • Real-time reward adjustment

Training Pipeline

  • Integration with Stable-Baselines3
  • Support for multiple RL algorithms
  • Customizable training parameters
  • Progress tracking and checkpointing
  • Memory-efficient design
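Checkpointing in a training loop generally means periodically serializing the model parameters and step counter so training can resume after a crash. A minimal standalone sketch of that pattern (independent of the package's actual checkpoint format):

```python
import os
import pickle
import tempfile

def save_checkpoint(path, step, params):
    """Write training state atomically so a crash can't corrupt it."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "params": params}, f)
    os.replace(tmp, path)  # atomic on POSIX and Windows

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Example: checkpoint every 500 steps inside a training loop.
with tempfile.TemporaryDirectory() as d:
    ckpt = os.path.join(d, "agent.ckpt")
    for step in range(1, 1001):
        params = {"weights": [step * 2]}  # stand-in for model weights
        if step % 500 == 0:
            save_checkpoint(ckpt, step, params)
    state = load_checkpoint(ckpt)
```

The write-to-temp-then-rename step matters: a checkpoint interrupted mid-write leaves the previous valid file untouched.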

📊 Web Interface

Training Dashboard

  • Real-time training metrics
  • Agent management
  • Model versioning
  • Interactive chat
  • Training configuration

API Integration

from casino_of_life.web import TrainingServer

# Start training server
server = TrainingServer()
server.start()

# WebSocket handler for real-time updates

@server.on_message
async def handle_training_request(message):
    training_id = await server.start_training(message)
    return {"training_id": training_id}
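The structure of the `message` handled above is not documented here; a training request might plausibly carry the agent and training configuration as JSON, along these purely illustrative lines:

```json
{
  "game": "MortalKombatII-Genesis",
  "state": "tournament",
  "policy": "PPO",
  "timesteps": 100000,
  "strategy": "aggressive",
  "character": "Liu Kang"
}
```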

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

Development Setup

git clone https://github.com/Cimai-Decentralized-Games/casino-of-life.git
cd casino-of-life
pip install -r requirements.txt

📚 Documentation

Full documentation is available at https://docs.cimai.biz

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • G4F for GPT models and other providers
  • Stable-Retro for game emulation
  • Stable-Baselines3 for RL implementations
  • The fighting game community for inspiration and support

Made with ❤️ by Cimai Decentralized Games
